Ensure HFile.Reader objects are closed

Performance workloads are seeing mxosrvrs with a large accumulation

of open sockets, and a resultant decrease in performance. The cause

was HFile.Reader objects used in estimating a table's row count not

being closed. This fix executes a close() on the Reader in a finally

clause, such that it will always be executed even if an exception

is thrown.
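
The pattern the fix describes can be sketched as follows. This is a minimal illustration with hypothetical names (FakeReader stands in for HFile.Reader; it is not the actual Trafodion code): the close() in the finally block runs whether estimation succeeds or throws.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ReaderCloseSketch {
    // Stand-in for HFile.Reader: anything with a close() method.
    static class FakeReader {
        final AtomicBoolean closed = new AtomicBoolean(false);
        long estimateRowCount() { throw new RuntimeException("I/O error"); }
        void close() { closed.set(true); }
    }

    static long safeEstimate(FakeReader reader) {
        try {
            return reader.estimateRowCount();
        } catch (RuntimeException e) {
            return -1L;         // estimation failed; caller falls back to a default
        } finally {
            reader.close();     // always runs, even when an exception is thrown
        }
    }

    public static void main(String[] args) {
        FakeReader r = new FakeReader();
        System.out.println(safeEstimate(r) + " closed=" + r.closed.get());
    }
}
```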

Even with this fix, it was noted that early phases of workloads showed

a significant increase in compile times when row count estimation was

enabled. To address this, the estimated counts were saved so that they

would not be repeated as long as the NATable remained in cache. Also,

estimation was avoided completely for metadata tables. With these

changes, my own tests showed a very low overhead for estimation, on

the order of 60-70ms per user table in the workload, regardless of

query complexity or how many queries referenced the table. However,

the results of performance benchmarks executed on clusters did not

improve, and so this change also includes turning off the cqd that enables

row count estimation for the optimizer (a separate cqd controls use

of the feature for Update Statistics, and it is still on by default).

Future work will attempt to address the performance impact of turning

on the cqd.

Change-Id: Ied0369c8def5062d69766198155f8e309bae1ff8

Bulk unload fixes and rework

- rework

- fix for bug 1387377

Change-Id: I7ad6115ab50f291e2ad97a042ec2b8fbc9d256bf

Fix for bug in determining range partition boundary value

When the boundary value of a range partition function is determined by

decoding the encoded value stored as the split boundary in hbase, some mistakes

were being made. This problem was seen for nullable varchar columns. The

symptom is a crash during compilation with an overflowing stack. The problem

was that the buffer provided to the runtime code to decode the encoded value

was laid out incorrectly. The null bytes were interchanged with the var-length

indicator bytes. Thanks to Hans for suggesting this fix.

Patch Set 2:

Changed the test case as suggested by Hans. Also including a new fix.

This fix allows salted indexes to be created when the corresponding

base table has key columns of character type. The fix also allows a salted index

to be created on columns whose column number

in the base table is larger than the number of columns in the index itself.

Change-Id: Id0bd2b187500d283860d0d12ecb9d8d743b429e9

Fixed external links and CSS stylesheet for the DCS docbook

Cherry picking changes submitted to proposed/0.8.3 by Susan

In the following XML source files in /dcs/src/docbkx, I updated

the external links (adding xlink:show="new") so that the links

now open in a separate window from the DCS book:

-- book.xml

-- configuration.xml

-- preface.xml

-- troubleshooting.xml

In book.xml, I removed unnecessary and confusing links from the

book's title and from the "Trafodion DCS" logo in the subtitle.

I changed the link in the Abstract so that the word "Trafodion"

links to the Trafodion wiki. I also added a missing period at

the end of that paragraph. I removed the question mark (?) from

the title "DCS" in the Overview section.

In preface.xml, I removed the link from "DCS version" to the

"Connectivity Subsystem" section in the Trafodion wiki (given

that the wiki content is subject to change, resulting in a broken

link) and replaced it with a more appropriate cross-reference to

the download site for the DCS package.

Per Arvind's advice, I updated the "Run modes" and "Example

Configurations" sections in configuration.xml to include the

alternate approach of specifying the host name and number of


I also updated the freebsd_docbook.css stylesheet in

/dcs/src/site/resources/css so that the DCS docbook automatically

appears in sans serif typeface (such as Arial), improving its

readability and look-and-feel.

(cherry picked from commit 4730742c501f5149fb453c9e7bfe5b4cc0e831c4)

Removed dos eol from freebsd_docbook.css as suggested by Susan.

Change-Id: Idd6f20123d628c7fd3a90b523af785575fa87ffe

Bulk load/unload fixes

- changes to support native compression

for Cloudera and Hortonworks distributions (tested on


- rework from previous checkin.

- fix for bug 1387202 which caused bulk unload to hang

when the target location is invalid.

Change-Id: Ia6046dfb2b5ff2f986b8306c26a991991a3da780

Merge "Changes to support OSS poc."

Change jdbc_test directory structure

Change jdbc_test directory structure to org/trafodion/jdbc_test

Change-Id: Ifb6e3a8e96c766c3b0c16656a14f6caf5c411f90

Merge "Make corrections to HFile path"

Merge "DBSecurity: REVOKE ROLE, credential propagation, +"

Merge "Update TOOLSDIR references for Hadoop, HBase, Hive dependencies"

Merge "fix #lp1384430: full sort order for plan1 of NJ"

Make corrections to HFile path

To estimate the row count for an HBase table, the HFiles are

accessed directly from the HDFS file system. When the Trafodion

name of the table includes a delimited id, the quotes must not

be included in the node of the HDFS path representing the

qualified table name. In addition, the hbase.rootdir property

in hbase-site.xml may consist solely of a path rather than a

full URL. It was previously assumed that a full URL would be

present, and the value of the property was used to construct

a URI object. When a string consisting of only a file

path is passed to the URI constructor, a NullPointerException

is thrown (instead of the expected URISyntaxException), causing

the path we construct to the HFile to be incorrect. The code

was changed to utilize either a file path or a valid URI as the

value of the property.
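
A hedged sketch of the kind of handling described above (hypothetical helper, not the actual Trafodion code): accept hbase.rootdir as either a bare file path or a full URL, since a bare path parses as a URI with no scheme rather than raising URISyntaxException.

```java
import java.net.URI;

public class RootDirSketch {
    // Return the file-system path portion of hbase.rootdir, whether the
    // property is a plain path ("/hbase") or a full URL
    // ("hdfs://namenode:8020/hbase").
    static String rootPath(String rootdir) {
        URI uri = URI.create(rootdir);
        // A bare path parses as a scheme-less URI; use it verbatim.
        if (uri.getScheme() == null) {
            return rootdir;
        }
        return uri.getPath();   // strip scheme and authority from a full URL
    }

    public static void main(String[] args) {
        System.out.println(rootPath("/hbase"));
        System.out.println(rootPath("hdfs://namenode:8020/hbase"));
    }
}
```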

Change-Id: If35d9da7aaab815a9c1d550bc505d86f0cbcf611

Closes-Bug: 1384959

Changes to support OSS poc.

This checkin contains multiple changes that were added to support OSS poc.

These changes are enabled through a special cqd mode_special_4 and not

yet externalized for general use.

A separate spec contains details of these changes.

These changes have been contributed and pre-reviewed by Suresh, Jim C,

Ravisha, Mike H, Selva and Khaled.

All dev regressions have been run and passed.

Change-Id: I2281c1b4ce7e7e6a251bbea3bf6dc391168f3ca3

Merge "Eliminate minor estimated nullcount inaccuracies"

fix #lp1384430: full sort order for plan1 of NJ

Change-Id: I7f3162af34d16bc5e801305c24438d098a9b09ef

DBSecurity: REVOKE ROLE, credential propagation, +


1) Corrects a CLI/Executor overwrite problem and removes workaround

code in PrivMgr. Launchpad bug #1371176.

2) REVOKE ROLE now lists referencing and referenced objects when a

revoke request is refused due to dependencies.

3) REVOKE ROLE now reports that the specified grant cannot be found

when grantor has not granted the role to the user. Previously the

misleading error "Not Authorized" was issued, which was confusing when

the user was DB__ROOT. The same change was made for REVOKE COMPONENT

PRIVILEGE. A similar change will be made in the future for revoking

object privileges.

4) REVOKE ROLE now considers grants to PUBLIC before concluding a

revoke would require a dependent object to be dropped.

5) User credentials are now propagated to the compiler process.

Launchpad bug 1373112.


If the priv/role, grantee, grantor tuple does not exist, REVOKE

ROLE/REVOKE COMPONENT PRIVILEGE now reports error 1018: Grant of role or

privilege <name> from <grantor> to <grantee> not found, revoke request


When REVOKE ROLE detects a dependent object, error message 1364 now

reports the referencing and the referenced object.

Cannot revoke role <role-name>. Object <referencing-object> depends on

privileges on object <referenced-object>.

Details for user credential propagation:

The propagate user credentials code has only been partially implemented.

The existing code sends the user ID to the first compiler process.

Other compiler processes that were started would not get the connected user ID;

instead, the DB__ROOT user ID became the user by default. Therefore,

privilege checks were succeeding when they should have failed.

User credentials consist of an integer user ID and a username. The

existing code only passed the user ID. The compiler process would

then do a metadata look-up to get the username. If we kept this

model, then we would get into an infinite loop:

When the compiler process received the user ID, it did a

metadata read to get the associated username. After reading the

metadata, both the username and user ID were set in context globals.

The metadata lookup code will start another arkcmp process for the

compilation request. The compilation would then start a compiler

process. That compiler process would start another compiler process, and so on.


The solution is to send both the username and user ID to the compiler

process. Both values are known at the time the compiler process is

started. This alleviates the need for a database look-up when the

compiler process starts. To do this a new session attribute was

created - SESSION_DATABASE_USER. This session attribute sends both the

user ID and username to the compiler process during startup processing.
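
The idea of packing both values into one session attribute can be sketched like this (hypothetical encoding and names; the actual SESSION_DATABASE_USER wire format is not shown in this message): the child compiler process decodes both fields at startup and needs no metadata lookup.

```java
public class SessionUserSketch {
    // Hypothetical encoding: pack user ID and username into a single
    // attribute value passed to the compiler process at startup.
    static String encode(int userId, String userName) {
        return userId + "," + userName;
    }

    static int decodeId(String attr) {
        return Integer.parseInt(attr.substring(0, attr.indexOf(',')));
    }

    static String decodeName(String attr) {
        return attr.substring(attr.indexOf(',') + 1);
    }

    public static void main(String[] args) {
        String attr = encode(33333, "DB__ROOT");
        // Both fields are recovered without a database lookup.
        System.out.println(decodeId(attr) + " " + decodeName(attr));
    }
}
```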

Once we were able to start a compiler process and store a user ID other

than DB__ROOT in the Context globals, another similar infinite loop

occurred during privilege checking. For example, a showddl command

starts a compiler process when extracting privilege information. The

compiler calls checkPrivileges to make sure the current user has

privileges. The checkPrivileges statement makes a metadata request

that requires a compilation. This starts up another compiler process.

This compiler process is sent the metadata request. When compiling the

metadata request in the new compiler process, checkPrivileges is called

which starts a compiler process, …

This worked previously because the user passed was DB__ROOT, and the code

in checkPrivileges is short circuited and the metadata call is avoided.

A fix was made to set the parserflag (INTERNAL_QUERY_FROM_EXEUTIL) before the

metadata request is performed. This fix requires that the file

"SqlParserGlobalsCmn.h" be included in additional files. Including this

file needs to be done with care. In order to get everything to compile,

we changed where this file was included in several places.

Once all these changes were made, the envvar: DBUSER_DEBUG now works.

If set, detailed information about how users are sent to different

processes is displayed.

Change-Id: If7538eee38178c2345fe418172c6196b25a20b33

LP Bug 1380733 - Phoenix tests fail with error 73

Analysis has shown that we are generating a commit request result of

COMMIT_CONFLICT in the transactional TrxRegionEndpoint coprocessor

for a transaction. A COMMIT_CONFLICT result to a commit request

ultimately results in an error 73 being returned by SQL to the client.

In this case, a previous commit request, for the same transaction,

in the same region, had resulted in a successful commit analysis of COMMIT_OK.

The subsequent commit request, for the same transaction, is unexpected.

We are continuing to analyze the transactional prepare/commit process

to determine why we have this additional request. For now, we have

added a workaround to the "hasConflict()" transactional processing

to recognize this condition and allow the initial conflict testing results

(in this case a COMMIT_OK) to be the returned result.

Change-Id: I8d35574384a0f64eac650a875827e19031bfd453

Change test reference from com.hp to org.trafodion

Change-Id: Idbd4d7f167f51be7d833771216e540193a788bc1

Merge "rework fix to move global variabls to optDefaults"

Merge "Fix for LP bug 1384506 - get version of metadata returns junk"

Eliminate minor estimated nullcount inaccuracies

When the relative frequency of null values is estimated via

sampling when getting an estimated row count for an HBase

table, there is an (unlikely) situation in which the null

count could be thrown off. If the primary key consists of a

single column, and some row has all null values except for

the primary key, those nulls will be counted incorrectly.

This was caused by comparing two successive KeyValue positions

using < rather than <=.

In addition, if the end of the HFile is reached while taking

the sample, any nulls at the end of the last row will not be

counted, and this has been fixed as well.

Closes-Bug: #1383835

Change-Id: Ia449d1379d851e8df0f7811e835b5730851c33e2

Fix bug 1383405 and bug 1383597

Bug 1383405: trafodion_mods now checks whether it is on a single-node

cluster before creating log directories.

Bug 1383597: traf_hortonworks_uninstaller now looks to make sure

there is something to delete before deleting the file.

Change-Id: Ic8a1761d50dbf34aad0d042a09bdd5a11a74d387

Merge "Bulk Load fixes"

rework fix to move global variables to optDefaults

Change-Id: I70303eb6c2587fe7c151e0737977d2e1802054cf

Merge "Cleanup"

Merge "Fix for UnknownTransaction on aborted txn"

Merge "Delimited col name fix, and backout of upsert fix"


a) remove support for CDH4.2

b) install_local_hadoop: remove names, incorrect comments,

and non-existent hbase-extensions JAR

setup in

Change-Id: I7574cd780524f78e8ab47d764dbe6fd1d4d9e612

many scanner improvements

+ added sudo access check at the very beginning of the scanner,

because some of the configured checks/commands require it

===> This includes a check for requiretty being disabled,

including a special error message.

+ added check for ntpd service

+ added check for iptables firewall

===> I realize that the long one-line script is hard to read! Sorry about that.

I added a backlog task to change the configuration format to allow

multi-line check commands, so that more complex check commands

(e.g., short scripts) are easy to add and read.

+ removed usage of grep -P

+ fixed the ValidNodeName check to report results for the correct node,

by checking the actual hostname -s output on each node

rather than the --nodes parameter value

+ removed ValidClusterName and HomeDirectoryNotOnNFSDisk checks

because they may cause confusion;

it's best to restrict these checks to the trafodion_setup script

+ removed implicit assumption that string compare will always be done

for (eq, ne) and integer compare will always be done for (lt, le, ge, gt)

+ fixed summary output format to be less confusing

+ removed default for the --nodes parameter; this is now a required parameter,

just like in the trafodion_setup script

Change-Id: I96ca15f40e08c1a702b0c2754d1e47da3d03f96a

Fix for LP bug 1384506 - get version of metadata returns junk

Change-Id: I818165d7497b2661fed3400dce8f6e8857607dd8