Change hard-coded version id to a dynamically determined one

Added code to parse the Trafodion version id from the

hbase-trx-n.n.n.jar filename.

Change-Id: Ia8cdf8a6942e6af0bf16a055203fcd874700bb73

Merge "Fix for bug 1353044 - Core dump when creating JNI object"

Fix for bug 1353044 - Core dump when creating JNI object

This problem happens during T2 multi thread testing, because the static

variable JavaMethods_ is not protected. Because of this a second thread

is accessing JavaMethods_ before the first thread has finished initializing it.

Fix is to add mutex to protect JavaMethods_ initialization.

Change-Id: Icc1adf143df2c36394a77f4e753a63ef64e4d08b

Renaming Cloudera/Hortonworks setup scripts.

Changed from traf_ to trafodion_ to fit our naming convention.

Edit: Removed trafodion reference.

Change-Id: I6450504e1a61e979220ccfbefff33f5a5a87e358

Changing log file location message.

Changed the log file location message to make it clearer where files are


Change-Id: Ie39c061cc5e7bdf64644c90bbeb66a7be2bd9343

Use AQR to retry SQLCODE -1254

This change adds SQLCODE -1254 to the list of errors retried with the

action of clearing the metadata cache.

Change-Id: Iec5a79a7ea7c835c73f4751594e17c8396a46cc3

Closes-Bug: #1354150

Bulk load changes

They include:

- changing the workspace folder to /bulkload instead of /user/trafodion/bulkload,

so that the same path is used for all 3 environments: dev, test, and clusters

- fix for hive/test017 issue which happens when the produced hfiles need to be

split in order to fit inside the region boundaries of a salted table. The fix

consists of creating the temp space for the processing with the right


Change-Id: I34982158ea365c66a76d37e6f1b4919d8fa94932

Added log files, bug fixes, changed setup scripts

Fixed return error bug.

Created log file directory on remote nodes so that log files can be

placed in them on the remote nodes.

Added log files to trafodion_setup and trafodion_installer.

Separated hadoop distribution setup from trafodion setup.

traf_ambari_setup and traf_cloudera_setup are still available in the

/tools directory and will not run trafodion_setup.

EDIT 1: Removed options from trafodion_installer that are not needed.

Added a check to make sure FQDNs or IP addresses are not used as node names.

This is a check that will be needed until bug 1347971 is fixed, then

this check will need to be removed.

Change-Id: Iefb590e04434128e7deff3fdc0973be78ab41df2

Merge "Test for [first N] issue in ExHdfsScanTcb (hive)"

Merge "Changes required for enabling Tlog by default"

Fix to reduce buffer sizes used by Hive scan.

A previous fix I had committed for this problem was causing certain bulk load

queries to not get a proper plan. Thanks to Khaled and Hans for uncovering

this issue. This change set undoes the previous change and implements the same

logic in the binder. This time the change is made only for the bulk loader.

If it is felt to be safe it can be extended for all inserts/upserts later.

Change-Id: I8e12a435008227aede6ba6bcd333db1dca2ba2f2

First fix of typos and misspellings

Small change to start build jobs

Third patch set adds sql/executor/cluster.h with 2 actual

typos fixed, copyright date changed to match the "Created:"


In fourth patch set, white space changes have been reduced,

just some blank lines are removed.

Change-Id: I05b3b55780badc0cb6671f9455f4244b6c78d5ac

Merge "Fix for bug 1326458: 8813 error accessing result set from T2"

Merge "fix for bug 1354619"

Test for [first N] issue in ExHdfsScanTcb (hive)

This change simply adds a testcase for processing of GET_N

in ExHdfsScanTcb, in cases where the operator scans multiple

parts of an HDFS file.

Change-Id: I78cf9d43eab57215f38dcf3e1fe7b2e18e8b1301

Merge "Fix for loader issue with hive varchar into Trafodion datetime column"

Merge "Fix [first N] issue in ExHdfsScanTcb (hive scan)"

Fix for loader issue with hive varchar into Trafodion datetime column

When loading a hive table that has a timestamp as a character into

a Trafodion target table that has a timestamp data type, we get an

error during optimization, complaining about the type incompatibility.


One issue is that we don't set up the update to select value id map

correctly, this map needs to map the target timestamp to the

cast(source col as timestamp), not the character source column.

Patch set 2, only check for charset conversion when both types are char.

Change-Id: Id599c1f4d9a563003fd394f87c070f257d94c4a7

Fix [first N] issue in ExHdfsScanTcb (hive scan)

This change corrects processing of GET_N in ExHdfsScanTcb, in cases

where the operator scans multiple parts of an HDFS file.

Closes-Bug: #1355477

Change-Id: I931d65a515a89c3e8a5bb12348978d7fd36f047c

Changes to support bulkload and DTM enabling Tlog

bulkload support

1) added code to copy the new trafodion-hbase-extensions jar file

2) added code to create two new folders under hdfs and set their permissions

3) added bulkload settings to hbase regionserver settings

DTM Tlog support

This is a coordinated change needed to support trafodion/core

change I9357aa30c201c8ec357cac7fad69101cd1bb3ff6. Added two new

hbase regionserver settings for hlog and tlog.

Fixed chmod error discovered during review.

Change-Id: I3747c5a4fac6d57d111e4268b9642b806c068974

Changes required for enabling Tlog by default

Change-Id: I9357aa30c201c8ec357cac7fad69101cd1bb3ff6

Adding support to use Hadoop 1.0 in a build environment

Steve Varnau found this issue when setting up a Hortonworks test machine.

HDP 1.3 uses Hadoop 1 interfaces and the libhdfs hdfsDelete() method

has only two parameters in Hadoop 1 while it has three parameters in

Hadoop 2.

Adding an environment variable to be able to use conditional compilation

for the calls to hdfsDelete().

In principle this means that we would need to produce two binaries, one

for Hadoop 1 and a second one for Hadoop 2, but I suspect that the same

binary will work ok for both. That's for two reasons: First, the value

of the extra parameter is irrelevant for our situation and, second, I

don't know when we actually delete an HDFS directory - maybe when we


Change-Id: Ib5d17406a26c98abe39a2130a0cf852e8c4347bb

fix for bug 1354619


fix the issue that causes the number of rows returned by

a select statement on a hive table to be less than the

actual number of rows.

Change-Id: I10eb59fd9443677f332dc6bca0ef1ca561cdf26d

Merge "Bulk load related fixes (snapshots and filter file)"

Fix for bug 1326458: 8813 error accessing result set from T2

The problem happens for explain statement because T2 driver does not

handle sql query type SQL_EXE_UTIL correctly. For sql query types

SQL_EXE_UTIL, when there are outputs, it should behave the same as

SQL_SELECT_NON_UNIQUE where ExecFetchClose is not called during the

execute of a result set and just execute is called.

Change-Id: I2e29a054c9afbf119d738eda3092710563295c8f

Fix for two hive scan issues

1) Hive string columns can be wider in their DDL definition than their target

column when inserting from hive. With the fix the output of hive scan will

be the narrower target column rather than the original wider hive source


2) option to ignore conversion error during hive/hdfs scan and continue to

next one. The cqd HDFS_READ_CONTINUE_ON_ERROR should be set to ON to enable

this option.

Change-Id: Id8afe497e6c043c4b7bae6d556cee76c42ab0163

Bulk load related fixes (snapshots and filter file)

Change-Id: I87ff3abb8c0995878c166f978b54fb1f8c2955f0

Merge "Trafodion bulk load changes"

Launchpad bug #1353573

Change-Id: I745739e5939d8001f10384e3c85e94967aae20cf

Trafodion bulk load changes

The changes include:

-A way to specify the maximum size of the Hfiles beyond which the file

will be split.

-Adding the "upsert using load ..." statement to run under the load utility

so that it can take advantage of disabling and populating indexes and so on.

The syntax is: load with upsert using load into <trafodion table> select ...

from <table>. "Upsert using load" can still be used separately from

load utility

-Checks in the compiler to make sure indexes and constraints are disabled

before running the "upsert using load" statement

-Moving seabase tests 015 and 017 to the hive suite as they use hive tables.

Change-Id: I80303e4471d2179718e050c98d954ef56cd4cc4f
