Trafodion

Fix for #1359358 dcs_installer server file format

Changed code to generate the new DCS servers file format of one line per node with a count of servers for that node, instead of the older format of listing a node multiple times.

Old format:

node1
node2
node3
node1
node2
node3
node1
node2

New format:

node1 3
node2 3
node3 2
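The old-to-new conversion can be sketched in Python (a hypothetical helper, not the actual dcs_installer code):

```python
def convert_servers_file(old_lines):
    """Collapse the old one-line-per-server format into the new
    'node count' format, preserving first-seen node order.
    (Hypothetical helper, not the actual dcs_installer code.)"""
    counts = {}                      # dicts are insertion-ordered in Python 3.7+
    for line in old_lines:
        node = line.strip()
        if node:
            counts[node] = counts.get(node, 0) + 1
    return [f"{node} {n}" for node, n in counts.items()]
```

Running it on the old-format list above yields exactly the new-format lines shown.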

Change-Id: Ice5ba9ced25ea61b6dea23a2f6036605dcd7a694

Bulk Unload/extract (prototype)

These changes add support for a bulk unload/extract (prototype) to unload data from Trafodion tables into HDFS. Bulk unload writes data in either compressed (gzip) or uncompressed format. When compression is specified, it takes place before the data buffers are written to files. Once the data unloading is done, the files are merged into one single file. If compression is specified, the data are unloaded into files in compressed format and then merged in compressed format; otherwise they are unloaded in uncompressed format and merged into one uncompressed file.

The unload syntax is:

UNLOAD [[WITH option] [, option] ...] INTO <hive-table> query_expression

where <hive-table> is a Hive table, and option can be:

- PURGEDATA FROM TARGET: When this option is specified, the files under <hive-table> are deleted.

- COMPRESSION GZIP: When this option is specified, gzip compression is used. The compression takes place in the hive-insert node, and data is written to disk in compressed format (gzip is the only supported compression for now).

- MERGE FILE <merged-file-path>: When this option is specified, the unloaded files are merged into one single file, <merged-file-path>. If compression is specified, the data are unloaded in compressed format and the merged file will be in compressed format as well.

- NO OUTPUT: If this option is specified, no status message is displayed.

Change-Id: Ifd6f543d867f29ee752bcb80020c5ad6c16b7277

    -82/+139   /sql/executor/ExFastTransport.cpp
    -0/+218    /sql/executor/SequenceFileReader.cpp
    … 16 more files in changeset.
Merge "Making Maven build output less verbose"

Fix for bugs 1359361 1360444 1359366 1360449

Bug 1359361: Scripts that require --nodes (trafodion_setup, ambari_uninstall, cloudera_uninstall, mapr_uninstall) now return an error if a node list has not been given.

Bug 1360444: pdsh commands are now run only after pdsh has been installed.

Bug 1359366: An error message telling the user to fix the http_proxy settings is now printed by trafodion_setup if a download fails.

Bug 1360449: Check for the existence of /etc/selinux/config before copying the file.

Change-Id: I2dedc18ac1825b80055fcb2b2e2edb28a7e53001

Merge "Prevent resetting the ASN to correct aging problem"

Making Maven build output less verbose

Maven emits warnings about using an environment variable as the version. We want to keep doing this, since we have multiple Maven projects that should all have the same version, stored centrally in the sqenvcom.sh file.

Suppressing most Maven output, including the warning, on stdout, but preserving the entire output in the log file (except for the "clean" target).

Change-Id: Ifee00e65971b20b485fc2eb65fff9135250d6fe3

SQL Compiler/Generator speed improvement

There were 7 places where the ItemExprTreeAsList::entries() routine was being called inside a for loop even though its return value did not change while going through the loop. For most xyz::entries() routines that is not significant, since they just return a value previously stored in an object. However, ItemExprTreeAsList::entries() actually traverses a node tree and counts the number of nodes. This set of changes moves the call to that routine outside the loop.

All dev regressions were run to ensure that the fix has no side effects.
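The shape of the fix, in a toy Python sketch (the class is a stand-in for ItemExprTreeAsList, not Trafodion code):

```python
class ItemList:
    """Toy stand-in for a container whose entries() call is O(n)."""
    def __init__(self, items):
        self._items = list(items)
        self.entries_calls = 0

    def entries(self):
        self.entries_calls += 1      # imagine a full tree traversal here
        return len(self._items)

    def at(self, i):
        return self._items[i]

def sum_entries_slow(tree):
    # Before the fix: entries() is re-evaluated on every iteration,
    # turning an O(n) loop into O(n^2) work.
    total, i = 0, 0
    while i < tree.entries():
        total += tree.at(i)
        i += 1
    return total

def sum_entries_hoisted(tree):
    # After the fix: the loop-invariant call is hoisted out of the loop.
    total = 0
    n = tree.entries()
    for i in range(n):
        total += tree.at(i)
    return total
```

Both functions return the same result; only the number of tree traversals differs.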

Files changed:

sql/generator/GenRelMisc.cpp
sql/optimizer/BindRelExpr.cpp
sql/optimizer/OptLogRelExpr.cpp
sql/parser/SqlParserAux.cpp

Change-Id: Ia5c9200805b839c0c6328e410a27d786e6136ba1

ODBC server compression ability

Change-Id: I41325c359e4c54239baa377e502cde1c6149c16d

    -2/+2      /conn/odbc/src/odbc/Common/FileSystemDrvr.cpp
    -4/+101    /conn/odbc/src/odbc/Common/TCPIPSystemSrvr.cpp
    -0/+6      /conn/odbc/src/odbc/Common/TCPIPSystemSrvr.h
    -3/+5      /conn/odbc/src/odbc/Common/TransportBase.h
    -0/+109    /conn/odbc/src/odbc/Common/compression.cpp
    -0/+46     /conn/odbc/src/odbc/Common/compression.h
ODBC driver compression

Change-Id: If9f87a6c7ba527098df5a80c0a53519cf3845c39

Merge "fix for upsert using load with indexes LP 1364575."

Merge "Cleanup scan related functions"

Merge "Fix #1336979, #1342954, #1340400, #1342945, and #1315194"

Merge "Closes-Bug: #1358983"

fix for upsert using load with indexes LP 1364575.

Change-Id: I6b01e9dd9cfbf274ffafc8ddea5a1066027c6372

Fix for traf_cloudera_mods Hbase error, other fixes

Added a check to determine the exact HBase name to use in traf_cloudera_mods when making the call to set HBase settings and restart HBase. (Formerly Cloudera's URL was .../hbase1; it is now .../hbase.)

Changed parsing in trafodion_mods to allow the Cloudera cluster name to be more than one word.

Moved the SQconfig check in trafodion_installer to after the file has been moved to the current directory if on a single node.

Changed an error message in trafodion_mods.

Change-Id: I052b06bbe0eac8dadf70f5af051a16358aa35c15

Fix for bug 1348211 Generator error 7000 "root can't produce these values"

For joins involving repartitioning of data to match the other table, we got this error in some cases, usually seen with salted tables. The fix is to avoid storing the PIVs (partition input values) in the logical scan node. Instead they are picked up in the preCodeGen phase, and this means that if the PIVs have to be rewritten, we pick up the new and correct values.

Patch set 3: Fixed regression issue by moving the PIV logic so that availableValues includes the PIVs.

Change-Id: Ia8f9a83894e504f8d65d37d1589760258cdb8976

Update release version to 0.9.0 (default release)

Just updating to keep in sync with the next Trafodion release number.

Change-Id: I14a9dc02514dfeea22c2fe080a851e0faf24d5a1

Update release version to 0.9.0

Also patching tests which query the database version. These return a different result now that core has changed to version 0.9.

Change-Id: Iecf8d31e252c1eb5e19c491f289b9c200c139a2d

Update release version to 0.9.0

Change-Id: I9e37698c598efac2bc1dac0ccb4aa66c7bf29de1

Prevent resetting the ASN to correct aging problem

Change-Id: I8b12fb9c282088708b897d864e383aec5bae1aae

    -1/+0      /sqf/src/seatrans/tm/hbasetmlib2/hbasetm.cpp
Cleanup scan related functions

Incorporated the comments from change id 331.

Changed the ExpHbaseInterface_JNI::fetchAllRows method to use the new scan methods.
Removed the scan related methods that are no longer used.
Triggered the cleanup of Java objects at the time of releaseHTableClient.
Increased the default maximum Java heap size from 512MB to 1024MB.

Change-Id: I851bcfa266504f609fdbcba6f2a5e9e6dd2937d3

    -240/+15   /sql/executor/ExHbaseAccess.cpp
    -225/+76   /sql/executor/HBaseClient_JNI.cpp
    -150/+66   /sql/exp/ExpHbaseInterface.cpp
Merge "Enabling HASH2 partitioning of salted tables"

Removed information that was for testing with VM

Removed information that was only for testing on the HP Cloud.

Change-Id: I26cd65c6f827181ee7410763d5476f6a47201983

Merge "Reducing the path length for scan, single row get, batch get operations"

bug 1368271 fix

Change-Id: I7a609467e3192854bd2a31b91c1c4ce150a9bf28

    -186/+330  /sql/regress/seabase/EXPECTED010
    -56/+70    /sql/regress/seabase/EXPECTED016
Updated expected files for fullstack2/TEST062

The new Jenkins job core-regress-fullstack2-cdh4.4 was failing because the release flavor expected file for TEST062 was out of date. Both flavors have been updated, that is, EXPECTED062.SB for debug and EXPECTED062.SB.RELEASE for release.

Change-Id: Iaabae7ce89f0b1c7a2ac909808d3e1f9090709ea

    -30/+30    /sql/regress/fullstack2/EXPECTED062.SB
Merge "Fix for LP #1344181 and a change to control buffer size for Hdfs scan"

Bug fix 1360420, adding noprompt option

Bug fix 1360420: The traf_ambari_mods script will now sleep for 2 seconds after each call to config.sh to make sure config.sh has enough time to set each of the HBase configurations.

Added a --noprompt option to cloudera_setup to allow for more automation.

No longer checking for an id_rsa file if running on one node, to allow for more automation.

Change-Id: If4bde791f5baaf4b538fc15c3b58e1e446552faf

Update phoenix tests for it to work for both JDBC T2 and T4 drivers

Updated 3 files to allow the phoenix tests to work for both T2 and T4:

BaseTest.java: updated the T2 class path that was recently changed.

ArithmeticQueryTest.java & VariableLengthPKTest.java: updated the tests to deal with different error messages and minor differences in behavior on decimal rounding.

Change-Id: I1409605073e9fcfeed3bcecfd9085352c051be3e

Enabling HASH2 partitioning of salted tables

- In FileScan::preCodeGen(), make sure we add part key predicates in all 3 cases: a) MDAM, b) with an existing search key, c) without a search key.

- Make sure we don’t do the HBase “constant keys” optimization when we have partitioning key predicates (HBaseAccess::preCodeGen()).

- Since the partition input values for a HASH2 function are actual hash values, the key predicate needs to call the Hash2Distrib function to compute the salt value (PartitioningFunction::createBetweenPartitioningKeyPredicates()).

- When we replace an existing search key with a new one for the partitioning key predicates, try to include the existing predicates as well (TableHashPartitioningFunction::createSearchKey()).
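As a toy illustration of why the key predicate must convert hash values into salt values, assuming HASH2 divides a 32-bit hash space into contiguous per-bucket ranges (the constant and the formula below are assumptions for illustration, not the actual Hash2Distrib arithmetic):

```python
HASH_SPACE_BITS = 32   # assumption: HASH2 hashes keys into a 32-bit space

def hash2_bucket(hash_value, num_buckets):
    # Each salt bucket owns a contiguous slice of the hash space; that is
    # what makes BETWEEN-style partitioning key predicates possible.
    # Illustrative formula only; the real Hash2Distrib may differ.
    return (hash_value * num_buckets) >> HASH_SPACE_BITS

def salt_range_predicate(lo_hash, hi_hash, num_buckets):
    # Partition input values are raw hash values, so the key predicate must
    # map them to salt values before comparing against the salt column.
    return (hash2_bucket(lo_hash, num_buckets),
            hash2_bucket(hi_hash, num_buckets))
```

A contiguous range of raw hash values thus maps to a contiguous range of salt buckets, which is the form the generated key predicate needs.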

Change-Id: I092ae85653f320d1d26273a15da4e0ac6b0ae2bc

    -108/+56   /sql/regress/seabase/EXPECTED016