Merge "Splitting install_local_hadoop into two scripts."

Merge "Fix for SPJ performance:" into stable/1.0

Merge "Added option to disable SQL plan collection for user queries" into stable/1.0

Splitting install_local_hadoop into two scripts.

This makes it possible to call the part that sets up a Hive TPC-DS database from the Jenkins regression test environment.

Change-Id: If776fb5fb79d62450b7377be1e8a3ee1f23becbd

Fix for SPJ performance:

Added code in langman to enable connection pooling for T2 when UDRServer uses T2.

Change-Id: I038be48925965ae8c1483b71f1bb7104a2868468

Merge "The following Launchpad bugs are fixed in this change:"

Merge "Remove code and cqds related to Thrift interface"

Memory leak and salted partitions setup fix

Change ported from master branch. LP 1418685

Change-Id: I540c6ecda4d44f4ee6112c05cc144061cb50a66e

Merge "T2Driver - more minor fixes"

Fix LP bug 1323865 - ODBC API test fails with Unknown PCode instruction

During PCODE generation, we were attempting to generate a PCODE instruction to compare two operands for equality. The two operands had a data type of REC_BYTE_V_ASCII_LONG, which is used only by ODBC. PCIT::getMemoryAddressingMode() does not currently know how to handle that datatype, so it returned AM_NONE for the operand type. That resulted in a failure later.

Fix was to detect the operand(s) of that datatype and call ex_clause::pCodeGenerate(...) rather than doing PCODE generation of the current expression.

Note: Also found a line saying 'return ex_clause::pCodeGenerate(space, f);' which had been missing for a long time. We got away with it because the preceding 'if' was always false for Trafodion.

Change-Id: I441f5b57fb1b5a67f4e4872d308efe0af5d320db
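The fallback described in this commit can be sketched as follows. This is a minimal illustration, not the actual Trafodion code: the enum values mirror names from the commit text (AM_NONE, REC_BYTE_V_ASCII_LONG), but the function shapes are assumptions.

```cpp
#include <cassert>

// Hypothetical stand-ins for the engine's types. Only the names AM_NONE and
// REC_BYTE_V_ASCII_LONG come from the commit; the rest is illustrative.
enum AddressingMode { AM_NONE, AM_DIRECT };
enum DataType { REC_BYTE_V_ASCII_LONG, REC_BIN32_SIGNED };

// Assumed shape of the lookup the commit mentions: datatypes the PCODE
// layer does not understand map to AM_NONE.
AddressingMode getMemoryAddressingMode(DataType dt) {
  switch (dt) {
    case REC_BIN32_SIGNED: return AM_DIRECT;
    default:               return AM_NONE;   // e.g. REC_BYTE_V_ASCII_LONG
  }
}

// The idea of the fix: before emitting a compare instruction, check both
// operands; if either has no addressing mode, fall back to interpreted
// clause evaluation (ex_clause::pCodeGenerate) instead of generating
// PCODE that would fail at runtime.
bool canGenerateCompare(DataType op1, DataType op2) {
  return getMemoryAddressingMode(op1) != AM_NONE &&
         getMemoryAddressingMode(op2) != AM_NONE;
}
```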

Fix stubs build.

Change-Id: Ie63538c6a422454408ea584dcc4529460dbf5659

Remove code and cqds related to Thrift interface

ExpHbaseInterface_Thrift class was removed a few months ago. Completing that cleanup work. exp/Hbase_types.{cpp,h} still remain. These are Thrift-generated files, but we use the structs/classes generated for JNI access.

Change-Id: I7bc2ead6cc8d6025fb38f86fbdf7ed452807c445

Change to avoid placing large scan results in RegionServer cache

By default the result of every Scan and Get request to HBase is placed in the RegionServer cache. When a scan returns a lot of rows, this can lead to cache thrashing, causing results that are being shared by other queries to be flushed out. This change uses cardinality estimates and HBase row-size estimates, along with the configured size of the RegionServer cache, to determine when such thrashing may occur. Heap size for the RegionServer is specified through the cqd HBASE_REGION_SERVER_MAX_HEAP_SIZE; the units are MB. The fraction of this heap allocated to the block cache is read from the config file once per session through a lightweight (no disk access) JNI call.

The heuristic used is approximate, as it does not consider the total number of region servers, or that a scan may sometimes be concentrated in one or a few region servers. We simply do not place rows in the RS cache if the memory used by all rows in a scan would exceed the cache in a single RS. The change can be overridden with cqd HBASE_CACHE_BLOCKS 'ON'. The default is now SYSTEM. The change applies to both count(*) coprocessor plans and regular scans.

Change-Id: I0afc8da44df981c1dffa3a77fb3efe2f806c3af1
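The heuristic above can be sketched as a single decision function. This is an illustrative sketch, not the actual Trafodion code; the function and parameter names are assumptions, and only the cqd names come from the commit.

```cpp
// Illustrative sketch of the cache-thrashing heuristic described above.
// Given the optimizer's cardinality and row-size estimates and the
// configured RegionServer heap, decide whether scan results should be
// placed in the HBase block cache.
bool shouldCacheBlocks(double estimatedRows,
                       double estimatedRowSizeBytes,
                       double rsMaxHeapMB,         // cqd HBASE_REGION_SERVER_MAX_HEAP_SIZE (MB)
                       double blockCacheFraction)  // from config, via a once-per-session JNI call
{
  double blockCacheBytes = rsMaxHeapMB * 1024.0 * 1024.0 * blockCacheFraction;
  double scanBytes = estimatedRows * estimatedRowSizeBytes;
  // Do not cache if this one scan would displace the entire block cache of
  // a single RegionServer. The heuristic deliberately ignores how many
  // region servers the scan actually touches.
  return scanBytes <= blockCacheBytes;
}
```

With cqd HBASE_CACHE_BLOCKS 'ON' the decision would be forced to true regardless of the estimate; SYSTEM applies the heuristic.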

Merge "Added option to disable SQL plan collection for user queries"

T2Driver - more minor fixes

Change-Id: I6f9ea31172e428a18be77ee87df6da8212847b56

LP and other fixes.

-- LP 1414074. Added 'cleanup obsolete volatile tables' command.

-- Added support for 'get all volatile tables' command.

-- LP 1411864. Alter Sequence now correctly returns an error if the value specified exceeds the max value.

-- LP 1413743. An error indicating that the sequence number max has been reached is now returned, instead of a numeric overflow error, if the max largeint value is reached while generating sequence numbers.

-- LP 1418685. Partition information is now being set up correctly for delimited salted tables.

-- Changed copyright message.

Change-Id: Ic9e532204890a68ea0616b99a3170a0cc735ad53

Signed-off-by: Anoop Sharma <>

Merge "Changes in T2 driver to support new NA server."

Fix for hang encountered during hive tests.

Added proper copyright.

Added comment to make it clear to readers of the code.

Hive tests have been hanging for quite a while on the official slave.


The cause for this particular hang was that the main thread was destroying the cursor (ExLobCursor) object. The worker thread was kind of slow, and it continued to access the cursor after the main thread had destroyed it. pthread calls exhibit undefined behavior when uninitialized or already-destroyed mutex or condition variables are accessed. Added code to prevent this timing issue in the worker threads.
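One common way to prevent this class of race is to make the owner wait until the worker has finished with the object before destroying it. The sketch below is illustrative only (it is not the ExLobCursor code, and the names are made up); it shows the pattern of signalling completion under the lock so the mutex and condition variable are never used after destruction.

```cpp
#include <pthread.h>

// Illustrative cursor-like object shared between a main thread and a
// worker thread. The owner must call waitForWorker() before letting the
// destructor run, so the worker can never touch a destroyed mutex/condvar.
struct Cursor {
  pthread_mutex_t lock;
  pthread_cond_t  done;
  bool workerFinished;

  Cursor() : workerFinished(false) {
    pthread_mutex_init(&lock, nullptr);
    pthread_cond_init(&done, nullptr);
  }

  // Called by the worker thread when it will no longer use the cursor.
  void markFinished() {
    pthread_mutex_lock(&lock);
    workerFinished = true;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
  }

  // Called by the main thread before destruction: blocks until the
  // (possibly slow) worker has promised not to touch the cursor again.
  void waitForWorker() {
    pthread_mutex_lock(&lock);
    while (!workerFinished)
      pthread_cond_wait(&done, &lock);
    pthread_mutex_unlock(&lock);
  }

  ~Cursor() {
    pthread_mutex_destroy(&lock);
    pthread_cond_destroy(&done);
  }
};
```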

Added trace utility to diagnose hangs and execution issues.

The trace messages get logged into a trace file in the local directory named trace_threads.<pid>. Tracing is controlled by the environment variable TRACE_HDFS_THREAD_ACTIONS. The envvar is checked only once, and the file handle is checked each time a trace needs to be written. If the file handle is NULL, no message is logged.

Change-Id: I95519b61d71339c719e37500bccae111ce070a15
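The tracing scheme just described can be sketched as below. This is a hedged illustration, not the actual Trafodion utility: the environment variable name and file-name pattern come from the commit, while the function and variable names are assumptions.

```cpp
#include <cstdio>
#include <cstdlib>
#include <unistd.h>

// Illustrative sketch: the envvar is consulted exactly once, and every
// trace call only tests the file handle, so tracing is nearly free when
// disabled.
static FILE *traceFile = nullptr;
static bool traceChecked = false;

static void traceInit() {
  traceChecked = true;
  if (getenv("TRACE_HDFS_THREAD_ACTIONS") != nullptr) {
    char name[64];
    // Trace file in the local directory, named trace_threads.<pid>.
    snprintf(name, sizeof(name), "trace_threads.%d", (int)getpid());
    traceFile = fopen(name, "a");
  }
}

void traceMsg(const char *msg) {
  if (!traceChecked) traceInit();     // envvar checked only once
  if (traceFile == nullptr) return;   // handle checked on every call
  fprintf(traceFile, "%s\n", msg);
  fflush(traceFile);                  // keep the file useful after a hang
}
```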

Fix for more leaks in NATable

Added code to delete more elements in the NATable destructor. This fixes leaks when NATable is destroyed.

Changed the NATable cache management to use the allocated size rather than the total size.

This also enables seabase/TEST020 to pass in debug mode.

Change-Id: Ifebc20e602c0149f73f9cefcd11b37c69d9eec74

Merge "Fix memory leak when ComDiagsArea is not deleted" into stable/1.0

Merge "HBaseClient now logs correctly. Updated copyright"

Merge "Added new property to disable sqlplan for user queries"

Disable seabase/TEST024

Change-Id: Ife00eedd8b09aae17ada84959930847bdf5684a0

Merge "Fix memory leak when ComDiagsArea is not deleted"

Bulk unload optimization using snapshot scan

Resubmitting after facing git issues.

The changes consist of:

* Implementing the snapshot scan optimization in the Trafodion scan operator
* Changes to bulk unload to use the new snapshot scan
* Changes to scripts and permissions (using ACLs)
* Rework based on review


* Snapshot Scan:

** Added support for snapshot scan to the Trafodion scan operator.
** The scan expects the HBase snapshots themselves to be created before running the query. When used with bulk unload, the snapshots can be created by bulk unload.
** The snapshot scan implementation can be used without bulk unload. To use the snapshot scan outside bulk unload we need to use the below cqds:

-- the snapshot name will be the table name concatenated with the suffix-string

-- temp dir needed for the hbase snapshot scan
cqd TRAF_TABLE_SNAPSHOT_SCAN_TMP_LOCATION '/bulkload/temp_scan_dir/';

** Snapshot scan can be used with table scans, index scans, etc.

* Bulk unload utility:

** The bulk unload optimization is due to the newly added support for snapshot scan. By default bulk unload uses the regular scan, but when snapshot scan is specified it will use the snapshot scan instead.
** To use snapshot scan with bulk unload we need to specify the new options in the unload syntax.
*** Using NEW in the above syntax means the bulk unload tool will create new snapshots, while using EXISTING means bulk unload expects the snapshots to exist already.
*** The snapshot names are based on the table names in the select statement. The snapshot name needs to start with the table name and have a suffix QUOTED-STRING.
*** For example, for “unload with NEW SNAPSHOT HAVING SUFFIX ‘SNAP111’ into ‘tmp’ select from cat.sch.table1;” the unload utility will create a snapshot CAT.SCH.TABLE1_SNAP111, while for “unload with EXISTING SNAPSHOT HAVING SUFFIX ‘SNAP111’ into ‘tmp’ select from cat.sch.table1;” the unload utility will expect a snapshot CAT.SCH.TABLE1_SNAP111 to exist already. Otherwise an error is produced.
*** If this newly added option is not used in the syntax, bulk unload will use the regular scan instead of snapshot scan.
** Bulk unload queries the explain plan virtual table to get the list of Trafodion tables that will be scanned, and based on the case it either creates the snapshots for those tables or verifies whether they already exist.

* Configuration changes

** Enable ACLs in hdfs.

** All developer regression tests were run and all passed.
** Bulk unload and snapshot scan were tested on the cluster.

** Example of using snapshot scan without bulk unload (we need to create the snapshot first):

--- SQL operation complete.

--- SQL operation complete.

>>cqd TRAF_TABLE_SNAPSHOT_SCAN_TMP_LOCATION '/bulkload/temp_scan_dir/';

--- SQL operation complete.
>>select [first 5] c1,c2 from tt10;

C1                    C2
--------------------- --------------------
                  .00                    0
                  .01                    1
                  .02                    2
                  .03                    3
                  .04                    4

--- 5 row(s) selected.

**Example of using snapshot scan with unload:




INTO '/bulkload/unload_TT14_3' select * from seabase.TT20 ;

Change-Id: Idb1d1807850787c6717ab0aa604dfc9a37f43dce

Added option to disable SQL plan collection for user queries

Added a new option that works on top of the new DCS property -SQLPLAN to disable collection of query plans for user queries.

Cleaned up the SessionWatchDog method to alloc/dealloc statement handles between writing of stats, which may contribute to a memory leak.

Change-Id: I6ded905c0b8197047f36268ae34f4a5f308d9e17

(cherry picked from commit 159c02117cd3b491c3c04936fbc395567680faa6)

Added option to disable SQL plan collection for user queries

Added a new option that works on top of the new DCS property -SQLPLAN to disable collection of query plans for user queries.

Cleaned up the SessionWatchDog method to alloc/dealloc statement handles between writing of stats, which may contribute to a memory leak.

Change-Id: I6ded905c0b8197047f36268ae34f4a5f308d9e17

The following Launchpad bugs are fixed in this change:

Bug 1370749: Now using MAX_USERNAME_LEN instead of a hardcoded value.

Bug 1413760: CREATE TABLE LIKE was failing in some circumstances because SHOWDDL was including the BY clause. Ownership rules in CREATE TABLE changed when ANSI schemas were implemented, so the BY clause is no longer needed.

Bug 1392107: Privileges granted on a view are no longer lost if the view is replaced via CREATE OR REPLACE VIEW.

Bug 1370740: A potential memory corruption problem is now avoided by reworking the authorization name lookup functions.

Bug 1413767: Previously DROP SCHEMA CASCADE would fail to drop a table with an IDENTITY column.

Bug 1413758: Previously DROP TABLE CASCADE did not drop nested views.

Bug 1412891: Previously DROP TABLE CASCADE failed if a dependent object contained a delimited name.

Changes are present for 1392086, but the work is not yet completed. This problem is related to roles and security keys.

Code changes are also present for giving ownership of an object to another authorization ID, but these changes are not complete. A description of the changes is included.

The GIVE command transfers ownership of a SQL item from one authorization ID to another. Implemented in this delivery:

GIVE ALL transfers all SQL items owned by an authorization ID to another authorization ID. The current or new owner can be a user or a role. The GIVE ALL command requires the ALTER privilege.


GIVE SCHEMA behavior depends on the type of schema and whether RESTRICT or CASCADE is specified. For private schemas, all the objects in the schema are given, as well as the schema itself. For shared schemas, only the schema is given, unless the CASCADE option is specified; in that case, ownership of all the objects in the shared schema is given to the new owner. Use of the CASCADE option requires the ALTER_SCHEMA privilege. Otherwise, GIVE SCHEMA only requires the user to be the owner of the schema.


NOTE: RESTRICT and CASCADE are not applicable to private schemas and are ignored for them.

GIVE OBJECT is added to the syntax but is not implemented and may not be implemented.

A more detailed blueprint will be provided prior to the final delivery of GIVE.

Change-Id: I7449da599dc80de1c0659164e684841cda4647c8

Merge "Fix for NATable Heap leak"

Fix memory leak when ComDiagsArea is not deleted

In ExControlTcb, the allocated ComDiagsArea for the embedded compiler to use did not get freed, resulting in a 336-byte block leak in the ContextCli heap each time a SET QUERY DEFAULT statement was executed. This fix gets the memory from the Statement heap and frees it after use.

Change-Id: I0ef68e6d3c9372a8963ff8da375479af8d64c198
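The idea behind this fix can be sketched as follows. This is an illustrative sketch only: the Heap and DiagsArea types below are stand-ins (not NAHeap or ComDiagsArea), and only the 336-byte figure comes from the commit. The point is that the scratch area is taken from, and returned to, a heap whose usage is tracked, instead of leaking on a longer-lived heap.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Stand-in for a tracked heap such as the Statement heap.
struct Heap {
  size_t allocated = 0;
  void *allocate(size_t n)            { allocated += n; return ::operator new(n); }
  void deallocate(void *p, size_t n)  { allocated -= n; ::operator delete(p); }
};

// Stand-in for the diagnostics area; the leaked block was 336 bytes.
struct DiagsArea { char payload[336]; };

// Each execution of a "set query default" statement: allocate the scratch
// diags area, let the embedded compiler use it, then free it on the same
// heap so the context-level heap does not grow with every statement.
void runSetQueryDefault(Heap &stmtHeap) {
  void *mem = stmtHeap.allocate(sizeof(DiagsArea));
  DiagsArea *diags = new (mem) DiagsArea();   // placement-new into the heap
  (void)diags;                                // ... embedded compiler would use it ...
  stmtHeap.deallocate(mem, sizeof(DiagsArea));
}
```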