Trafodion

Cleanup

a) sqenvcom.sh: remove support for CDH4.2

b) install_local_hadoop: remove names, incorrect comments, and non-existent hbase-extensions JAR setup in hbase-env.sh.

Change-Id: I7574cd780524f78e8ab47d764dbe6fd1d4d9e612

    • -10
    • +1
    /sqf/sql/scripts/install_local_hadoop
Many scanner improvements

+ added sudo access check at the very beginning of the scanner, because some of the configured checks/commands require it
===> This includes a check for requiretty being disabled, with a special error message.

+ added check for ntpd service

+ added check for iptables firewall

===> I realize that the long one-line script is hard to read! Sorry about that.

I added a backlog task to change the configuration format to allow multi-line check commands, so that more complex check commands (e.g., short scripts) are easy to add and read.

+ removed usage of grep -P

+ fixed the ValidNodeName check to report results for the correct node, by checking the actual hostname -s output on each node rather than the --nodes parameter value (a sketch of this idea appears after this entry)

+ removed ValidClusterName and HomeDirectoryNotOnNFSDisk checks because they may cause confusion; it's best to restrict these checks to the trafodion_setup script

+ removed implicit assumption that string compare will always be done for (eq, ne) and integer compare will always be done for (lt, le, ge, gt)

+ fixed summary output format to be less confusing

+ removed default for the --nodes parameter; this is now a required parameter, just like in the trafodion_setup script

Change-Id: I96ca15f40e08c1a702b0c2754d1e47da3d03f96a

    • -85
    • +89
    /installer/tools/trafodion_scanner
    • -18
    • +18
    /installer/tools/trafodion_scanner.cfg
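To illustrate the ValidNodeName fix above: the check now trusts each node's own hostname -s output instead of the --nodes parameter value. A minimal sketch of that idea in C++ (the scanner itself is a script, so this is only an illustration, not its actual code):

```cpp
#include <cstdio>
#include <cstring>
#include <string>

// Run "hostname -s" and return its trimmed output: the node's own idea
// of its short name, independent of what the caller passed on --nodes.
static std::string shortHostname() {
  std::string name;
  if (FILE *p = popen("hostname -s", "r")) {
    char buf[256];
    if (fgets(buf, sizeof(buf), p) != nullptr) {
      buf[strcspn(buf, "\n")] = '\0';  // strip the trailing newline
      name = buf;
    }
    pclose(p);
  }
  return name;
}

int main() {
  // Report results against the name the node reports for itself.
  printf("ValidNodeName check running on node '%s'\n", shortHostname().c_str());
  return 0;
}
```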
Fix for LP bug 1384506 - get version of metadata returns junk

Change-Id: I818165d7497b2661fed3400dce8f6e8857607dd8

Fix for UnknownTransaction on aborted txn

Our asynchronous prepare commit was not waiting for the responses if the prepare received a commit conflict or commit error. This was causing the client to send an abort to the region even if it was responding with a read-only prepare response. This caused the UnknownTransactionException, since we were aborting a region that responded read-only (and had already retired the transaction).

Change-Id: Ia452a90d862fd5a5bf7fa6c81612808573adf454
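A hedged sketch of the corrected flow; the type and function names below are invented, not Trafodion's actual transaction-manager classes. The point is to collect every prepare response before acting, and to never abort a region that voted read-only:

```cpp
#include <cstdio>
#include <vector>

enum PrepareVote { VOTE_COMMIT, VOTE_READ_ONLY, VOTE_CONFLICT };

struct RegionState {
  int regionId;
  PrepareVote vote;  // filled in once the async prepare response arrives
};

// Called only after ALL prepare responses are in; the old code acted on a
// conflict before the remaining responses had been collected.
static void resolveTransaction(const std::vector<RegionState> &regions) {
  bool mustAbort = false;
  for (const RegionState &r : regions)
    if (r.vote == VOTE_CONFLICT) mustAbort = true;

  for (const RegionState &r : regions) {
    if (r.vote == VOTE_READ_ONLY)
      continue;  // already retired the txn; aborting it would raise
                 // UnknownTransactionException
    printf("%s sent to region %d\n", mustAbort ? "abort" : "commit",
           r.regionId);
  }
}

int main() {
  std::vector<RegionState> regions = {
      {1, VOTE_COMMIT}, {2, VOTE_READ_ONLY}, {3, VOTE_CONFLICT}};
  resolveTransaction(regions);
  return 0;
}
```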

Changed version of Java being downloaded

Cloudera changed the Java version that needs to be downloaded with 5.1.2.

Change-Id: I4ffe709c9f7c0eccc5dbaebdbb9a8449ace49d7b

Merge "Bug 1383491, Some problems remaining with incorrect ESP boundaries"

Delimited col name fix, and backout of upsert fix

This delivery fixes two launchpad bugs:

1383531: Create table .. like .. store by() does not take delimited column names. See CmpSeabaseDDLtable.cpp for change.

Details:

When the create table like statement is requested, the create table like code calls describe to get the description of the source table. After getting the describe text back for the source table, the create table like code adds a STORE BY clause. The code to add the STORE BY clause is not handling delimited column names correctly.

1376835: initialize authorization failing with unique constraint error. See PrivMgrPrivileges.cpp and PrivMgrRoles.cpp for change.

Details:

Previously delivered a fix to work around this problem (change-Id: Id701d031ab9b9c2ebdc0584b01a2b5af9fc02b26) which changed the insert .. selects to upsert .. selects. After this workaround was delivered, the correct fix was released (undo disable txns for DDL, change-Id: Ib37e202b9239305bd1e38e2761b587a4316ee439).

This delivery changes the upserts back to inserts. It also fixes a problem with the insertSelect statement when inserting into the OBJECT_PRIVILEGES table, because sequence generators (SG) were not being initialized properly.

Change-Id: I296c49a446c11f2ec019c6eb7e723538cae79c27
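As a minimal sketch of the first fix, the STORE BY generation has to re-emit delimited column names with ANSI quoting: wrap the name in double quotes and double any embedded quote. The helper below is hypothetical, not the actual CmpSeabaseDDLtable.cpp code:

```cpp
#include <cstdio>
#include <string>

// Quote a column name as an ANSI delimited identifier.
static std::string toDelimitedIdentifier(const std::string &name) {
  std::string out = "\"";
  for (char c : name) {
    if (c == '"') out += '"';  // embedded quotes are doubled per ANSI SQL
    out += c;
  }
  out += '"';
  return out;
}

int main() {
  // A column created as "my col" must reappear quoted in the STORE BY
  // clause that create-table-like appends to the describe text.
  printf("STORE BY (%s)\n", toDelimitedIdentifier("my col").c_str());
  return 0;
}
```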

Bulk Load fixes

- fix for bug 1383849. Releasing the bulk load objects once load is done.

- bulk load now uses CIF by default. This does not apply to populating indexes using bulk load.

- fix for hive/test015 so it does not fail on the test machines

Change-Id: Iaafe8de8eb60352b0d4c644e9da0d84a4068688c

    • -0
    • +10
    /sql/sqlcomp/CmpSeabaseDDLindex.cpp
Fix LP bug 1325716 - TIME(1) default CURRENT_TIME reported wrong values

When doing CSE (Common Subexpression Elimination), if we come across a convert clause, we must interrogate the details of the conversion before deciding it is a match with another conversion we have already done. In particular, the source's precision, scale, and type must be the same as in the previous conversion, and likewise for the target.

Pre-reviewed by Justin.

All dev regressions were run to ensure that the fix has no side-effects.

Files changed:

sql/exp/ExpPCodeOptimizations.cpp

Change-Id: I2705cef151ef163a43e1eef31ee47ef94d164051
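A small sketch of that matching rule, with invented types standing in for the real PCode structures: two convert clauses only form a common subexpression when the operand, the source attributes, and the target attributes all agree.

```cpp
#include <cstdio>

struct ConvAttrs {
  int type;
  int precision;
  int scale;
};

struct ConvertClause {
  int operandId;   // which value is being converted
  ConvAttrs source;
  ConvAttrs target;
};

static bool sameAttrs(const ConvAttrs &a, const ConvAttrs &b) {
  return a.type == b.type && a.precision == b.precision && a.scale == b.scale;
}

// Reuse the earlier conversion's result only when everything matches.
static bool isCommonConvert(const ConvertClause &a, const ConvertClause &b) {
  return a.operandId == b.operandId && sameAttrs(a.source, b.source) &&
         sameAttrs(a.target, b.target);
}

int main() {
  ConvertClause a = {7, {1, 18, 0}, {2, 1, 0}};  // e.g. a cast to TIME(1)
  ConvertClause b = {7, {1, 18, 0}, {2, 2, 0}};  // same source, TIME(2)
  printf("common subexpression? %d\n", isCommonConvert(a, b));  // prints 0
  return 0;
}
```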

Merge "Eliminate excessive region refresh;remove warnings"

Merge "Fix bug 1370151 - PCODE Optimization code was looping forever"

Update TOOLSDIR references for Hadoop, HBase, Hive dependencies

Update the section of Hadoop dependencies that references TOOLSDIR locations. The purpose is to use the same build-time dependencies regardless of the distribution installed on the build machine.

Build should work without having a distro installed and without using install_local_hadoop before building. But if you have local_hadoop installed, that context still gets preference over TOOLSDIR.

The installed-distro settings (Hortonworks, Cloudera, MapR) are used for run-time dependencies.

TOOLSDIR locations are build-time references and are only used if we are in a source tree. So if your machine has a distro installed and TOOLSDIR defined, then you'll get build-time references if sqenvcom.sh finds a Makefile, indicating we are in a source tree.

Add a check that we matched at least one supported distro (otherwise we quietly don't set up CLASSPATH and trafodion won't start).

Update the Hortonworks check. The library paths no longer contain *HDP*, so ambari is the only valid check. Also remove the HADOOP_1 setting, as that is no longer valid with the HDP 2.1 distro.

Change-Id: I1b86bf8c454467c6adef15e66014fd6da59b1f15

Blueprint: infra-build-hadoop-deps

Merge "SQL Memory allocation tracing and overflow detection"

Merge "Use new HFIle location for HBase 0.98"

Merge "Two minor bug fixes."

Merge "Update release number"

Bug 1383491, Some problems remaining with incorrect ESP boundaries

When making up missing columns from HBase split keys that are shorter than the actual key, there were a couple of issues: these keys were created as decoded key values, but the code below decoded them a second time. Also, the code did not handle nullable columns properly.

These two fixes also solved the remaining issue with interval columns, mentioned in bug 1375902.

Change-Id: Ie311fcc33c1a6920b68227fc2fff43a386f3c2e8
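A rough sketch of the double-decode issue, using invented types: key columns made up for a short split key are created already decoded, so the later decode pass must skip them (and must account for the null indicator on nullable columns):

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct KeyColumn {
  std::string value;
  bool alreadyDecoded;  // made-up columns start out in decoded form
  bool nullable;
};

static void decodeKey(std::vector<KeyColumn> &key) {
  for (KeyColumn &col : key) {
    if (col.alreadyDecoded)
      continue;  // the old code decoded these a second time, corrupting them
    // (a real implementation would undo the HBase key encoding here,
    //  handling the null indicator byte for nullable columns)
    col.alreadyDecoded = true;
  }
}

int main() {
  // The split key covers only the first column; the second is made up.
  std::vector<KeyColumn> key = {{"abc", false, false}, {"", true, true}};
  decodeKey(key);
  printf("decoded %zu key columns exactly once each\n", key.size());
  return 0;
}
```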

Fix bug 1370151 - PCODE Optimization code was looping forever

Under some circumstances, the PCode optimization logic was calling memset() with a length argument that was a negative value.

Change-Id: Ie4aa96b6614ccfe9ffe6fb5d88cca41a046c7de7
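The failure mode is easy to show in isolation: a length computed as a signed difference can come out negative and, converted to memset()'s size_t parameter, wraps to a huge value that makes the process appear to loop forever. A minimal guard, assuming the length really is a difference of offsets:

```cpp
#include <cstring>

static void clearRange(char *buf, long start, long end) {
  long len = end - start;   // can come out negative for bad inputs
  if (len > 0)              // the fix: never hand memset a negative length
    memset(buf + start, 0, (size_t)len);
}

int main() {
  char buf[16];
  clearRange(buf, 8, 4);    // a negative span is now a no-op
  return 0;
}
```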

Merge "Fix for LP bug 1376306"

SQL Memory allocation tracing and overflow detection

1) Added code to record stack info when memory blocks of a given size are allocated, and to dump that info to a file or terminal if the deallocation has not been made by the time the heap is destructed. This works for heaps constructed from NAMemory.

2) Fixed the memory debug code to detect memory overflow at de-allocation time.

These two features are enabled through environment variables in objects compiled in debug mode. See sql/common/NAMemory.cpp for details.

3) Fixes for memory usage issues found when testing the above code.

Change-Id: Id0e180aee3d069de11836904e80a4290b180dc67
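A rough sketch of feature (1) under stated assumptions; this is not the NAMemory implementation (see sql/common/NAMemory.cpp for the real controls), just the general shape: capture a stack trace when a block of the watched size is allocated, and dump the traces of any blocks still live when the heap is destroyed.

```cpp
#include <execinfo.h>  // Linux backtrace facilities
#include <cstdlib>
#include <map>
#include <vector>

class TracingHeap {
  size_t watchedSize_;
  std::map<void *, std::vector<void *>> live_;  // block -> captured stack
public:
  explicit TracingHeap(size_t watchedSize) : watchedSize_(watchedSize) {}

  void *allocate(size_t size) {
    void *p = malloc(size);
    if (p && size == watchedSize_) {
      void *frames[32];
      int n = backtrace(frames, 32);  // record who asked for this block
      live_[p] = std::vector<void *>(frames, frames + n);
    }
    return p;
  }

  void deallocate(void *p) {
    live_.erase(p);
    free(p);
  }

  ~TracingHeap() {
    // Anything still tracked was never deallocated: dump its stack to stderr.
    for (auto &entry : live_)
      backtrace_symbols_fd(entry.second.data(), (int)entry.second.size(), 2);
  }
};

int main() {
  TracingHeap heap(64);
  heap.allocate(64);  // deliberately leaked; reported at heap destruction
  return 0;
}
```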

Fix for bug #1328730:

The JDBC T4 getTables methods only return the leading 100 rows if the total result set is larger than 100; the second fetch by the JDBC driver gets nothing. MXOSRVR actually fetches all rows into the statement's data buffer, but on the second fetch the server sends the buffer starting at the same address as the last fetch, in which the leading bytes from the previous fetch have already been set to '0'.

To fix this, use an offset in the statement to mark the start address for the next fetch of the buffer.

This passes dev unit tests for this specific problem, and QA regression tests.

Change-Id: Ifd098dec38acb0ca7ff0cec153328d3b1349f3e1

==================================================================

OCT 29th:

Add initialization of SRVR_STMT_HDL.outputDataValue.pad_to_offset_8_ to 0, and remove the NULL (zero) validation for pSrvrStmt->outputDataValue._length and pSrvrStmt->outputDataValue._buffer, for performance considerations.

    • -0
    • +1
    /conn/odbc/src/odbc/nsksrvrcore/csrvrstmt.cpp
    • -8
    • +17
    /conn/odbc/src/odbc/nsksrvrcore/srvrothers.cpp
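A minimal sketch of the offset fix, with invented field names rather than the real SRVR_STMT_HDL layout: the statement remembers how far the previous fetch got, and the next reply starts at buffer + offset instead of at the front of the buffer.

```cpp
#include <cstdio>
#include <cstring>

struct StmtFetchState {
  char buffer[256];  // all result rows are staged here by the server
  size_t length;     // total bytes of result data in the buffer
  size_t offset;     // start of the next fetch: the new field in the fix
};

// Return up to maxBytes of not-yet-sent data and advance the offset.
static size_t fetchNext(StmtFetchState &s, char *out, size_t maxBytes) {
  size_t remaining = s.length - s.offset;
  size_t n = remaining < maxBytes ? remaining : maxBytes;
  memcpy(out, s.buffer + s.offset, n);
  s.offset += n;  // without this, fetch #2 re-reads the zeroed-out bytes
  return n;
}

int main() {
  StmtFetchState s = {};
  s.length = 200;
  char out[100];
  printf("fetch 1: %zu bytes\n", fetchNext(s, out, sizeof(out)));
  printf("fetch 2: %zu bytes\n", fetchNext(s, out, sizeof(out)));
  return 0;
}
```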
Fix for LP bug 1376306

- With this fix, bulk loading salted tables and indexes now generates parallel plans. Both salted base tables and salted indexes were tested.

- If the attempt_esp_parallelism cqd is set to off, an error is returned.

- Also removed unneeded variables from sqenvcom.sh.

Change-Id: I2a85d902070a4f35e3fe54b426a4277afaa60399

Eliminate excessive region refresh;remove warnings

Change-Id: I845caf5f5e4aab5d2c12c848ed0dc5df1396380d

    • -4
    • +4
    /sqf/src/seatrans/tm/hbasetmlib2/Makefile
Merge "DoP adjustment for small queries. Rework"

Merge "Remove deprecated code"

Merge "delete three new data members in ~OptDefaults"

Merge "Support for SALTED index."

Merge "Ensure UID returned for MD tables is non-null"

Remove deprecated code

Change-Id: Ia93f9350369d6f6d43d736139b3a5cc70a27dd17

Merge "Add transaction id to tracing during transactional aggregation execution"