Trafodion

Fix for bug 1446802

This is a fix on the ODBC driver side, specifically against the recent MXOSRVR change made for bug 1438775.

Change-Id: Id1dff8f45a9e121149781ba2a84c983f9265b90f

Fix for bug 1446802

This is a fix on the ODBC driver side, specifically against the recent MXOSRVR change made for bug 1438775.

Change-Id: Id1dff8f45a9e121149781ba2a84c983f9265b90f

(cherry picked from commit ca56ccb5b3c054a6ceb042b1d43faacb9564052b)

LP 1446802: ODBC SQLFetch fails when column size is bigger than 64K

Change-Id: Iaa125e18c40bc5953f646740d12ba3b9bbf1c5fb

Merge "Fix for bug 1446043"

Changes to enable Rowset select - Fix for bug 1423327

HBase always returns an empty result set when the row is not found. Trafodion is changed to exploit this behavior to project no data in a rowset select.

The optimizer is now enabled to choose a plan involving Rowset Select wherever possible. This can result in plan changes for queries: a nested join plan instead of a hash join, a VSBB delete instead of a regular delete, and a VSBB insert instead of a regular insert.

A new CQD, HBASE_ROWSET_VSBB_SIZE, has been added to control the HBase rowset size. The default value is 1000.

Change-Id: Id76c2e6abe01f2d1a7b6387f917825cac2004081

Files changed:
    • /sql/executor/HBaseClient_JNI.cpp (-59/+158)
    • /sql/regress/compGeneral/EXPECTED071 (-21/+21)
    • … 5 more files in changeset.
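
As background for the rowset-select change above: the HBase client hands back an empty Result, rather than an error, when a requested row does not exist. A minimal Java sketch of that behavior, using the standard HBase client API (the table name and row key are made up for illustration):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class EmptyResultSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection();
                 Table table = conn.getTable(TableName.valueOf("TRAFODION.SCH.T1"))) {
                Result r = table.get(new Get(Bytes.toBytes("no-such-row-key")));
                // A missing row comes back as an empty Result, not null and not an exception,
                // which is what lets a rowset select simply project no data for that key.
                if (r.isEmpty()) {
                    System.out.println("row not found, nothing to project");
                }
            }
        }
    }
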
Adding break out of preSplit() loop on exception

Also commented out a log message that was filling up client-side logs even when there is no activity. The zookeeper message will still be printed at DEBUG level to show whether the ZNode is not present.

The break out of preSplit() was needed because, on a region server abort, preSplit() can get stuck in a state where its while/sleep loop keeps the region server process up.

Change-Id: Ic93ded73d80ecb61c6ceda94339dc53f52ea1bf1
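
A minimal sketch of the kind of loop described above; this is not the actual Trafodion coprocessor code, and the class and helper names are hypothetical. The point is that the wait loop now breaks out when an exception is seen instead of logging it and spinning forever:

    public class PreSplitWaitSketch {
        // Hypothetical stand-in for the real prerequisite check; here it simulates
        // the failure seen while a region server is aborting.
        static boolean checkSplitPrerequisites() throws Exception {
            throw new Exception("region server aborting");
        }

        public static void main(String[] args) {
            boolean ready = false;
            while (!ready) {
                try {
                    ready = checkSplitPrerequisites();
                    if (!ready) {
                        Thread.sleep(1000);   // wait and retry
                    }
                } catch (Exception e) {
                    // Before the fix the exception was effectively swallowed and the loop
                    // kept running, which could keep an aborting region server alive.
                    // Breaking out lets the abort proceed.
                    System.err.println("giving up on preSplit wait: " + e.getMessage());
                    break;
                }
            }
        }
    }
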

Set Zookeeper's maxClientCnxns=0 and add Ambari 2.0 support

Added code to set ZooKeeper's maxClientCnxns=0, which allows unlimited connections, on Cloudera because build jobs were failing after hitting the maximum connection limit. This was not a problem on Hortonworks.

Updated the ambari_setup script to use the latest Ambari 2.0 version.

Change-Id: I74ec08c5dacfe78dad5934fad8634b3c01f62440
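
For reference, maxClientCnxns is a standard ZooKeeper server setting (normally kept in zoo.cfg, though the exact file and location depend on the distribution); a value of 0 disables the per-host limit on concurrent client connections:

    # zoo.cfg: 0 = unlimited concurrent connections from a single host
    maxClientCnxns=0
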

ODB bug fixes by Maurizio

Miscellaneous fixes by Maurizio. Some of the changes are related to the

help screen.

Change-Id: Ifbff89b8d6269a00d9f1e2752813d19ef3d852a3

Allow double msg_init.

Change-Id: Iec27a52ba54de72ebef721088bc459ef19ed5a8b

Files changed:
    • /sqf/src/seatrans/tm/hbasetmlib2/idtm.cpp (-0/+8)
Merge "mapTransactionStates is now static in RMInterface"

Merge "Change default review branch for stable/1.1" into stable/1.1

Merge "Fixes to copyright check script"

Fix for bug 1446043

SPJs can contain duplicate column names coming from different tables, which will be resolved later by renaming the columns, so there is no need to check for duplicates at the beginning of the bind node for SPJs.

Change-Id: I28146c698bae7622e27b326ab0411e9a3ef56c2c

mapTransactionStates is now static in RMInterface

This fixes a bug that prevented local transactions from working.

Also added changes to allow SSCC to get global IDs from the IdTm server, and made optimizations in the SSCC region endpoint coprocessor.

Change-Id: I5d7d286e8a9b831a28e372412ec381cd113de017
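
A minimal sketch of what the title describes; the field and type names here are assumptions for illustration, not the actual RMInterface declarations. Making the transaction-state map a static field means every RMInterface instance shares the same map, so a transaction registered through one instance can be found through another:

    import java.util.concurrent.ConcurrentHashMap;

    public class RMInterfaceSketch {
        // Hypothetical stand-in for the real transaction-state class.
        static class TransactionState { }

        // Before: a per-instance map meant lookups through a different
        // RMInterface object could not see the transaction.
        // After: one shared, thread-safe map for the whole process.
        private static final ConcurrentHashMap<Long, TransactionState> mapTransactionStates =
                new ConcurrentHashMap<>();

        public void register(long txId) {
            mapTransactionStates.putIfAbsent(txId, new TransactionState());
        }

        public TransactionState lookup(long txId) {
            return mapTransactionStates.get(txId);
        }

        public static void main(String[] args) {
            RMInterfaceSketch a = new RMInterfaceSketch();
            RMInterfaceSketch b = new RMInterfaceSketch();
            a.register(42L);
            // With a static map, state registered via 'a' is visible via 'b'.
            System.out.println(b.lookup(42L) != null);   // prints true
        }
    }
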

Merge "Report ports used by install_local_hadoop"

Fix LP bug 1446402 - LIKE patterns longer than 127 chars don't work well

If a fixed part of a LIKE pattern is longer than 127 characters, then you get "matches" on column values that should not match. An example of such a pattern would be:

'%ABC...Z0123...9ABC...Z0123...9ABC...Z0123...9abcdefghij%'

where the fixed part [the part between two % (or _) characters] is 128 characters long.

The root cause of the problem was another place in PCODE logic where a signed char was being used to hold a length value. By using an unsigned char, we can go up to 255 chars in a fixed part of a LIKE pattern. If a fixed part is longer than 255, the SQL Compiler should not be attempting to use PCODE for the LIKE predicate, so things should be fine.

Change-Id: I2ff8e00dedeb3145602f57eed7418ea7b3c17a77
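
The PCODE expression code itself is C++; purely to show why a signed 8-bit length field misbehaves once the fixed part reaches 128 characters, here is a small Java sketch using Java's signed byte type (the variable names are illustrative, not taken from the PCODE source):

    public class SignedLengthSketch {
        public static void main(String[] args) {
            int fixedPartLen = 128;                  // length of the fixed part of the LIKE pattern

            byte signedLen = (byte) fixedPartLen;    // signed 8-bit field, like a C++ signed char
            System.out.println(signedLen);           // prints -128: the stored length went negative

            int unsignedLen = signedLen & 0xFF;      // treating the byte as unsigned recovers 0..255
            System.out.println(unsignedLen);         // prints 128
        }
    }
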

Change LOG.info to LOG.trace in SSCC critical path

Change-Id: Ibaa344d2f9556c05dd29413db7e716e04eaf095a

Report ports used by install_local_hadoop

Added options -n and -v to install_local_hadoop so it prints the port numbers it configures. Also improved the help text, which is now:

install_local_hadoop [ -p {<start port num> | rand | fromDisplay} ]
                     [ -y ]
                     [ -n ]
                     [ -v ]

-p configures non-standard ports, and is one of:
   -p <start port num>  custom cases
   -p rand              for shared systems, use a random start port number
                        between 9000 and 49000 that is divisible by 200
   -p fromDisplay       if you are running on a VNC session
-y answers interactive questions implicitly with yes
-n takes no action, useful with -v
-v lists the port values used

See script header for use of optional environment variables.

For example:

$ install_local_hadoop -v -n
MY_DCS_MASTER_INFO_PORT=40010
MY_DCS_MASTER_PORT=37800
MY_DCS_SERVER_INFO_PORT=40030
MY_HADOOP_DN_HTTP_PORT_NUM=50075
MY_HADOOP_DN_IPC_PORT_NUM=50020
MY_HADOOP_DN_PORT_NUM=50010
MY_HADOOP_HDFS_PORT_NUM=9000
MY_HADOOP_JOB_TRACKER_HTTP_PORT_NUM=50030
MY_HADOOP_NN_HTTP_PORT_NUM=50070
MY_HADOOP_SECONDARY_NN_PORT_NUM=50090
MY_HADOOP_SHUFFLE_PORT_NUM=8080
MY_HADOOP_TASK_TRACKER_PORT_NUM=50060
MY_HBASE_MASTER_INFO_PORT_NUM=60010
MY_HBASE_MASTER_PORT_NUM=60000
MY_HBASE_REGIONSERVER_INFO_PORT_NUM=60030
MY_HBASE_REGIONSERVER_PORT_NUM=60020
MY_HBASE_REST_PORT_NUM=8080
MY_HBASE_ZOOKEEPER_LEADERPORT_NUM=3888
MY_HBASE_ZOOKEEPER_PEERPORT_NUM=2888
MY_HBASE_ZOOKEEPER_PROPERTY_CLIENTPORT_NUM=2181
MY_REST_SERVER_PORT=4200
MY_REST_SERVER_SECURE_PORT=4201
MY_SQL_PORT_NUM=3346
MY_YARN_ADMIN_PORT_NUM=8033
MY_YARN_HTTP_PORT_NUM=8088
MY_YARN_LOCALIZER_PORT_NUM=8040
MY_YARN_NM_PORT_NUM=8041
MY_YARN_RESMAN_PORT_NUM=8032
MY_YARN_SCHED_PORT_NUM=8030
MY_YARN_TRACKER_PORT_NUM=8031
$

Because more options were added, the option tests were changed to a loop instead of sequential tests.

Removed trailing spaces.

Change-Id: Ia4c7acd89a7556c4f14af7e75b8f28c98abee48c

Files changed:
    • /sqf/sql/scripts/install_local_hadoop (-64/+129)
Fixes to copyright check script

1. Changed updateCopyrightCheck.py so that it does not use the argparse module (removing the dependency on Python 2.7).
2. Deleted some debug code.
3. Fixed some line continuation bugs.

Change-Id: I32a0d80d6cf3a8927ef4537c473cf14891bda483

Merge "Expected file change for failing hive test."

Update release version to 1.2 - dcs

Distinguish the master branch from the stable/1.1 branch. If the next release is determined to be a major release, this will be updated again before release. Assuming a minor release keeps options open.

Tests depend on the server version; since we are transitioning the release version, relax the tests to accept either 1.1 or 1.2.

Change-Id: I299cffad66dd661f05c711142ab05ffe3271e768

Update release version to 1.2 - core

Distinguish the master branch from the stable/1.1 branch. If the next release is determined to be a major release, this will be updated again before release. Assuming a minor release keeps options open.

The JDBC tests need to be updated to accept either release number.

Depends-On: I299cffad66dd661f05c711142ab05ffe3271e768

Change-Id: I5f2fc71f9e31a5df3fa8d611c224e3ef9d491b3f

Merge "create meta column family for transaction data"

Expected file change for failing hive test.

A previous checkin that enhanced an error message caused this diff, so the expected file just had to be updated.

Change-Id: I1cc5133e7bf971b98fc6e79f7cad05553eed6b46

(cherry picked from commit 233d4ca7bf16d5bb629be069e87a2f06e5956c02)

Expected file change for failing hive test.

A previous checkin that enhanced an error message caused this diff, so the expected file just had to be updated.

Change-Id: I1cc5133e7bf971b98fc6e79f7cad05553eed6b46

Added more exception handling

Change-Id: Ida87f03b4891050de3df31d48dd0b88f89b556c8

Files changed:
    • /src/test/pytests/test_p2.py (-153/+217)
Merge "Adding logic to the core Trafodion build to check for copyrights."

Merge "Closes-Bug: 1438340"