Trafodion

Stop sending user CQDs at embedded compile switch

Do not send user-entered CQDs when switching to the embedded META CmpContext to compile metadata queries.

Also do the holdAndSet of needed CQDs only when the embedded (META) CmpContext is first switched to. These CQDs won't be restored when switching back, so that we won't send the same set of CQDs again the next time the CmpContext is used.

LP bug 1438372, Compiler returns internal error from ../optimizer/opt.cpp line 6907
LP bug 1444739, compGeneral/TEST006 failed with tmp fix for bug 1438372

Change-Id: I92398aeeaf53820d3f8af1fb73e9c1d6f449ce5e

    • -82
    • +66
    /sql/regress/compGeneral/EXPECTED042
    • -53
    • +93
    /sql/sqlcomp/CmpSeabaseDDLcommon.cpp
test case & del bug

Change-Id: I2ea628efeab0ec012b80dd5d0b0471f12d2634d6

Merge "Fix problematic assert that occasionally shows up at shutdown."

DDL Transactions, retry logic upon HA failures.

Change-Id: Ib574c21ae24b36f2507dbc6b24f85fd3e09da8ee

Fix for 1452424 vsbb scan/delete cause query to return wrong result

VSBB update/delete were not tracking the number of rows in the buffer. This has been corrected.

Change-Id: I2a89ccd9a84832c4771481de2ee8503e912ce0d8

    • -4
    • +171
    /sql/regress/seabase/EXPECTED011
Merge remote branch 'gerrit/master' into traf/traf-config

Merge "Fix 1455585: Error 1234 returned trying to run SQL"

Added ssh test, fixed tar bug, testing traf user

1. Added ssh test to make sure we are able to connect.
2. Fixed tar bug when running the installer from a directory other than /installer/.
3. Check that the user is not running trafodion_installer from the trafodion user id.
4. Added message to make sure the user checks their firewall settings if they cannot access Hadoop.

Edit 1: Changed error reporting. Changed location of the Trafodion user id check.
Edit 2: Moved checks outside the for loop. Changed "much" to "must".

Change-Id: I967c94ff23a0f0d55a1ca3a8ec7d51b7fb48307f

Merge "Enabling Bulk load and Hive Scan error logging/skip feature"

Fix 1455585: Error 1234 returned trying to run SQL

If code is refreshed with https://review.trafodion.org/1635 (which adds the first phase of column level privileges), authorization is enabled, and running in debug mode, error 1234 is returned and you are not able to drop or initialize authorization. SQL requests fail.

This delivery fixes the issue. First, it allows operations to proceed even if there are missing privilege manager tables. Second, we have turned off checks for column level privileges (they can be turned on by setting CQD CAT_TEST_BOOL). Lastly, we do not run the column privileges regression test.

Change-Id: Ic427c4e9413b6c7208313c0e0755ca6aabd8a2cd

    • -1
    • +1
    /sql/regress/tools/runregr_catman1.ksh
Add global ID server into SSCC

The startId and commitId are now generated by a global ID server, so they are the same throughout the cluster. Without a global server, queries that spanned 2 region servers would return inconsistent results.

Change-Id: If18cc7dc9d309dbc25fbf05c5666e47bd918ddeb

    • -1
    • +1
    /sqf/export/include/dtm/tmtransaction.h
  1. … 17 more files in changeset.
added fullstack2/FILTER062

Change-Id: I2ed813d5674897109fb973124c245cdd1aa789f9

    • -0
    • +43
    /sql/regress/fullstack2/FILTER062
Merge "Invoking sqenv.sh repeatedly does not change shell environment"

Merge "various lp and other fixes, details below."

    • -17
    • +58
    /sql/sqlcomp/CmpSeabaseDDLcommon.cpp
    • -24
    • +99
    /sql/sqlcomp/CmpSeabaseDDLtable.cpp
Merge "Part 1 of ALTER TABLE/INDEX ALTER HBASE_OPTIONS support"

    • -7
    • +137
    /sql/sqlcomp/CmpSeabaseDDLtable.cpp
Merge "Using the language manager for UDF compiler interface"

Support big columns larger than 32k for JDBC T2

[bug 1451707] ResultSet.next() failed when selecting a 200k utf8 column from a table.
[bug 1451693] T2 server returns a truncated column size to the T2 client.

Change-Id: Icb2b9a9089c17d4c8e64c4af0c68468efbcd19a3

    • -31
    • +74
    /conn/jdbc_type2/native/SQLMXCommonFunctions.cpp
    • -5
    • +14
    /conn/jdbc_type2/native/SqlInterface.cpp
    • -9
    • +21
    /conn/jdbc_type2/native/SrvrCommon.cpp
Part 1 of ALTER TABLE/INDEX ALTER HBASE_OPTIONS support

blueprint alter-table-hbase-options

This set of changes includes the following:

1. Syntax for ALTER TABLE/INDEX ALTER HBASE_OPTIONS
2. Compiler binding and generation logic
3. Run-time logic to update the metadata TEXT table with the new options

Still to come are:

1. Logic to actually change the HBase object
2. Transactional logic to actually change the HBase object

The functionality in this set of changes is only marginally useful. If you manually change HBase options on a Trafodion object using the hbase shell, you could use this ALTER TABLE/INDEX command to update the Trafodion metadata. (Of course, some care would have to be taken to use the same options!)

Change-Id: Id0a5513fe80853c06acdbbf6cc9fd50492fd07b2

    • -0
    • +141
    /sql/parser/StmtDDLAlterIndexHBaseOptions.h
    • -0
    • +128
    /sql/parser/StmtDDLAlterTableHBaseOptions.h
  1. … 4 more files in changeset.
Option to use ConcurrentHashMap vs synchronizedSet

Default is synchronizedSet. The option is set in ms.env:

DTM_USE_CONCURRENTHM=1

The synchronization option is for the participatingRegions list in the client-side TransactionState. We are seeing an issue where a duplicate region is being inserted into the participant list, and it is possible that a synchronizedSet is not blocking concurrent access as we expected.

Change-Id: I433c0adc513f6a5336d88f6373d8f296cbc28edd
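The switch described above can be sketched in plain Java. The helper below is illustrative only; the real env-var plumbing and the participatingRegions field live in the DTM TransactionState code.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ParticipantSetChooser {
    // Hypothetical helper mirroring the DTM_USE_CONCURRENTHM switch:
    // "1" selects a ConcurrentHashMap-backed set, anything else the default.
    static Set<String> newParticipantSet(String dtmUseConcurrentHm) {
        if ("1".equals(dtmUseConcurrentHm)) {
            // Lock-free set view backed by ConcurrentHashMap (Java 8+);
            // safe for fully concurrent adds without a common monitor.
            return ConcurrentHashMap.newKeySet();
        }
        // Default: a HashSet wrapped so every call synchronizes on the wrapper.
        return Collections.synchronizedSet(new HashSet<>());
    }

    public static void main(String[] args) {
        Set<String> s = newParticipantSet(System.getenv("DTM_USE_CONCURRENTHM"));
        s.add("region-1");
        s.add("region-1"); // duplicate add is a no-op for any Set
        System.out.println(s.size()); // prints 1
    }
}
```

Either implementation rejects duplicates by the Set contract; the question the option probes is whether the synchronized wrapper is actually serializing the concurrent adds.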

various lp and other fixes, details below.

-- added support for self-referencing constraints
-- limit clause can now be specified as a param (select * from t limit ?)
-- lp 1448261. alter table add identity col is not allowed and now returns an error
-- an error is returned if a specified constraint in an alter/create statement exists on any table
-- lp 1447343. cannot have more than one identity column.
-- embedded compiler is now used to get priv info during invoke/showddl.
-- auth info is not reread if already initialized
-- sequence value function is now cacheable
-- lp 1448257. inserts in a volatile table with an identity column now work
-- lp 1447346. inserts with identity col default now work if inserted in a salted table.
-- only one compiler is now needed to process ddl operations with or without authorization enabled
-- query cache in embedded compiler is now cleared if the user id changes
-- pre-created default schema 'SEABASE' can no longer be dropped
-- default schema 'SCH' is automatically created if running regressions and it doesn't exist.
-- improvements in regression runs.
-- regression runs no longer call a script from another sqlci session to init auth, create the default schema, and insert into the defaults table before every regr script
-- switched the order of regression runs
-- updates from review comments.

Change-Id: Ifb96d9c45b7ef60c67aedbeefd40889fb902a131

    • -22
    • +27
    /conn/odbc/src/odbc/nsksrvr/SrvrConnect.cpp
  1. … 55 more files in changeset.
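The parameterized limit clause noted above can be used from JDBC roughly as follows. This is a sketch: the table name is illustrative, and driver/URL setup for an already-opened Trafodion connection is omitted.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LimitParamExample {
    // Builds the parameterized-LIMIT statement text; table name is a placeholder.
    static String limitQuery(String table) {
        return "select * from " + table + " limit ?";
    }

    // The LIMIT value is now bound like any other parameter.
    static int fetchFirstN(Connection conn, int n) throws SQLException {
        int rows = 0;
        try (PreparedStatement ps = conn.prepareStatement(limitQuery("t"))) {
            ps.setInt(1, n); // bind the row limit at execute time
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows++;
                }
            }
        }
        return rows;
    }
}
```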
Enabling Bulk load and Hive Scan error logging/skip feature

Also fixed the hanging issue with Hive scan (ExHdfsScan operator) when there is an error in data conversion. ExHbaseAccessBulkLoadPrepSQTcb was not releasing all the resources when there is an error or when the last buffer had some rows.

The error logging/skip feature can be enabled in Hive scan using CQDs and in bulk load using command line options.

For Hive scan:
CQD TRAF_LOAD_CONTINUE_ON_ERROR 'ON' to skip errors
CQD TRAF_LOAD_LOG_ERROR_ROWS 'ON' to log the error rows in HDFS files.

For bulk load:
LOAD WITH CONTINUE ON ERROR [TO <location>] to skip error rows
LOAD WITH LOG ERROR ROWS to log the error rows in HDFS files.

The default parent error logging directory in HDFS is /bulkload/logs. The error rows are logged in subdirectory ERR_<date>_<time>. A separate HDFS file is created for every process/operator involved in the bulk load in this directory.

Error rows in Hive scan are logged in <sourceHiveTableName>_hive_scan_err_<inst_id>.
Error rows in bulk upsert are logged in <destTrafTableName>_traf_upsert_err_<inst_id>.

Bulk load can also be aborted after a certain number of error rows are seen, using the LOAD WITH LOG ERROR ROWS, STOP AFTER <n> ERROR ROWS option.

Change-Id: Ief44ebb9ff74b0cef2587705158094165fca07d3

    • -369
    • +413
    /sql/executor/ExExeUtilLoad.cpp
    • -198
    • +380
    /sql/executor/ExHdfsScan.cpp
  1. … 19 more files in changeset.
fix for 1452993

[bug 1452993] T2 doesn't read the property file from System Properties, but T4 does.

After this check-in, a user can call System.setProperty("properties", file) to give the driver a default property file, as in T4.

Change-Id: If5f669af86612a99b2eb53093d9fa8492249a000
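A minimal sketch of the behavior described above: if the "properties" system property names a file, load driver defaults from it. The `loadDefaults` helper is illustrative, not the driver's actual entry point.

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class DriverPropsLoader {
    // Hypothetical helper: reads the file named by -Dproperties=<file>
    // (or System.setProperty("properties", file)) into a Properties object.
    static Properties loadDefaults() throws IOException {
        Properties props = new Properties();
        String file = System.getProperty("properties");
        if (file != null) {
            try (FileReader r = new FileReader(file)) {
                props.load(r); // standard java.util.Properties key=value format
            }
        }
        return props; // empty if the system property was not set
    }

    public static void main(String[] args) {
        // As in the commit message: point the driver at a default property file.
        System.setProperty("properties", "/etc/trafodion/t2.properties");
        // Properties p = loadDefaults(); // defaults now come from that file
    }
}
```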

Adding separate pending txn wait for preSplit

The pending wait does not have a limit; it will wait until pending transactions have completed.

Also adding new properties to customize behavior:

hbase.transaction.split.drain.early
- If 'true', the split operation will not wait on active transactions to complete; it will still wait on pending transactions. Default is 'false'.

hbase.transaction.split.active.delay
- Sets the time in milliseconds that the preSplit observer will poll for the active transaction list to be empty. Default is 15000 (15 seconds).

hbase.transaction.split.pending.delay
- Sets the time in milliseconds that the preSplit observer will poll for the pending transaction list to be empty. Default is 500 (0.5 seconds).

Change-Id: I5d5a9cf376bfc8afed789e65f4f2b3b0e48ab6b5
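The three properties above would normally be set in hbase-site.xml; a sketch with the documented defaults (property names from the commit message, placement assumed):

```xml
<!-- hbase-site.xml: transactional split-drain settings -->
<property>
  <name>hbase.transaction.split.drain.early</name>
  <value>false</value> <!-- 'true' skips the wait on active transactions -->
</property>
<property>
  <name>hbase.transaction.split.active.delay</name>
  <value>15000</value> <!-- ms to poll for the active list to empty -->
</property>
<property>
  <name>hbase.transaction.split.pending.delay</name>
  <value>500</value> <!-- ms to poll for the pending list to empty -->
</property>
```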

Reserving default ports (37800 and 40010) for DCS

Edit 1: Previous code was overwriting all previously reserved ports.
Edit 2: Typo.
Edit 3: Typos. Added support for saving previously reserved ports.
Edit 4: Spacing.

Change-Id: I19294116da0aa062129a90a2e2ae83ca19a26237

Merge "Batch2PC Endpoint Coprocessor Phase 2:flush optimization"

remove cluster.conf.

    • -0
    • +276
    /sqf/sql/scripts/sqconfigdb.pm
    • -0
    • +22
    /sqf/src/seabed/test/gocleandb
    • -0
    • +22
    /sqf/src/seabed/test/godb
    • -0
    • +45
    /sqf/src/seabed/test/godb.pl
Batch2PC Endpoint Coprocessor Phase 2:flush optimization

Change-Id: I9db4819f2eda74710e0199b9ece5c84aea3a0fd6

Merge "change for scan"

Merge "Changes to reduce the memory leak in T2 Driver."

change for scan

SsccTransactionState.handleResult():
added an if check to decide whether data will be displayed; whitespace cleanup.

SsccRegionEndpoint.java:
whitespace cleanup.

Change-Id: I971db75c5ff4ee8fca395d221ae8575cc0272188