
suresh.subbiah@hp.com in Trafodion

Fix for TRAFODION-4

    -16/+22    /core/sql/optimizer/BindRelExpr.cpp
    -12/+113   /core/sql/regress/executor/EXPECTED015.SB
    -31/+39    /core/sql/regress/hive/EXPECTED015
Fix for LP 1460771

LP 1460771 describes a data corruption problem that occurs when upserting into a table with an index. The fix suggested by Anoop is to transform the upsert statement into a merge statement. The transformation shown below is now done in the binder, where the upsert statement provided is changed internally to the corresponding merge statement.

prepare s1 from upsert into test1 values (1,1) ;

prepare s2 from merge into test1 on a = 1

when matched then update set b = 1

when not matched then insert values (1,1) ;

prepare s1 from upsert into test1 select * from test2 ;

prepare s2 from merge into test1 using (select * from test2) z(a,b) on

test1.a = z.a

when matched then update set b = z.b

when not matched then insert values (z.a,z.b) ;

prepare s1 from upsert into test1 values (1,1),(2,2) ;

prepare s2 from merge into test1 using (values (1,1),(2,2)) z(a,b) on

test1.a = z.a

when matched then update set b = z.b

when not matched then insert values (z.a,z.b) ;

The existing merge statement had a data corruption issue on the index table when a VEG is not formed between the source value and the key column on the target side. This occurs when the source is a tuplelist or something like a ValueIdUnion. This has been addressed by breaking VEG formation between old and new values when a merge is done into a table with an index.

---Patchset 2

Address review comments. This entire fix was possible due to help from Anoop,

Hans and Prashanth. Thank you.

---Patchset 3

I am sorry I forgot to address the comment about system columns. Indeed, with patchset 2 I was not able to upsert into a salted table with an index. This is now addressed as suggested.

Change-Id: I778a7c431e993c8b54dbef6b40b44b09d6cc9f8e

Rework for incremental IM during bulk load

Address comments by Hans and fix 1 regression failure

A regression failure in executor/test013 was caused by how external names are used with volatile indexes. This has been fixed in GenRelExeUtil.cpp.

The parser change suggested could not be made because it increased parser conflicts.

Thank you for the feedback.

Change-Id: Icdf5dbbf90673d44d5d0ccb58086266520fcf5c3

    -28/+12    /sql/sqlcomp/CmpSeabaseDDLindex.cpp
Changes in Patchset 2

Fixed issues found during review.

Most of the changes are related to disabling this change for unique indexes. When unique indexes are found, they alone are disabled during the load. Other indexes remain online and are handled as described below. Once the base table and regular indexes have been loaded, unique indexes are loaded from scratch using a new command "populate all unique indexes on <tab-name>". A similar command, "alter table <tab-name> disable all unique indexes", is used to disable all unique indexes on a table at the start of the load.
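A minimal sketch of how these commands fit around a bulk load, assuming a hypothetical target table T1 and a hypothetical Hive source table hive.hive.src; in an actual LOAD the disable step may be driven internally rather than issued by hand:

alter table T1 disable all unique indexes;
load into T1 select * from hive.hive.src;
populate all unique indexes on T1;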

The cqd change setting allow_incompatible_assignment is unrelated; it fixes an issue with loading timestamp types from Hive.

The odb change gets rid of minor warnings.

Thanks to all three reviewers for their helpful comments.

-----------------------------------

Adding support for incremental index maintenance during bulk load.

Previously, when bulk loading into a table with indexes, the indexes were first disabled, the base table was loaded, and then the indexes were populated from scratch one by one. This could take a long time when the table has significant data prior to the load.

Using a design by Hans, this change allows indexes to be loaded in the same query tree as the base table. The query tree looks like this:

Root
 |
 NestedJoin
 /         \
Sort        Traf_load_prep (into index1)
 |
 Exchange
 |
 NestedJoin
 /         \
Sort        Traf_load_prep (i.e. bulk insert, into base table)
 |
 Exchange
 |
 Hive scan

This design and change set allow multiple indexes to be on the same tree. Only one index is shown here for simplicity. LOAD CLEANUP and LOAD COMPLETE statements also now perform these tasks for the base table along with all enabled indexes.

This change is enabled by default. If a table has indexes, they will be incrementally maintained during bulk load.

The WITH NO POPULATE INDEX option has been removed. A new option, WITH REBUILD INDEXES, has been added. With this option we get the old behaviour of disabling all indexes before the load into the table and then populating all of them from scratch.
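A sketch of the new option, using the same hypothetical table names as above and assuming the option is given in the LOAD statement's WITH clause:

load with rebuild indexes into T1 select * from hive.hive.src;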

Change-Id: Ib5491649e753b81e573d96dfe438c2cf8481ceca

    … 21 more files in changeset.
ODB bug fixes by Maurizio

Miscellaneous fixes by Maurizio. Some of the changes are related to the

help screen.

Change-Id: Ifbff89b8d6269a00d9f1e2752813d19ef3d852a3

Fixes for a few scalar UDF bugs

LP 1426605: change in NormRelExpr.cpp. When left-linearizing a join backbone, sufficient inputs were not being provided. The change ensures that inputs from the old tree are still marked as required inputs for a node in the new tree.

LP 1420530: Error handling added to BiArith::bindNode.

LP 1420938: Error handling added to the CREATE FUNCTION statement to flag more than 32 parameters.

LP 1421438: showddl [function | procedure | table_mapping function] <name>; now works. If none of the optional tokens is specified, we will look for a table called <name>.
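For example (object names hypothetical), each of the following now resolves to the intended object type, while the last looks for a table:

showddl function f1;
showddl procedure p1;
showddl table_mapping function tmf1;
showddl t1;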

Patch Set 1

Changes to address comments by Dave.

One more fix in ExUdr.cpp. There is no LP for this bug. A missing DLL at runtime, or other LOAD errors during UDF fixup, could lead to an assertion, since we try to place an error in the UDF's up queue before there are entries in the corresponding down queue. The fix is to remove this line and let existing error handling report the error. Thanks for your help, Hans.

A couple of items that I forgot to mention before:

1) Changes in Analyzer.cpp related to printing predecessorJBBC are due to Hans.

2) Showddl code is mostly refactored from previous versions.

Change-Id: Idfde89d73c47735c4405befa6b9cdd4ae0d2e641

    -0/+10     /sql/sqlcomp/CmpSeabaseDDLroutine.cpp
Fix for bug 1329361 and minor improvements in hive scan

This is a joint checkin with Sandhya Sundaresan.

PreOpens during hive scan were not working correctly, causing the same range to be read twice. The read triggered by the preopen was not used since it had an incorrect cursor name. This is now fixed.

Reduced number of LOB threads to 2

Removed multi cursor code since it is not used

Skip reading the first range if it has 0 bytes to be read.

Change-Id: I91cff41134490435165da7d59c955c7215b3c6b8

    -229/+139  /sql/executor/ExHdfsScan.cpp
Specify compiler context when querying natable virtual table interface

select * from table(natablecache('ALL','local')) ;

select * from table(natablecacheentries('meta','local')) ;

select * from table(naroutinecache('user','local')) ;

Following the querycache virtual table interface, the natable and naroutine cache virtual table interfaces now also support specifying the name of the context we want to query. The first parameter can be 'all' or the name of a context (e.g. 'meta', 'user', 'ustat'). The second parameter can be 'local' or 'remote'. Parameters are case insensitive.

For a remote compiler we only query the context pointed to by activeschemadb. The column num_entries has been added to the natablecache virtual table. This change can be used to monitor memory growth in these caches.

Patch Set 2:

Address issues found by Dave. Changes in 4 files

sql/arkcmp/NATableSt.cpp

sql/arkcmp/NATableSt.h

sql/arkcmp/QueryCacheSt.cpp

sql/optimizer/NARoutine.cpp

Changes cover a minor leak in HQCIterator, and returning FALSE in the NATable/RoutineCacheStats iterator getNext methods.

Change-Id: Icf15c93b9ae3c3f523d0abe1580ce7280c5b0d84

    … 6 more files in changeset.
Remove code and cqds related to Thrift interface

The ExpHbaseInterface_Thrift class was removed a few months ago; this completes that cleanup work. exp/Hbase_types.{cpp,h} still remain. These are Thrift-generated files, but we use the structs/classes they generate for JNI access.

Change-Id: I7bc2ead6cc8d6025fb38f86fbdf7ed452807c445

    … 5 more files in changeset.
Change to avoid placing large scan results in RegionServer cache

By default, the result of every Scan and Get request to HBase is placed in the RegionServer cache. When a scan returns a lot of rows, this can lead to cache thrashing, causing results which are being shared by other queries to be flushed out. This change uses cardinality estimates and HBase row size estimates, along with the configured region server cache size, to determine when such thrashing may occur. The heap size for a region server is specified through the cqd HBASE_REGION_SERVER_MAX_HEAP_SIZE. The units are MB. The fraction of this heap allocated to the block cache is read from the config file once per session through a lightweight (no disk access) JNI call. The heuristic used is approximate, as it does not consider the total number of region servers, or that sometimes a scan may be concentrated in one or a few region servers. We simply do not place rows in the RS cache if the memory used by all rows in a scan would exceed the cache in a single RS. The change can be overridden with cqd HBASE_CACHE_BLOCKS 'ON'; the default is now SYSTEM. The change applies to both count(*) coproc plans and regular scans.
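A sketch of the two cqds named above, as they might be set in a session; the heap size value is illustrative only:

cqd HBASE_REGION_SERVER_MAX_HEAP_SIZE '2048';
cqd HBASE_CACHE_BLOCKS 'ON';

The second statement forces caching of scan results, overriding the heuristic; leaving it at the SYSTEM default lets the heuristic decide.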

Change-Id: I0afc8da44df981c1dffa3a77fb3efe2f806c3af1

    … 6 more files in changeset.
Enable two HBASE_OPTIONS: Compact and Durability

HBase table attributes COMPACTION_ENABLED and DURABILITY can now be set through the HBASE_OPTIONS clause of the CREATE TABLE statement.

COMPACT can be set to TRUE or FALSE.

DURABILITY can be set to 'ASYNC_WAL', 'FSYNC_WAL', 'SKIP_WAL', 'SYNC_WAL' and 'USE_DEFAULT'.

Both attributes must be used with extreme caution.
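A minimal sketch of the clause, assuming a hypothetical table T1; the attribute values shown are examples only:

create table T1 (a int not null primary key, b int)
hbase_options (compact = 'FALSE', durability = 'ASYNC_WAL');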

Change-Id: Id197d92958c609fd331816646d5917045029ecce

Reworked fix for LP bug 1404951

The scan cache size for an mdam probe is now set to the hbase default of 100. Setting it to values like 1 or 2 resulted in intermittent failures. The cqd COMP_BOOL_184 can be set to ON to get a cache size of 1 for mdam probes. The root cause of this intermittent failure will be investigated later.

Change-Id: Ic05a77ecb0deeb260784f156de251a0f0dbdf49c

    -7/+5      /sql/regress/seabase/DIFF010.KNOWN.SB.OS
Fixes for log file reader UDF

The following issues have been resolved:

1) Multi-line error messages are now supported.

2) The default event log file name of master_exec_* is now supported.

3) When an event log file name other than the default is specified in the config file, the log file reader UDF will now pick it up. The directory is still expected to be $MY_SQROOT/logs.

Patch Set 2

Address a comment issue found by Dave.

Change-Id: Ia54543bb8a6f6f8122988424761b52416192794e

    -42/+160   /sql/sqludr/PredefUdrReadfile.cpp
Fix for LP bug 1404951

This is an intermittent problem that appears on the build machine, caused by Change-Id I5b570c42712d4c38157181c3b76bf9a3ab6e2ed9. In this delivery we are increasing the number of rows fetched by the hbase scan call to two rows, up from the previous value of one row. This applies only to the scan call used to determine mdam probe keys.

Change-Id: I12f20084bf53c188000db24ed8a698b3fdc7f41b

Fix for Mdam access causing a large number of rows to be accessed.

A fix by Dave Birdsall and Anoop Sharma for a problem where the mdam probe was causing a large number of rows to be accessed. The issue was that the scan cache was being set to 10000 based on cardinality estimates, but the mdam probe needs to retrieve only one row at most. Additional changes improve debugging of the mdam predicate network.

Dummy delta change to get check tests to run again.

Change-Id: I5b570c42712d4c38157181c3b76bf9a3ab6e2ed9

Fix for bug in determining range partition boundary value

When the boundary value of a range partition function is determined by decoding the encoded value stored as the split boundary in hbase, some mistakes were being made. This problem was seen for nullable varchar columns. The symptom is a crash during compilation with an overflowing stack. The problem was that the buffer provided to the runtime code to decode the encoded value was laid out incorrectly: the null bytes were interchanged with the var length indicator bytes. Thanks to Hans for suggesting this fix.

Patch Set 2:

Changed testcase as suggested by Hans. Also including a new fix.

This fix allows salted indexes to be created when the corresponding base table has key columns of character type. The fix also allows a salted index to be created when the index is created on columns whose column number in the base table is larger than the number of columns in the index itself.

Change-Id: Id0bd2b187500d283860d0d12ecb9d8d743b429e9

    -6197/+6234  /sql/regress/seabase/EXPECTED010
    -5/+10       /sql/sqlcomp/CmpSeabaseDDLcommon.cpp
Support for SALTED index.

CREATE INDEX supports a new clause "SALT LIKE TABLE". This causes the SALT column to be the leading column in the index.

Duplicate columns in the index table are now eliminated. This is a bugfix.

Showddl and Invoke will show the SALT syntax and column, respectively, for the index. CREATE INDEX also supports the HBASE_OPTIONS clause.

A bug seen when NULLABLE partitioning columns are used is also fixed.
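A minimal sketch of the new clause, assuming a hypothetical salted base table T1 with a column B:

create index IX1 on T1(b) salt like table;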

Patch Set 2: All rework from Patch Set 1, except for NATable.cpp.

Patch Set 3: Rework in NATable.cpp. Thanks to Hans for all the help. Nullable partition key columns will now generate evenly distributed dataflow through ESPs.

Patch Set 4: Fix 3 issues found in the work done in Patch Set 3. The change is only in NATable.cpp.

Change-Id: If378ffca29ee83dd4b7928c784b8d34d76f50049

    … 11 more files in changeset.
Fix for LP #1344181 and a change to control buffer size for Hdfs scan

1) A fix to support "get functions in schema <sch>" and

"get table_mapping functions in schema <sch>"

2) A rework of the previous change to control buffer sizes used by Hdfs scan. The previous change attempted to control buffer sizes by minimizing the outputs provided by the scan. But sometimes a parent operator may ask for the scan to be partitioned or ordered on one of the removed outputs, and this would cause a plan to not be generated. So this change does not try to minimize outputs. The BottomValues of updateToSelectMap are manipulated such that, if a scan transforms a value it reads in and makes it smaller in length by more than 512 bytes (settable by comp_int_98), then only the smaller value is in the output. The penalty of not having the desired output is that an extra exchange will be added to repartition data. This change has been done for all scans, though the memory problem was initially seen for Hive scans. Please see the comment in BindRelExpr.cpp for a better description.

Change-Id: I347096157ea1456f3c95854dca2816018ab607a3

Fix to reduce buffer sizes used by Hive scan.

A previous fix I had committed for this problem was causing certain bulk load queries to not get a proper plan. Thanks to Khaled and Hans for uncovering this issue. This change set undoes the previous change and implements the same logic in the binder. This time the change is made only for the bulk loader. If it is felt to be safe, it can be extended to all inserts/upserts later.

Change-Id: I8e12a435008227aede6ba6bcd333db1dca2ba2f2

Fix for two hive scan issues

1) Hive string columns can be wider in their DDL definition than their target column when inserting from hive. With the fix, the output of the hive scan will be the narrower target column rather than the original wider hive source column.

2) An option to ignore conversion errors during hive/hdfs scan and continue to the next row. The cqd HDFS_READ_CONTINUE_ON_ERROR should be set to ON to enable this option, as sketched below.
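A sketch of enabling the option before an insert-select from Hive; the table names are hypothetical:

cqd HDFS_READ_CONTINUE_ON_ERROR 'ON';
insert into T1 select * from hive.hive.src;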

Change-Id: Id8afe497e6c043c4b7bae6d556cee76c42ab0163

Update 2 expected files to handle daily build test failures.

In my previous delivery I had added a new field to explain output,

but did not make corresponding testware changes. This delivery fixes

that mistake.

Change-Id: Ia531b45341a8998538994909b546a5f1f8997dd7

    -102/+174  /sql/regress/seabase/EXPECTED010
    -63/+70    /sql/regress/seabase/EXPECTED016
Change to automatically set the HBase client side cache size.

Previously the HBase scan cache size was set through a CQD. With this change the cache size will be set to a value between [min, max] based on the estimated number of rows accessed per scan node instance. The values of min and max can be specified through cqds. The default values are [100, 10000]. The previous default value was 100. The desired effect of this change is to improve throughput for large scans. Simple experiments are showing an improvement in the range of 2-3X.

Patch Set 2:

Set cache size for update and delete too; this will be functional when cardinality estimate code is added later.

Added stubs for estRowsAccess for Update and Delete

Changed name of cqds

Added cache size info to explain output

Modularized code by putting the logic to set cache size in a method.

Patch Set 3:

For scans, the number of rows is estimated as MAX(rowsAccessed, maxCardEst). This is to guard against cardinality underestimation, as suggested by Qifan.

Change-Id: I5fac104666bd125671c818fc71ddad666f186ad7

Fix for query cache issue for Hive selects.

Two changes:

1) Fix for bug #1293816.

2) Discontinue linking in libprotobuf.so, since it is currently unused.

For the hive query cache bug, the issue was that any change to an HDFS file in a Hive directory, or to the directory itself (add/drop a file), was not

reflected in the query cache key. So the compiler could give a plan with an

incorrect list of HDFS files to the HDFSScan operator. The fix is to add

max(fileInfo.mLastMod) to the query cache key. The max is taken over all

files for a given Hive table. The number of files for a given Hive table has

also been added to the query cache key to cover cases where a file is deleted

from a Hive directory. Both query cache and query text cache are addressed.

The mLastMod time for each file and the number of files are determined through

the libHdfs call hdfsListDirectory(), which we already make.

Linking in libprotobuf.so is causing issues on certain MapR clusters, since MapR also uses this library and sometimes the version used by MapR is different from what Trafodion uses. Since this library is not being used by the Trafodion stack right now, we will no longer link in this library in the SQL or connectivity layers. When a fix is found for the version incompatibility issue, this change will be reverted.

Patch Set 2.

Thank you Dave for catching these issues. They have been resolved in

Patch Set 2.

Change-Id: Idbe599a876fdcaf77d2bdb9fdbf4b77a3f431e46

    -1/+4      /conn/odbc/src/odbc/nsksrvrcore/Makefile
    -0/+2      /sql/nskgmake/tdm_sqlmxevents/Makefile
GET commands for SPJ and UDF

Fix for bug #1322691

These 7 GET statements are now supported.

get procedures [in schema <schema-name>];

get libraries [in schema <schema-name>];

get functions [in schema <schema-name>];

get table_mapping functions [in schema <schema-name>];

get procedures for library <library-name>;

get functions for library <library-name>;

get table_mapping functions for library <library-name>;

A problem reported by Anoop, where GET TABLES IN SCHEMA <hive-schema-name> did not work, is also fixed. The issue was that the schema name was not getting passed through when the internal statement was used to retrieve tables in a hive schema.

Change-Id: I4649a0432ed766504a95cb6da34b0c01881fc1c3

    -67/+112   /sql/regress/udr/EXPECTED107.SB
Fix for a few Hive issues

a) Improved Hive file directory parsing to include a MapR format with no hostName or portNum. The logic to parse the directory name was previously repeated in three places in the code; now we have one common place that encapsulates this parsing logic. Most files have been touched due to this refactoring.

b) Fix for Launchpad bug #1320344. Hive Delete will now raise an error.

c) Fix for Launchpad bug #1320385. Hive Update will now raise an error.

d) Hive Merge will also raise an error. A query cache problem described in #1320344 and #1320385 has not been resolved in this delivery.

Patch Set 2:

This patch set addresses the two issues that Hans found. Thank you.

Patch Set 3:

This patch set addresses the two issues that Dave found. Thank you.

Change-Id: I19764227663dbb5c2d410608864a000592ce964a

    -0/+16     /sql/comexe/ComTdbFastTransport.cpp
    -34/+43    /sql/generator/GenFastTransport.cpp
    … 4 more files in changeset.