Trafodion

Merge remote branch 'phoenix_test/stable/1.1' into stable/1.1

Move phoenix_test into test subdir to combine repos

  … 76 more files in changeset.
Merge remote branch 'phoenix_test/stable/1.0' into stable/1.0

Move phoenix_test into test subdir to combine repos

  … 78 more files in changeset.
Merge remote branch 'phoenix_test/master'

Move install into subdir to combine repos

    • -0 +712  /install/installer/LICENSE.TXT
    • -0 +84   /install/installer/bashrc_default
    • -0 +202  /install/installer/dcs_installer
    • -0 +125  /install/installer/rest_installer
    • -0 +33   /install/installer/setup_known_hosts.exp
    • -0 +104  /install/installer/tools/ambari_setup
    • -0 +188  /install/installer/tools/clouderaMoveDB.sh
    • -0 +180  /install/installer/tools/cloudera_setup
    • -0 +135  /install/installer/tools/cloudera_uninstall
    • -0 +133  /install/installer/tools/hortonworks_uninstall
  … 52 more files in changeset.
Merge remote branch 'install/master'

Move dcs into subdir to combine repos

  … 560 more files in changeset.
Merge remote branch 'dcs/stable/1.1' into stable/1.1

Move dcs into subdir to combine repos

  … 468 more files in changeset.
Merge remote branch 'dcs/stable/1.0' into stable/1.0

Move dcs into subdir to combine repos

  … 562 more files in changeset.
Merge remote branch 'dcs/master'

Move core into subdir to combine repos

    • -573 +0  /conn/jdbc_type2/native/Benchmark.cpp
  … 10754 more files in changeset.
Move core into subdir to combine repos

    • -573 +0  /conn/jdbc_type2/native/Benchmark.cpp
    • -163 +0  /conn/jdbc_type2/native/Benchmark.h
  … 10608 more files in changeset.
Move core into subdir to combine repos

Use: git log --follow -- <file>

to view file history through renames.

    • -573 +0  /conn/jdbc_type2/native/Benchmark.cpp
  … 10823 more files in changeset.
Merge remote branch 'core/master'

Initial dummy commit

Error handling.

Merge "Fix bug 1323826 - SELECT with long IN predicate causes core file"

Update parser for errors.

Merge branch 'master' into traf/traf-config

Merge "Rework for incremental IM during bulk load"

Merge "Configuring hbase option MAX_VERSION via SQL"

Fixed logging error

traf_start was creating its own trafodion_install<timestamp>.log file.

Now only one file is being created.

Change-Id: Ica4c362224074316a027b939333f7a70d8565686

Rework for incremental IM during bulk load

Address comments by Hans and fix 1 regression failure

A regression failure in executor/test013 was caused by how external

names are used with volatile indexes. This has been fixed in GenRelExeUtil.cpp.

The parser change suggested could not be made due to increasing conflicts.

Thank you for the feedback.

Change-Id: Icdf5dbbf90673d44d5d0ccb58086266520fcf5c3

    • -28 +12  /sql/sqlcomp/CmpSeabaseDDLindex.cpp
Fix bug 1323826 - SELECT with long IN predicate causes core file

Actually, this check-in does not completely fix the problem, but

it does allow IN predicates (and NOT IN predicates) to have a list

with as many as 3100 items in the list.

NOTE: There are many places in the SQL Compiler code that use recursion.

The changes in this check-in address the issue for long IN lists

and, to some extent, for INSERT statements that attempt to insert

many rows with a single INSERT statement. However, it is still possible

for someone to try a list that is too long. As you make the lists

longer, you find more recursive routines that have the same type of

problem(s) that are being fixed for certain routines by this check-in.

This check-in also fixes a couple of minor problems in the logic used to

debug Native Expressions code. These problems were in

.../sql/generator/Generator.cpp and

.../sql/exp/ExpPCodeOptsNativeExpr.cpp

There were 3 different techniques used to reduce the stack space usage of

various recursive routines that get invoked as a result of long IN lists

or NOT IN lists:

1) Move variables from the stack to heap.

2) Recode the recursive routine to pull out sections of code (not needed

during the recursion) and put those in their own routine. This cuts

the stack space usage because it enables the C++ compiler to generate

code for the recursive routine that needs significantly less stack

space.

3) Declare variables of type ARRAY on the stack (where the ARRAY

overhead is allocated from stack, but the contents come from heap)

to hold certain pieces of data where each recursive level of calling

needs its own value for the variable AND then change the code to use a

'while' loop to process the nodes in the node tree in the same order

that the original recursive routine would have processed the nodes.
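A minimal sketch of technique 3, in Python rather than the actual C++ ItemExpr code (the Node class and visit_* names are hypothetical, not Trafodion identifiers): the per-level state moves from the call stack into a heap-allocated list, and a 'while' loop visits the nodes in the same preorder the recursive routine would have used.

```python
# Illustrative sketch only: replaces deep recursion over a node tree
# (e.g. a long IN-list chain) with an explicit heap-backed work list,
# so tree depth no longer consumes call-stack frames.

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def visit_recursive(node, out):
    # Original shape: one stack frame (plus locals) per tree level.
    out.append(node.value)
    for child in node.children:
        visit_recursive(child, out)

def visit_iterative(node, out):
    # Reworked shape: per-level state lives in a heap-allocated list,
    # and a 'while' loop drains it.
    stack = [node]
    while stack:
        n = stack.pop()
        out.append(n.value)
        # Push children in reverse so they pop in the original order.
        for child in reversed(n.children):
            stack.append(child)
```

Both traversals produce the same preorder sequence, but the iterative form survives node trees far deeper than the recursion limit.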

Files changed for reducing stack space usage:

sql/optimizer/ItemCache.cpp - use method 2 on ItemExpr::generateCacheKey()

sql/optimizer/NormItemExpr.cpp - use method 2 on ItemExpr::normalizeNode()

and method 1 on BiLogic::predicateEliminatesNullAugmentedRows()

sql/generator/GenPreCode.cpp - use method 2 on

ItemExpr::replaceVEGExpressions()

sql/optimizer/ItemExpr.cpp - use method 2 on ItemExpr::unparsed()

AND ItemExpr::synthTypeAndValueId()

sql/optimizer/OptRange.cpp - use method 3 on OptRangeSpec::buildRange()

sql/optimizer/BindItemExpr.cpp - use method 3 on

ItemExpr::convertToValueIdSet()

sql/optimizer/NormRelExpr.cpp - use method 3 on

Scan::applyAssociativityAndCommutativity()

sql/optimizer/ItemExpr.h - declare new methods that were created

sql/optimizer/ItemLog.h - declare new methods that were created

Finally, this check-in changes the default value for a CQD named

PCODE_MAX_OPT_BRANCH_CNT from 19000 to 12000. This was to fix a problem

where we used too much *heap* space when we tried to optimize a PCODE

Expression that had too many separate blocks of PCODE instructions (such

as one resulting from a very long NOT IN list). With this change, we will

choose to run with unoptimized PCODE if trying to optimize the PCODE

would result in overflowing the heap space available.

Change-Id: Ie8ddbab07de2a40095a80adac7873db8c5cb74ac

    • -10 +10    /sql/exp/ExpPCodeOptsNativeExpr.cpp
    • -107 +165  /sql/optimizer/BindItemExpr.cpp
    • -183 +245  /sql/optimizer/ItemExpr.cpp
    • -142 +203  /sql/optimizer/NormRelExpr.cpp
Remove trailing spaces from keys.

Avoid scanner timeout for Update Statistics

For performance reasons, Update Stats pushes sampling down into HBase,

using a filter that returns only randomly selected rows. When the

sampling rate is very low, as is the case when the default sampling

protocol (which includes a sample limit of a million rows) is used on

a very large table, a long time can be taken in the region server

before returning to Trafodion, with the resultant risk of an

OutOfOrderScannerNextException. To avoid these timeouts, this fix

reduces the scanner cache size (the number of rows accumulated before

returning) used by a given scan based on the sampling rate. If an

adequate return time can not be achieved in this manner without

going below the scanner cache minimum prescribed by the

HBASE_NUM_CACHE_ROWS_MIN cqd, then the scanner cache reduction is

complemented by a modification of the sampling rate used in HBase.

The sampling rate used in HBase is increased, but the overall rate

is maintained by doing supplementary sampling of the returned rows in

Trafodion. For example, if the original sampling rate is .000001,

and reducing the scanner cache to the minimum still results in an

excessive average time spent in the region server, the sampling

may be split into a .0001 rate in HBase and a .01 rate in Trafodion,

resulting in the same effective .000001 overall rate.
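The cache-reduction and rate-splitting logic above can be sketched as follows. This is an illustrative Python model, not the actual Update Stats code; the function names and the proportional cache-scaling formula are assumptions, and only the HBASE_NUM_CACHE_ROWS_MIN cqd comes from the text.

```python
# Illustrative model of the fix described above; names and the
# cache-scaling formula are assumptions, not Trafodion code.

def choose_scanner_cache(base_cache, overall_rate, base_rate, cache_min):
    """Shrink the scanner cache in proportion to how far the sampling
    rate falls below a baseline rate, but never below the minimum
    prescribed by the HBASE_NUM_CACHE_ROWS_MIN cqd."""
    scaled = int(base_cache * (overall_rate / base_rate))
    return max(cache_min, scaled)

def split_sampling_rate(overall_rate, min_hbase_rate):
    """If the HBase-side rate is still too low after the cache has
    bottomed out, raise it to min_hbase_rate and supplement-sample
    the returned rows in Trafodion so the product of the two rates
    equals the original overall rate."""
    if overall_rate >= min_hbase_rate:
        return overall_rate, 1.0  # no split needed
    return min_hbase_rate, overall_rate / min_hbase_rate
```

For example, a .000001 overall rate split at a .0001 HBase floor yields a .01 supplementary rate in Trafodion, since .0001 * .01 = .000001.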

Change-Id: Id05ab5063c2c119c21b5c6c002ba9554501bb4e1

Closes-Bug: #1391271

Configuring hbase option MAX_VERSION via SQL

Change-Id: I88041d539b24de1289c15654151f5320b67eb289