optimizer

Merge "Fix for LP 1460771"

initial support for returning multiple versions and column timestamps

This feature is not yet externalized.

Support added to:

-- return multiple versions of rows

-- select * from table {versions N | MAX | ALL}

-- get hbase timestamp of a column

-- select hbase_timestamp(col) from t;

-- get version number of a column.

-- select hbase_version(col) from t
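
A minimal sketch of how these pieces might combine (table t and column b are hypothetical; the versions clause is instantiated from the syntax line above):

create table t (a int not null primary key, b int);
update t set b = b + 1 where a = 1;  -- creates a second cell version of b
select a, hbase_timestamp(b), hbase_version(b) from t;
select * from t {versions 2};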

Change-Id: I37921681fc606a22c19d2c0cb87a35dee5491e1e

… 34 more files in changeset.
Fix for LP 1460771

LP 1460771 describes a data corruption problem that occurs when upserting into

a table with an index. The fix suggested by Anoop is to transform the upsert

statement into a merge statement. The transformation shown below is now done

in the binder, where the upsert statement provided is changed internally

to the corresponding merge statement.

prepare s1 from upsert into test1 values (1,1) ;

prepare s2 from merge into test1 on a = 1

when matched then update set b = 1

when not matched then insert values (1,1) ;

prepare s1 from upsert into test1 select * from test2 ;

prepare s2 from merge into test1 using (select * from test2) z(a,b) on

test1.a = z.a

when matched then update set b = z.b

when not matched then insert values (z.a,z.b) ;

prepare s1 from upsert into test1 values (1,1),(2,2) ;

prepare s2 from merge into test1 using (values (1,1),(2,2)) z(a,b) on

test1.a = z.a

when matched then update set b = z.b

when not matched then insert values (z.a,z.b) ;

The existing merge statement had a data corruption issue on the index table when a
VEG is not formed between the source value and the key column on the target side.
This occurs when the source is a tuple list or something like a ValueIdUnion. This
has been addressed by breaking VEG formation between old and new values when
a merge is done into a table with an index.

--Patchset 2

Address review comments. This entire fix was possible due to help from Anoop,

Hans and Prashanth. Thank you.

--Patchset 3

I am sorry I forgot to address the comment about system columns. Indeed with

patchset 2 I was not able to upsert into a salted table with an index. This is now

addressed as suggested.

Change-Id: I778a7c431e993c8b54dbef6b40b44b09d6cc9f8e

… 2 more files in changeset.
Fixes from review to sqvers

commit 04f3812f112a5629a563f02d7e72c5fa503c6a8d

Author: Sandhya Sundaresan <sandhya.sundaresan@hp.com>

Date: Sun Jun 14 04:23:21 2015 +0000

Preliminary checkin of LOB support for external files. Inserts from http
files, hdfs files, and LOB local files are supported. Added support for the
new extract syntax; extract from LOB columns to hdfs files has been added.
More work is needed to support binary files and very large files; the current
limit is 1 GB. Also fixed some error handling issues, and fixed some substring
warning issues in the lobtostring/stringtolob functions.
Added references and interfaces to the curl library, which is needed to read
external http files.
More work is needed before this support can be used.

Change-Id: Ieacaa3e4b7fa2a040764888c90ef0b029f107a8b

Change-Id: Ife3caf13041f8106a999d06808b69e5b6a348a6b

… 26 more files in changeset.
Migrate from log4cpp to log4cxx

This change is a wholesale removal of log4cpp from the source tree.
log4cxx is an external library installed via RPM, or built by the user,
into the default /usr/lib64 and /usr/include directories. Some of the

QRLogger and CommonLogger code was changed to use the new log4cxx

APIs.

Change-Id: I248bac0a8ffbfea6cbc1ba847867b30638892eae

… 206 more files in changeset.
Code change for ESP colocation strategy described in LP 1464306

Change-Id: I838bc57541bf753e3bad90ae7776e870958c930a

… 8 more files in changeset.
Squashed commit of the following:

commit 94e16c214c20c3ab74bf8bf7cef5dedc91285488

Author: qchen <qifan.chen@hp.com>

Date: Fri Jun 5 16:55:51 2015 +0000

port AS, turn off AS for now

commit 1311090436706506c9ce4e8d455c425d896812c8

Author: qchen <qifan.chen@hp.com>

Date: Fri Jun 5 16:54:55 2015 +0000

port AS, rework

commit 7377923f5ef036adcb07e50ec92aca2625ed3fe0

Author: qchen <qifan.chen@hp.com>

Date: Thu Jun 4 01:49:17 2015 +0000

AS porting

Change-Id: I6ba4349409cbee8d344cba97802febf9fec5c1ea

… 3 more files in changeset.
Costing and statistics compiler interfaces for UDFs

blueprint cmp-tmudf-compile-time-interface

bug 1433192

This change adds compiler interfaces for UDFs that give information

about statistics of the result table and also a cost estimate. It also

has more code for the upcoming Java UDF feature, retrieving updated

invocation infos and returning them back to the executor/compiler C++

code.

Description of the changes in more detail:

- Addressed remaining review comments from my last checkin,

https://review.trafodion.org/1655

- Make sure that user-generated exceptions during deallocation of

a routine are reported. These happen in the destructor of the

object derived from tmudr::UDR. For Java, we may need a deallocate

method.

- Java and JNI code to serialize the updated UDRInvocationInfo and

UDRPlanInfo object after calling the user code and return them back

through the JNI interface to the calling C++ code.

- The cost method source files had some inline methods defined in

the .cpp file and used an include file that included other .cpp

files. Make didn't pick up changes made in these files. Removed

this code and changed it to regular methods and inlines.

- Replaced some Context * parameters in costing with PlanWorkSpace *,

to be able to get to UDF-related info that's stored in a special

PlanWorkSpace.

- Changed the behavior of isBigMemoryOperator() for TMUDFs. If the

UDF writer specifies the DoP for the UDF invocation, then consider

it a BMO.

- If possible, synthesize the HASH2 partitioning function of a TMUDF's

child as the partitioning function of the UDF. This can be done if

the partitioning key gets passed through the UDF (see the sketch after this list).

- Statistics interface for TMUDFs:

- TMUDF now populates statistics field in the UDRInvocationInfo

object and calls the describeStatistics() method.

- Added an estimated # of partitions for partitioned input tables

of TMUDFs. Also changed row count methods to "estimated" row count.

- Added code to incorporate the information on row count and UEC

provided by the UDF writer into statistics of the TMUDF. This code

is not well suited to serve as the default implementation

of describeStatistics(). Therefore, the default implementation of

describeStatistics() does nothing, but the compiler applies some

heuristics in case the UDF writer provides no statistics.

- Changed cost method for TMUDFs to incorporate an estimated cost

per row from the UDF writer. There is no special compiler interface

call to ask for the cost, it can be set from the

describeDesiredDegreeOfParallelism() call and, once supported, from

the describePlanProperties() call. Note that we don't have immediate

plans to support describePlanProperties(), that might come after 2.0.
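
As a hedged illustration of the partitioning pass-through above (table, column, and UDF names are hypothetical; only the TABLE ... PARTITION BY invocation form is taken as given):

select * from UDF(my_tmudf(TABLE(select * from sales partition by store_id)));

If my_tmudf passes store_id through to its output unchanged, the HASH2 partitioning function of the child query can be synthesized as the partitioning function of the UDF result.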

Patch Set 3: Addressed Dave's review comments.

Patch Set 4: Fixed misplaced copyright in expected file.

Change-Id: Ia9ae076b7ae1fc2968c3d253d6d2d0e1d9a2ea40

… 31 more files in changeset.
various fixes and enhancements, details below.

-- improved DDL performance by not invalidating internal create/alter

operations.

-- added an optimization during CREATE INDEX to not go through

'upsert using load' processing if source table is empty.

-- added support for ISO datetime format (2015-06-01T07:35:20Z)

-- added support for RESET option to ALTER SEQUENCE and IDENTITY.

This will reset generated seq num to the START VALUE.

-- added support for cqd TRAF_STRING_AUTO_TRUNCATE.

If set, strings will be automatically truncated during insert/update (see the sketch after this list).

-- fixed sqlci to pass in correct varchar param len indicator (2 or 4 bytes).

-- changed sizeof(short) to correct vcindlen (2 or 4 bytes)

-- removed some NA_SHADOWCALLS defines
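
A minimal sketch of the auto-truncate behavior (table t1 and its data are hypothetical):

cqd TRAF_STRING_AUTO_TRUNCATE 'ON';
create table t1 (c char(3));
insert into t1 values ('abcdef');  -- stored as 'abc' instead of raising an overflow error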

Change-Id: Ie6715435d9c210ae6c2db4ff6bc0545c1b196979

… 38 more files in changeset.
Merge "Column-level privileges - part 2"

Move core into subdir to combine repos

… 10754 more files in changeset.
Move core into subdir to combine repos

… 10608 more files in changeset.
Move core into subdir to combine repos

Use: git log --follow -- <file>

to view file history thru renames.

… 10823 more files in changeset.
Rework for incremental IM during bulk load

Address comments by Hans and fix 1 regression failure

A regression failure in executor/test013 was caused by how external
names are used with volatile indexes. This has been fixed in GenRelExeUtil.cpp.

The suggested parser change could not be made due to increasing conflicts.

Thank you for the feedback.

Change-Id: Icdf5dbbf90673d44d5d0ccb58086266520fcf5c3

… 5 more files in changeset.
Fix bug 1323826 - SELECT with long IN predicate causes core file

Actually, this check-in does not completely fix the problem, but

it does allow IN predicates (and NOT IN predicates) to have
as many as 3100 items in the list.
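
For illustration (table t and column c are hypothetical), this is the kind of statement the fix enables:

select count(*) from t
where c not in (1, 2, 3, 4, 5 /* ... up to roughly 3100 literals */);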

NOTE: There are many places in the SQL Compiler code that use recursion.

The changes in this check-in address the issue for long IN lists

and, to some extent, for INSERT statements that attempt to insert

many rows with a single INSERT statement. However, it is still possible

for someone to try a list that is too long. As you make the lists

longer, you find more recursive routines that have the same type of

problem(s) that are being fixed for certain routines by this check-in.

This check-in also fixes a couple of minor problems in the logic used to

debug Native Expressions code. These problems were in

.../sql/generator/Generator.cpp and

.../sql/exp/ExpPCodeOptsNativeExpr.cpp

There were 3 different techniques used to reduce the stack space usage of

various recursive routines that get invoked as a result of long IN lists

or NOT IN lists:

1) Move variables from the stack to heap.

2) Recode the recursive routine to pull out sections of code (not needed

during the recursion) and put those in their own routine. This cuts

the stack space usage because it enables the C++ compiler to generate

code for the recursive routine that needs significantly less stack

space.

3) Declare variables of type ARRAY on the stack (where the ARRAY

overhead is allocated from stack, but the contents come from heap)

to hold certain pieces of data where each recursive level of calling

needs its own value for the variable AND then change the code to use a

'while' loop to process the nodes in the node tree in the same order

that the original recursive routine would have processed the nodes.

Files changed for reducing stack space usage:

sql/optimizer/ItemCache.cpp - use method 2 on ItemExpr::generateCacheKey()

sql/optimizer/NormItemExpr.cpp - use method 2 on ItemExpr::normalizeNode()

and method 1 on BiLogic::predicateEliminatesNullAugmentedRows()

sql/generator/GenPreCode.cpp - use method 2 on

ItemExpr::replaceVEGExpressions()

sql/optimizer/ItemExpr.cpp - use method 2 on ItemExpr::unparsed()

AND ItemExpr::synthTypeAndValueId()

sql/optimizer/OptRange.cpp - use method 3 on OptRangeSpec::buildRange()

sql/optimizer/BindItemExpr.cpp - use method 3 on

ItemExpr::convertToValueIdSet()

sql/optimizer/NormRelExpr.cpp - use method 3 on

Scan::applyAssociativityAndCommutativity()

sql/optimizer/ItemExpr.h - declare new methods that were created

sql/optimizer/ItemLog.h - declare new methods that were created

Finally, this check-in changes the default value for a CQD named

PCODE_MAX_OPT_BRANCH_CNT from 19000 to 12000. This was to fix a problem

where we used too much *heap* space when we tried to optimize a PCODE

Expression that had too many separate blocks of PCODE instructions (such

as results from a very long NOT IN list.) With this change, we will

choose to run with unoptimized PCODE if trying to optimize the PCODE

would result in overflowing the heap space available.

Change-Id: Ie8ddbab07de2a40095a80adac7873db8c5cb74ac

… 4 more files in changeset.
Avoid scanner timeout for Update Statistics

For performance reasons, Update Stats pushes sampling down into HBase,

using a filter that returns only randomly selected rows. When the

sampling rate is very low, as is the case when the default sampling

protocol (which includes a sample limit of a million rows) is used on

a very large table, a long time can be taken in the region server

before returning to Trafodion, with the resultant risk of an

OutOfOrderScannerNextException. To avoid these timeouts, this fix

reduces the scanner cache size (the number of rows accumulated before

returning) used by a given scan based on the sampling rate. If an

adequate return time cannot be achieved in this manner without

going below the scanner cache minimum prescribed by the

HBASE_NUM_CACHE_ROWS_MIN cqd, then the scanner cache reduction is

complemented by a modification of the sampling rate used in HBase.

The sampling rate used in HBase is increased, but the overall rate

is maintained by doing supplementary sampling of the returned rows in

Trafodion. For example, if the original sampling rate is .000001,

and reducing the scanner cache to the minimum still results in an

excessive average time spent in the region server, the sampling

may be split into a .00001 rate in HBase and a .1 rate in Trafodion,

resulting in the same effective .000001 overall rate.
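
A minimal sketch of a statement this fix protects, with an optional override of the cache-row floor (table big_t and the cqd value are hypothetical):

cqd HBASE_NUM_CACHE_ROWS_MIN '500';
update statistics for table big_t on every column sample;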

Change-Id: Id05ab5063c2c119c21b5c6c002ba9554501bb4e1

Closes-Bug: #1391271

… 6 more files in changeset.
Configuring hbase option MAX_VERSION via SQL
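
A hedged sketch of what this enables (table t is hypothetical, and the option spelling follows the HBASE_OPTIONS clause used elsewhere in Trafodion DDL):

create table t (a int not null primary key, b int)
hbase_options (max_versions = '3');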

Change-Id: I88041d539b24de1289c15654151f5320b67eb289

… 10 more files in changeset.
Column-level privileges - part 2

Support for column-level privileges will be in multiple deliveries.

This delivery adds the following portions, illustrated in the sketch after this list:

1. DML operations (SELECT, INSERT, UPDATE) now recognize granted

column-level privileges.

2. CREATE VIEW now recognizes granted column-level privileges.

3. Revoke of object-level privileges now revokes the corresponding

column-level privilege.
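
As a sketch of portion 1 (table emp, its columns, and user1 are hypothetical):

grant select (name, salary) on table emp to user1;
-- user1 can now read only the granted columns:
select name, salary from emp;  -- succeeds
select * from emp;             -- still fails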

Missing functionality:

1. Privileges can be granted to roles and revoked from roles, but
REVOKE ROLE does not consider column-level privileges when determining
if an object depends on a role's granted privileges.

2. Column-level revoke does not enforce RESTRICT, i.e., privileges

may be revoked even if there are dependent privileges.

3. ALTER TABLE DROP COLUMN does not remove associated column-level

privileges, nor does it check for dependent objects.

Change-Id: Ieba04c77edb945dfeb1994e9949b54072289465e

… 9 more files in changeset.
Changes in Patchset2

Fixed issues found during review.

Most of the changes are related to disabling this change for unique indexes.
When unique indexes are found, they alone are disabled during the load.

Other indexes are online and are handled as described below. Once the base

table and regular indexes have been loaded, unique indexes are loaded from

scratch using a new command "populate all unique indexes on <tab-name>".

A similar command "alter table <tab-name> disable all unique indexes"

is used to disable all unique indexes on a table at the start of load.
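
A hedged sketch of that command sequence (table t1 and hive source hive.hive.src are hypothetical):

alter table t1 disable all unique indexes;
load into t1 select * from hive.hive.src;
populate all unique indexes on t1;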

The cqd change setting allow_incompatible_assignment is unrelated; it fixes an
issue related to loading timestamp types from hive.

Odb change gets rid of minor warnings.

Thanks to all three reviewers for their helpful comments.

-----------------------------------

Adding support for incremental index maintenance during bulk load.

Previously, when bulk loading into a table with indexes, the indexes were first
disabled, the base table was loaded, and then the indexes were populated from
scratch one by one. This could take a long time when the table has significant

data prior to the load.

Using a design by Hans, this change allows indexes to be loaded in the same
query tree as the base table. The query tree looks like this:

           Root
             |
        NestedJoin
        /        \
     Sort      Traf_load_prep (into index1)
       |
    Exchange
       |
    NestedJoin
    /        \
 Sort      Traf_load_prep (i.e. bulk insert) (into base table)
   |
Exchange
   |
Hive scan

This design and change set allows multiple indexes to be on the same tree.

Only one index is shown here for simplicity. LOAD CLEANUP and LOAD COMPLETE

statements also now perform these tasks for the base table along with all

enabled indexes.

This change is enabled by default. If a table has indexes, they will be
incrementally maintained during bulk load.

The WITH NO POPULATE INDEX option has been removed.
A new option, WITH REBUILD INDEXES, has been added. With this option we get
the old behaviour of disabling all indexes before the load into the table and
then populating all of them from scratch.

Change-Id: Ib5491649e753b81e573d96dfe438c2cf8481ceca

… 29 more files in changeset.
Merge "Enabling Bulk load and Hive Scan error logging/skip feature"

… 6 more files in changeset.
Merge "various lp and other fixes, details below."

… 10 more files in changeset.
Part 1 of ALTER TABLE/INDEX ALTER HBASE_OPTIONS support

blueprint alter-table-hbase-options

This set of changes includes the following:

1. Syntax for ALTER TABLE/INDEX ALTER HBASE_OPTIONS

2. Compiler binding and generation logic

3. Run-time logic to update the metadata TEXT table with the

new options.

Still to come are:

1. Logic to actually change the HBase object

2. Transactional logic to actually change the HBase object

The functionality in this set of changes is only marginally

useful. If you manually change hbase options on a Trafodion

object using the hbase shell, you could use this ALTER

TABLE/INDEX command to update the Trafodion metadata. (Of

course some care would have to be taken to use the same

options!).
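
A hedged sketch of the new syntax (table name and option value are hypothetical):

alter table t1 alter hbase_options (max_versions = '3');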

Change-Id: Id0a5513fe80853c06acdbbf6cc9fd50492fd07b2

… 17 more files in changeset.
various lp and other fixes, details below.

-- added support for self-referencing constraints (see the sketch after this list)

-- limit clause can now be specified as a param

(select * from t limit ?)

-- lp 1448261. alter table add identity col is not allowed and now

returns an error

-- error is returned if a specified constraint in an alter/create statement

exists on any table

-- lp 1447343. cannot have more than one identity column.

-- embedded compiler is now used to get priv info during invoke/showddl.

-- auth info is not reread if already initialized

-- sequence value function is now cacheable

-- lp 1448257. inserts in volatile table with identity column now work

-- lp 1447346. inserts with identity col default now work if inserted

in a salted table.

-- only one compiler is now needed to process ddl operations with or

without authorization enabled

-- query cache in embedded compiler is now cleared if user id changes

-- pre-created default schema 'SEABASE' can no longer be dropped

-- default schema 'SCH' is automatically created if running regressions

and it doesn't exist.

-- improvements in regressions run.

-- regressions run no longer call a script from another sqlci session

to init auth, create default schema

and insert into defaults table before every regr script

-- switched the order of regression runs

-- updates from review comments.
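
A minimal sketch of a self-referencing constraint (table emp is hypothetical):

create table emp
( empno int not null primary key
, mgrno int
, constraint emp_mgr_fk foreign key (mgrno) references emp(empno)
);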

Change-Id: Ifb96d9c45b7ef60c67aedbeefd40889fb902a131

… 60 more files in changeset.
Enabling Bulk load and Hive Scan error logging/skip feature

Also fixed the hanging issue with Hive scan (ExHdfsScan operator) when there

is an error in data conversion.

ExHbaseAccessBulkLoadPrepSQTcb was not releasing all the resources when there

is an error or when the last buffer had some rows.

The error logging/skip feature can be enabled in
hive scan using CQDs and in bulk load using command-line options.

For Hive Scan

CQD TRAF_LOAD_CONTINUE_ON_ERROR 'ON' to skip errors

CQD TRAF_LOAD_LOG_ERROR_ROWS 'ON' to log the error rows in hdfs files.

For Bulk load

LOAD WITH CONTINUE ON ERROR [TO <location>] - to skip error rows

LOAD WITH LOG ERROR ROWS - to log the error rows in hdfs files.

The default parent error logging directory in hdfs is /bulkload/logs. The error

rows are logged in subdirectory ERR_<date>_<time>. A separate hdfs file is

created for every process/operator involved in the bulk load in this directory.

Error rows in hive scan are logged in

<sourceHiveTableName>_hive_scan_err_<inst_id>

Error rows in bulk upsert are logged in

<destTrafTableName>_traf_upsert_err_<inst_id>

Bulk load can also be aborted after a certain number of error rows are seen,
using the LOAD WITH LOG ERROR ROWS, STOP AFTER <n> ERROR ROWS option.

Change-Id: Ief44ebb9ff74b0cef2587705158094165fca07d3

… 32 more files in changeset.
Using the language manager for UDF compiler interface

blueprint cmp-tmudf-compile-time-interface

This change includes new CLI calls, to be used in the compiler to

invoke routines. Right now, only trusted routines are supported,

executed in the same process as the caller, but in the future we may

extend this to isolated routines. Using a CLI call allows us to share

the language manager between compiler and executor, since language

manager resources such as the JVM and loaded DLLs exist only once per

process. This change is in preparation for Java UDFs.

Changes in a bit more detail:

- Added 4 new CLI calls to allocate a routine, invoke it, retrieve

updated invocation and plan infos and deallocate (put) the routine.

The CLI globals now have a C/C++ and a Java language manager that

is allocated on demand.

- The compiler no longer loads a DLL for the UDF compiler interface,

it uses the new CLI calls instead.

- DDL syntax is changed to allow TMUDFs in Java (not officially
supported, so don't use it quite yet; a DDL sketch follows this list).

- TMUDFs in C are no longer supported, only C++ and Java are.

Converted remaining TMUDF tests to C++.

- C++ TMUDFs now do a basic verification at DDL time, so errors

like missing entry points are detected earlier. Validation for

Java TMUDFs is also done through the CLI.

- Make sure we have no memory or resource leaks:

- CmpContext keeps track of UDF-related objects allocated on

system heap and in the CLI, cleaned up at the end of a statement

- CLI keeps a list of allocated trusted routines, cleaned up

when a CLI context is deallocated

- Using ExeCliInterface class to make the new CLI calls (4 new calls

added).

- Removed CmpCli class in the optimizer directory and converted

tracking compiler to use ExeCliInterface as well.

- Compile-time parameter values are no longer baked into the

UDRInvocationInfo. Instead, they are provided as an input row, the

same way as they are provided at runtime.

- Bug fixes in C++ UDR code, mostly related to serialization and

to multiple interactions with the UDF through serialized objects.

- Added more info to UDRInvocationInfo (SQL access type, etc.).

- Since there are multiple plans per invocation, each of which

can have multiple interactions with the UDF, plans need to be

numbered so the UDF side can tell them apart to attach the

right state (owned by the UDF) to it.

- The language manager needs some functions that are provided by

the process it's running in. Added those (empty, for now) functions

as cli/CliImplLmExtFunc.cpp.

- Added a new class for Java TMUDFs, LmRoutineJavaObj. Added methods

to allocate such routines and to load their class as well as to

create Java objects by invoking the default constructor through JNI.

- Java TMUDFs use the new UDR interface (to be provided by Suresh and

Pavani). In the language manager, the container is the class of

the UDF, the external path is the fully qualified jar name. The

Java method name is <init>, the default constructor, with signature

"()V". Some code changes were required to do this.

- Created a new directory trafodion/core/sql/src for Java sources in

the sql engine. Right now, only language manager java

sources are in this directory, but I am planning to move the other

java sources under sql in a future checkin. Suresh and Pavani

will add their UDF-related Java files there as well.

- Renamed the udr jar to trafodion-sql-<version>.jar, in anticipation

of combining all the sql Java sources into this jar.

- Created a maven project file trafodion/core/sql/pom.xml and

changed makefiles to invoke maven to build java sources.

- More work to separate new UDR interface from older SPInfo object,

so that we can get rid of SPInfo if/when we don't support the older

style anymore.

- Small fix to odb makefile, make clean failed when executed twice.
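
A hedged DDL sketch for the C++ case described above (library path, library name, function name, and entry point are hypothetical):

create library myudflib file '/usr/local/udfs/libmyudfs.so';

create table_mapping function my_tmudf()
external name 'MY_TMUDF'  -- factory entry point in the DLL
language cpp
library myudflib;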

Patch set 2: Adding a custom filter for test regress/udr/TEST108.

Change-Id: Ic827a42ac25505fb1ee451b79636c0f9349d8841

… 87 more files in changeset.
get indexLevel and blockSize from HBase metadata to use in costing code.

Change-Id: I7b30364ec83a763d3391ddc39e12adec2ca1bd00

… 7 more files in changeset.
Merge "Remove some dead code"

Remove some dead code

Remove dead code concerned with constraint and schema labels.

This is an anachronism from pre-open-source versions of the code.

Most of the code removed is in the compiler, with a small amount

of cli and executor code removed.

Change-Id: Ic8a833bb15d1ca9a0e2e2683f2d4644b44c4f96b

  1. … 9 more files in changeset.
Security fixes for 1445583, 1414125, and 1393529

1445583: showstats command performance slow with security enabled

Several changes were made to improve performance:

Performance optimization:

NATable.cpp: NATable::setupPrivs

- If the current user is the object owner, then default the privilege bitmap

to object Owner values - no need to call PrivMgr to get privileges

Caching optimization:

We are now caching privmgr metadata tables in compiler cache when the compiler

context is instantiated. This avoids a metadata lookup for these tables.

- Added new methods that return whether the table is part of the PrivMgr schema

- Adjusted CmpSeabaseDDL::createMDdescs to include privmgr metadata in the

cached entries

- Adjusted CmpSeabaseDDL::getMDtableInfo to check for privmgr metadata tables

from the cached entries

- Removed obsolete code CmpSeabaseDDL::alterSeabaseDropColumn

- changed CmpSeabaseDDL::getSeabaseTableDesc to check for both system and

privmgr metadata from compiler cache

- added new method CmpSeabaseDDL::getPKeyInfoForTable that returns the

primary key name and UID for a table. This is needed when dropping privmgr

metadata tables

Removed extraneous recompilations of HISTOGRAM structures:

Today, update statistics and showstats are reloading NATable entries

for HISTOGRAM tables on every access. This is because the parserflag

ALLOW_SPECIALTABLETYPE is turned on. When this flag is turned on, the compiler

always reloads the cache entries - see code from CmpMain::sqlcomp:

//if using special tables e.g. using index as base table
//select * from table (index_table T018ibc);
//then refresh metadata cache
if (Get_SqlParser_Flags(ALLOW_SPECIALTABLETYPE) &&
    CmpCommon::context()->schemaDB_->getNATableDB()->cachingMetaData())
  CmpCommon::context()->schemaDB_->getNATableDB()->refreshCacheInThisStatement();

Changed code to not set ALLOW_SPECIALTABLETYPE and ALLOW_PHONYCHARACTERS

parserflags by default. Individual statements are setting these flags as needed.

1414125: User without priv can view data in metadata tables

Despite the bug title, the actual problem was that a user with privileges could
not view data in metadata tables: even when a user had the SELECT privilege on a
system or privmgr metadata table, the request failed.

The cause was that parameter 2 sent to CmpDescribeIsAuthorized in

hs_globals.cpp is NULL so SELECT priv is not checked. If the user has SHOW

component privilege, it works. A call was added to getPrivileges for metadata

tables before calling CmpDescribeIsAuthorized.

1393529: Core dump accessing MD table descriptors

When "UPDATE STATISTICS LOG [ON, OFF, CLEAR]" is specified by a non DB__ROOT

user, a core dump occurred. This happens because the isAuthorized check is

performed expecting a NATable structure. This command does not need any

special security checks.

Updated traf_authentication_setup script to support a new installation option

Change-Id: If7dbf3ec66e5beb7d88bda61ef32611401dd97b9

… 8 more files in changeset.
Fix for bug 1446043

SPJs can contain duplicate column names coming from different tables;
these are resolved later by renaming the columns, so there is no
need to check for duplicates at the beginning of the bind node for SPJs.

Change-Id: I28146c698bae7622e27b326ab0411e9a3ef56c2c

… 2 more files in changeset.