Trafodion

Make corrections to HFile path

To estimate the row count for an HBase table, the HFiles are accessed directly from the HDFS file system. When the Trafodion name of the table includes a delimited id, the quotes must not be included in the node of the HDFS path representing the qualified table name. In addition, the hbase.rootdir property in hbase-site.xml may consist solely of a path rather than a full URL. It was previously assumed that a full URL would be present, and the value of the property was used to construct a java.net.URI object. When a string consisting of only a file path is passed to the URI constructor, a NullPointerException is thrown (instead of the expected URISyntaxException), causing the path we construct to the HFile to be incorrect. The code was changed to utilize either a file path or a valid URI as the value of the property.

Change-Id: If35d9da7aaab815a9c1d550bc505d86f0cbcf611

Closes-Bug: 1384959
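
For illustration, a minimal Java sketch of the intended behavior when reading hbase.rootdir, accepting either a bare path or a full URL; the class and method names are invented for this example and are not the actual Trafodion code:

    import java.net.URI;
    import java.net.URISyntaxException;

    // Hypothetical helper: return the HDFS path portion of hbase.rootdir,
    // whether the property holds a bare path ("/hbase") or a full URL
    // ("hdfs://nn:8020/hbase"). The HFile path is then built under this root.
    public class HBaseRootDir {
        static String rootPath(String hbaseRootDir) {
            try {
                URI uri = new URI(hbaseRootDir);
                if (uri.getScheme() == null) {
                    return hbaseRootDir;   // plain path, use it as-is
                }
                return uri.getPath();      // full URL, keep only the path
            } catch (URISyntaxException e) {
                return hbaseRootDir;       // fall back to treating it as a path
            }
        }

        public static void main(String[] args) {
            System.out.println(rootPath("/hbase"));               // /hbase
            System.out.println(rootPath("hdfs://nn:8020/hbase")); // /hbase
        }
    }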

Changes to support OSS PoC.

This check-in contains multiple changes that were added to support the OSS PoC. These changes are enabled through a special CQD, mode_special_4, and are not yet externalized for general use. A separate spec contains details of these changes.

These changes have been contributed and pre-reviewed by Suresh, Jim C, Ravisha, Mike H, Selva, and Khaled. All dev regressions have been run and passed.

Change-Id: I2281c1b4ce7e7e6a251bbea3bf6dc391168f3ca3

… 129 more files in changeset.
Merge "Eliminate minor estimated nullcount inaccuracies"

fix #lp1384430: full sort order for plan1 of NJ

Change-Id: I7f3162af34d16bc5e801305c24438d098a9b09ef

    • -70, +72  /sql/optimizer/OptPhysRelExpr.cpp
DBSecurity: REVOKE ROLE, credential propagation, +

Overview

1) Corrects a CLI/Executor overwrite problem and removes workaround code in PrivMgr. Launchpad bug #1371176.

2) REVOKE ROLE now lists referencing and referenced objects when a revoke request is refused due to dependencies.

3) REVOKE ROLE now reports that the specified grant cannot be found when the grantor has not granted the role to the user. Previously the misleading error "Not Authorized" was issued, which was confusing when the user was DB__ROOT. The same change was made for REVOKE COMPONENT PRIVILEGE. A similar change will be made in the future for revoking object privileges.

4) REVOKE ROLE now considers grants to PUBLIC before concluding that a revoke would require a dependent object to be dropped.

5) User credentials are now propagated to the compiler process. Launchpad bug 1373112.

Externals

If the priv/role, grantee, grantor tuple does not exist, REVOKE ROLE / REVOKE COMPONENT PRIVILEGE now reports error 1018: Grant of role or privilege <name> from <grantor> to <grantee> not found, revoke request ignored.

When REVOKE ROLE detects a dependent object, error message 1364 now reports the referencing and the referenced object: Cannot revoke role <role-name>. Object <referencing-object> depends on privileges on object <referenced-object>.

Details for user credential propagation:

The propagate-user-credentials code has only been partially implemented. The existing code sends the user ID to the first compiler process. Other compiler processes that were started would not get the connected user ID; instead, the DB__ROOT user ID became the user by default. Therefore, privilege checks were succeeding when they should have failed.

User credentials consist of an integer user ID and a username. The existing code only passed the user ID. The compiler process would then do a metadata look-up to get the username. If we kept this model, we would get into an infinite loop: when the compiler process received the user ID, it did a metadata read to get the associated username. After reading the metadata, both the username and user ID were set in context globals. The metadata lookup code will start another arkcmp process for the compilation request. The compilation would then start a compiler process. That compiler process would start another compiler process, etc.

The solution is to send both the username and user ID to the compiler process. Both values are known at the time the compiler process is started. This alleviates the need for a database look-up when the compiler process starts. To do this, a new session attribute was created: SESSION_DATABASE_USER. This session attribute sends both the user ID and username to the compiler process during startup processing.
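
As a rough illustration of the idea (not the actual CLI code), the sketch below packs both values into a single attribute string so the compiler process can populate its context at startup without a metadata look-up; the SESSION_DATABASE_USER name comes from the text above, but the encoding and helper class are assumptions:

    // Hypothetical encoding of the SESSION_DATABASE_USER attribute value.
    public class SessionUser {
        static final String ATTR_NAME = "SESSION_DATABASE_USER";

        // Master side: combine user ID and username into one attribute value.
        static String encode(int userId, String userName) {
            return userId + "," + userName;
        }

        // Compiler-process side: recover both fields without a metadata read.
        static String[] decode(String attrValue) {
            int comma = attrValue.indexOf(',');
            return new String[] { attrValue.substring(0, comma),
                                  attrValue.substring(comma + 1) };
        }

        public static void main(String[] args) {
            String wire = encode(33333, "JSMITH");
            String[] parts = decode(wire);
            System.out.println(ATTR_NAME + " -> id=" + parts[0] + ", name=" + parts[1]);
        }
    }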

Once we were able to start a compiler process and store a user ID other than DB__ROOT in the Context globals, another similar infinite loop occurred during privilege checking. For example, a showddl command starts a compiler process when extracting privilege information. The compiler calls checkPrivileges to make sure the current user has privileges. The checkPrivileges code makes a metadata request that requires a compilation. This starts up another compiler process. This compiler process is sent the metadata request. When compiling the metadata request in the new compiler process, checkPrivileges is called, which starts a compiler process, …

This worked previously because the user passed was DB__ROOT; for DB__ROOT the code in checkPrivileges is short-circuited and the metadata call is avoided.

The fix sets the parser flag (INTERNAL_QUERY_FROM_EXEUTIL) before the metadata request is performed. This fix requires that the file "SqlParserGlobalsCmn.h" be included in additional files. Including this file needs to be done with care; in order to get everything to compile, we changed where this file was included in several places.

Once all these changes were made, the envvar DBUSER_DEBUG now works. If it is set, detailed information about how users are sent to different processes is displayed.

Change-Id: If7538eee38178c2345fe418172c6196b25a20b33

… 16 more files in changeset.
LP Bug 1380733 - Phoenix tests fail with error 73

Analysis has shown that we are generating a commit request result of COMMIT_CONFLICT in the transactional TrxRegionEndpoint coprocessor for a transaction. A COMMIT_CONFLICT result to a commit request ultimately results in an error 73 being returned by SQL to the client. In this case, a previous commit request for the same transaction, in the same region, had resulted in a successful commit analysis of COMMIT_OK. The subsequent commit request for the same transaction is unexpected.

We are continuing to analyze the transactional prepare/commit process to determine why we have this additional request. For now, we have added a workaround to the "hasConflict()" transactional processing to recognize this condition and allow the initial conflict-testing result (in this case a COMMIT_OK) to be the returned result.

Change-Id: I8d35574384a0f64eac650a875827e19031bfd453
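
A generic Java sketch of the workaround described above, not the actual TrxRegionEndpoint code: remember the first commit-analysis result per transaction in a region and return it if a duplicate commit request arrives.

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical per-region cache of the first commit-analysis result.
    public class CommitAnalysisCache {
        enum CommitStatus { COMMIT_OK, COMMIT_CONFLICT }

        private final ConcurrentHashMap<Long, CommitStatus> firstResult =
                new ConcurrentHashMap<>();

        CommitStatus analyzeCommit(long transactionId, boolean conflictDetected) {
            CommitStatus fresh = conflictDetected ? CommitStatus.COMMIT_CONFLICT
                                                  : CommitStatus.COMMIT_OK;
            // putIfAbsent keeps the first analysis; a later duplicate request
            // observes it instead of overwriting it with a spurious conflict.
            CommitStatus earlier = firstResult.putIfAbsent(transactionId, fresh);
            return earlier != null ? earlier : fresh;
        }

        public static void main(String[] args) {
            CommitAnalysisCache cache = new CommitAnalysisCache();
            System.out.println(cache.analyzeCommit(42L, false)); // COMMIT_OK
            System.out.println(cache.analyzeCommit(42L, true));  // still COMMIT_OK
        }
    }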

Change test reference from com.hp to org.trafodion

Change-Id: Idbd4d7f167f51be7d833771216e540193a788bc1

Merge "rework fix to move global variabls to optDefaults"

Merge "Fix for LP bug 1384506 - get version of metadata returns junk"

Eliminate minor estimated nullcount inaccuracies

When the relative frequency of null values is estimated via sampling while getting an estimated row count for an HBase table, there is an (unlikely) situation in which the null count could be thrown off. If the primary key consists of a single column, and some row has all null values except for the primary key, those nulls will be counted incorrectly. This was caused by comparing two successive KeyValue positions using < rather than <=.

In addition, if the end of the HFile is reached while taking the sample, any nulls at the end of the last row will not be counted; this has been fixed as well.

Closes-Bug: #1383835

Change-Id: Ia449d1379d851e8df0f7811e835b5730851c33e2
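
The actual sampling code walks HBase KeyValues, and the inclusive-comparison detail is specific to that code; the generic Java sketch below only illustrates the second part of the fix, flushing the final row when the scan ends so its missing (null) columns are still counted. All names here are invented.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical null-count estimate over sorted (rowKey, column) cells:
    // columns absent from a stored row are treated as nulls.
    public class NullCountSketch {
        static class Cell {
            final String rowKey, column;
            Cell(String rowKey, String column) { this.rowKey = rowKey; this.column = column; }
        }

        static long estimateNulls(List<Cell> sortedCells, int totalColumns) {
            long nulls = 0;
            String currentRow = null;
            int columnsSeen = 0;
            for (Cell c : sortedCells) {
                if (currentRow != null && !c.rowKey.equals(currentRow)) {
                    nulls += totalColumns - columnsSeen;  // finish the previous row
                    columnsSeen = 0;
                }
                currentRow = c.rowKey;
                columnsSeen++;
            }
            if (currentRow != null) {
                nulls += totalColumns - columnsSeen;      // flush the last row too
            }
            return nulls;
        }

        public static void main(String[] args) {
            List<Cell> cells = new ArrayList<>();
            cells.add(new Cell("r1", "a"));
            cells.add(new Cell("r1", "b"));
            cells.add(new Cell("r2", "a"));               // r2 is missing b and c
            System.out.println(estimateNulls(cells, 3));  // prints 3 (1 + 2)
        }
    }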

Fix bug 1383405 and bug 1383597

Bug 1383405: trafodion_mods now checks whether it is on a single-node cluster before creating log directories.

Bug 1383597: traf_hortonworks_uninstaller now makes sure there is something to delete before deleting the file.

Change-Id: Ic8a1761d50dbf34aad0d042a09bdd5a11a74d387

Merge "Bulk Load fixes"

rework fix to move global variables to optDefaults

Change-Id: I70303eb6c2587fe7c151e0737977d2e1802054cf

Merge "Cleanup"

Merge "Fix for UnknownTransaction on aborted txn"

Merge "Delimited col name fix, and backout of upsert fix"

Cleanup

a) sqenvcom.sh: remove support for CDH4.2

b) install_local_hadoop: remove names, incorrect comments, and the non-existent hbase-extensions JAR setup in hbase-env.sh.

Change-Id: I7574cd780524f78e8ab47d764dbe6fd1d4d9e612

    • -10, +1  /sqf/sql/scripts/install_local_hadoop
many scanner improvements

+ added a sudo access check at the very beginning of the scanner, because some of the configured checks/commands require it
===> This includes a check for requiretty being disabled, including a special error message.

+ added a check for the ntpd service

+ added a check for the iptables firewall
===> I realize that the long one-line script is hard to read! Sorry about that. I added a backlog task to change the configuration format to allow multi-line check commands, so that more complex check commands (e.g., short scripts) are easy to add and read.

+ removed usage of grep -P

+ fixed the ValidNodeName check to report results for the correct node, by checking the actual hostname -s output on each node rather than the --nodes parameter value

+ removed the ValidClusterName and HomeDirectoryNotOnNFSDisk checks because they may cause confusion; it's best to restrict these checks to the trafodion_setup script

+ removed the implicit assumption that string compares will always be done for (eq, ne) and integer compares will always be done for (lt, le, ge, gt)

+ fixed the summary output format to be less confusing

+ removed the default for the --nodes parameter; this is now a required parameter, just like in the trafodion_setup script

Change-Id: I96ca15f40e08c1a702b0c2754d1e47da3d03f96a

    • -85, +89  /installer/tools/trafodion_scanner
    • -18, +18  /installer/tools/trafodion_scanner.cfg
Fix for LP bug 1384506 - get version of metadata returns junk

Change-Id: I818165d7497b2661fed3400dce8f6e8857607dd8

Fix for UnknownTransaction on aborted txn

Our asynchronous prepare commit was not waiting for the responses if the prepare received a commit conflict or commit error. This was causing the client to send an abort to the region even if it was responding with a read-only prepare response. This caused the UnknownTransactionException, since we were aborting a region that responded read-only (and had already retired the transaction).

Change-Id: Ia452a90d862fd5a5bf7fa6c81612808573adf454
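
A hedged Java sketch of the idea behind the fix, not the actual transaction-client code: collect every outstanding prepare response even after a conflict is seen, and skip the abort for regions that voted read-only. The types and method names are invented.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;

    // Hypothetical prepare-phase driver for an asynchronous two-phase commit.
    public class PrepareCollector {
        enum Vote { COMMIT_OK, READ_ONLY, CONFLICT }

        interface RegionHandle {
            Future<Vote> prepare(long txId);
            void abort(long txId);
        }

        static boolean prepareAll(long txId, List<RegionHandle> regions)
                throws InterruptedException, ExecutionException {
            List<Future<Vote>> votes = new ArrayList<>();
            for (RegionHandle r : regions) {
                votes.add(r.prepare(txId));        // launch all prepares up front
            }
            boolean canCommit = true;
            for (Future<Vote> f : votes) {
                if (f.get() == Vote.CONFLICT) {    // wait for EVERY response
                    canCommit = false;             // but keep draining the rest
                }
            }
            if (!canCommit) {
                for (int i = 0; i < votes.size(); i++) {
                    // Read-only voters already retired the transaction; aborting
                    // them would raise UnknownTransactionException.
                    if (votes.get(i).get() != Vote.READ_ONLY) {
                        regions.get(i).abort(txId);
                    }
                }
            }
            return canCommit;
        }
    }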

Changed version of Java being downloaded

Cloudera changed the Java version that needs to be downloaded with 5.1.2.

Change-Id: I4ffe709c9f7c0eccc5dbaebdbb9a8449ace49d7b

Merge "Bug 1383491, Some problems remaining with incorrect ESP boundaries"

Delimited col name fix, and backout of upsert fix

This delivery fixes two launchpad bugs:

1383531: Create table .. like .. store by() does not take delimited column names. See CmpSeabaseDDLtable.cpp for the change.

Details: When a create table like statement is requested, the create table like code calls describe to get the description of the source table. After getting the describe text back for the source table, the create table like code adds a STORE BY clause. The code that adds the STORE BY clause was not handling delimited column names correctly.
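
The general rule the generated DDL has to follow is that a delimited column name must be re-quoted, with embedded double quotes doubled. A small Java illustration of that rule (the helper is invented; the real change is in CmpSeabaseDDLtable.cpp):

    // Hypothetical helper for emitting a column name into generated DDL text.
    public class DelimitedIdent {
        static String quoteIfNeeded(String colName) {
            // Regular identifiers need no quotes; anything else is delimited.
            if (colName.matches("[A-Z][A-Z0-9_]*")) {
                return colName;
            }
            return "\"" + colName.replace("\"", "\"\"") + "\"";
        }

        public static void main(String[] args) {
            System.out.println("STORE BY (" + quoteIfNeeded("my col\"1") + ")");
            // prints: STORE BY ("my col""1")
        }
    }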

1376835: initialize authorization failing with unique constraint error. See PrivMgrPrivileges.cpp and PrivMgrRoles.cpp for the change.

Details: A fix was previously delivered to work around this problem (change-Id: Id701d031ab9b9c2ebdc0584b01a2b5af9fc02b26), which changed the insert .. selects to upsert .. selects. After this workaround was delivered, the correct fix was released (undo disable txns for DDL, change-Id: Ib37e202b9239305bd1e38e2761b587a4316ee439).

This delivery changes the upserts back to inserts. It also fixes a problem with the insertSelect statement when inserting into the OBJECT_PRIVILEGES table, because sequence generators (SG) were not being initialized properly.

Change-Id: I296c49a446c11f2ec019c6eb7e723538cae79c27

Bulk Load fixes

- fix for bug 1383849: release the bulk load objects once the load is done.
- bulk load now uses CIF by default. This does not apply to populating indexes using bulk load.
- fix for hive/test015 so it does not fail on the test machines

Change-Id: Iaafe8de8eb60352b0d4c644e9da0d84a4068688c

    • -0, +10  /sql/sqlcomp/CmpSeabaseDDLindex.cpp
Fix LP bug 1325716 - TIME(1) default CURRENT_TIME reported wrong values

When doing CSE (Common Subexpression Elimination), if we come across a convert clause, we must interrogate the details of the conversion before deciding it is a match with another conversion we have already done. In particular, the source's precision, scale, and type must be the same as in the previous conversion and, likewise, for the target.
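
A generic Java sketch of that matching rule, not the actual PCODE optimizer: two convert operations are only common-subexpression candidates when source and target type, precision, and scale all agree. The field layout is an assumption for illustration.

    import java.util.Objects;

    // Hypothetical key describing one conversion; used to decide whether a
    // previously computed conversion can be reused.
    public class ConvertKey {
        final String srcType, tgtType;
        final int srcPrecision, srcScale, tgtPrecision, tgtScale;

        ConvertKey(String srcType, int srcPrecision, int srcScale,
                   String tgtType, int tgtPrecision, int tgtScale) {
            this.srcType = srcType; this.srcPrecision = srcPrecision; this.srcScale = srcScale;
            this.tgtType = tgtType; this.tgtPrecision = tgtPrecision; this.tgtScale = tgtScale;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof ConvertKey)) return false;
            ConvertKey k = (ConvertKey) o;
            return srcType.equals(k.srcType) && tgtType.equals(k.tgtType)
                && srcPrecision == k.srcPrecision && srcScale == k.srcScale
                && tgtPrecision == k.tgtPrecision && tgtScale == k.tgtScale;
        }

        @Override public int hashCode() {
            return Objects.hash(srcType, srcPrecision, srcScale,
                                tgtType, tgtPrecision, tgtScale);
        }

        public static void main(String[] args) {
            ConvertKey a = new ConvertKey("TIME", 0, 1, "TIMESTAMP", 0, 6);
            ConvertKey b = new ConvertKey("TIME", 0, 0, "TIMESTAMP", 0, 6);
            System.out.println(a.equals(b)); // false: different source scale
        }
    }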

Pre-reviewed by Justin.

All dev regressions were run to ensure that the fix has no side-effects.

Files changed:

sql/exp/ExpPCodeOptimizations.cpp

Change-Id: I2705cef151ef163a43e1eef31ee47ef94d164051

Merge "Eliminate excessive region refresh;remove warnings"

Merge "Fix bug 1370151 - PCODE Optimization code was looping forever"

Update TOOLSDIR references for Hadoop, HBase, Hive dependencies

Update the section of Hadoop dependencies that references TOOLSDIR locations. The purpose is to use the same build-time dependencies regardless of the distribution installed on the build machine.

The build should work without having a distro installed and without using install_local_hadoop before building. But if you have local_hadoop installed, that context still gets preference over TOOLSDIR.

The installed-distro settings (Hortonworks, Cloudera, MapR) are used for run-time dependencies.

The TOOLSDIR settings are build-time references and are only used if we are in a source tree. So if your machine has a distro installed and TOOLSDIR defined, then you'll get build-time references if sqenvcom.sh finds a Makefile, indicating we are in a source tree.

Add a check that we matched at least one supported distro (otherwise we quietly don't set up CLASSPATH and Trafodion won't start).

Update the Hortonworks check. The library paths no longer contain *HDP*, so Ambari is the only valid check. Also remove the HADOOP_1 setting, as that is no longer valid with the HDP 2.1 distro.

Change-Id: I1b86bf8c454467c6adef15e66014fd6da59b1f15

Blueprint: infra-build-hadoop-deps

Merge "SQL Memory allocation tracing and overflow detection"

Merge "Use new HFIle location for HBase 0.98"