Trafodion

Fix for when REST server is not present in the build

If installing a 1.0 Trafodion build, the REST server is not present. This fix adds logic to check whether the REST server is in the Trafodion build and handle it appropriately by not trying to install it.

Change-Id: Ied4fb00403422e72926842f274f275675990ec5c

Adding logic to the core Trafodion build to check for copyrights.

The core Makefile now includes a check-copyrights step, which checks whether any changed files in your workspace need copyright updates. If so, the build fails, and the script tells you which files to update.

If you prefer, you can configure the script to update the copyrights for you automatically instead of failing your build. To do that, export the environment variable UPDATE_COPYRIGHTS=YES before running make.

Note that new files are only checked if you have previously run a "git add" command for them. Otherwise, only changed existing files are checked.

Change-Id: Ia643be91bea34832fddcd5a9467959f0726d71d9

    • -0
    • +234
    /updateCopyrightCheck.py
Change default review branch for stable/1.1

Change-Id: Ia83ac3e88b185e2b3e9193d4432b587e2c293c01

Merge "Fix for bug 1437102"

Merge "Fix for bug 1438775"

Closes-Bug: 1438340

Fixes 'sqnodeipcrm' errors reported during sqstart.

Change-Id: Ie3172fea38382c1e643e5a6f14e8ec7233cb3c3a

Fix for bug 1437102

Recently we turned off fast-path IPC processing for T2 to fix another bug. As a result, we are now executing non-fast-path code that had not been used for a long time, even in SQ. Because some changes were made to the IPC layer, changes to this path were missed. So any ESP plan in T2 hangs in the non-fast-path code, because we are using BAWAITIOX instead of the thread-specific BAWAITIOXTS. This was not caught because none of the T2 tests have any ESP-plan queries.

Change-Id: Ie1e59d108fe6b49656407885bbabca2d1ebc60f6

Fix for bug 1438775

The fetch buffer size calculation did not account for the varchar indicator length for columns greater than 32K. The indicator length in this case is 4 bytes instead of 2. The buffer length calculation was using 2, which resulted in allocating insufficient memory, leading to corruption.

Fixes bug 1438775

Change-Id: Ib16b6644ca3c7f36d96687a33ea36ad4f0ffe903

Fix for bug 1443688

Explain plan is now collected only for non-unique query types and queries that generate stats.

Also fixed a bug where the explain plan was being collected even though the statistics feature is disabled.

Fixes bug 1443688

Change-Id: I67433083758044e1da0071241e00c4a09e701dbd

    • -0
    • +11
    /conn/odbc/src/odbc/nsksrvr/SrvrConnect.cpp
Merge "Support for TM not running state - output for REST/HPDSM"

Merge "Fix LP Bug 1382686 - LIKE predicate fails on some UTF8 columns"

Merge "[bug:1440941]T2 Phoenix tests creating cores at CliStatement::doOltExecute"

    • -0
    • +6
    /conn/jdbc_type2/native/SqlInterface.cpp
Support for TM not running state - output for REST/HPDSM

Added a new state to dtmci to handle all error situations in which a response is not received from the TM. These errors are now handled as a NOT RUNNING state for the TM. The display in JSON format has been corrected.

Change-Id: Id19c73a47ad4ae49dac231962e1d3d5e8ae96dc0

Merge "Fixes in T2 driver to enable OE performance run"

Fix LP Bug 1382686 - LIKE predicate fails on some UTF8 columns

The description of this LP bug says that the LIKE predicate fails when the pattern string starts with a '%' and when selecting from a view, but not when selecting from the underlying table. However, in the supplied reproducible test case, the view's column had a character set of UTF8 while the underlying table had a character set of UCS2.

As it turns out, the real problem is not related to selecting from a view. The root cause is a bug in the PCODE implementation for the LIKE predicate, and this problem can occur any time the LIKE predicate is applied to a column declared as VARCHAR(nnn) CHARACTER SET UTF8 where 128 <= nnn <= 255... and the problem may not be limited to situations where the LIKE pattern starts with a '%'. When nnn > 255, the PCODE implementation is not used, so the problem does not occur there.

The root cause is a place in the code where the length of the column (in characters, not bytes) is stored in a single byte and is retrieved as if that byte contains a *signed* value. When 128 <= nnn <= 255, that retrieval results in a negative value. The fix is to retrieve the value as an *unsigned* value.

NOTE: This commit changes 2 lines of code. The second changed line is the only one necessary to fix the problem. The first line is changed as well because it has a similar problem and would prevent the LIKE predicate from working properly if the LIKE pattern had more than 127 pattern parts.

Change-Id: Ideb063cbd62b9155e9b1f579bcd0edb187e8a1c8

Merge "Updated known diff file for compGeneral/TEST006"

[bug:1440941]T2 Phoenix tests creating cores at CliStatement::doOltExecute

Change-Id: I31d42e96bf15ebd58ca9bd2f9a2532030d0f0e02

    • -0
    • +6
    /conn/jdbc_type2/native/SqlInterface.cpp
Merge "Updated DCS documentation"

Updated known diff file for compGeneral/TEST006

Temporarily allow failures when propagating the MODE_SPECIAL_1 cqd. The real problem will be tracked in another LP bug.

Change-Id: I45be0496eaa01f13cac7b953fd3313ad91da2447

Updated DCS documentation

Change-Id: I5fa14215b759b184b9f77e318db5246fc4c7f8a8

    • -8
    • +17
    /src/main/resources/dcs-default.xml
Merge "hive/test002 fix"

Merge "LP 1442730 Eliminating SQL Cancel for PUBLISHING Queries"

Merge "Fix for bug 1442932 and bug 1442966, encoding for varchar"

Fast Transport fix

Fix for LP#1444575.

This checkin addresses an issue with the size of the buffer into which we get the row before converting it to delimited format. The buffer in this case is a single-row buffer.

Change-Id: I33ad4bb0a5f2f84b8f56983b76b1b9ba73c9f6f6

Merge "LP 1444044, dup col definitions are not detected"

Merge "Temporary fix for LP bug 1438372"

Fixes in T2 driver to enable OE performance run

The following errors are ignored in the T2 JDBC driver to conform to the JDBC/ODBC standard:

ERROR[8605] Committing a transaction which has not started.

ERROR[8609] Waited rollback performed without starting a transaction.

Memory corruption was causing a Java core in the T2 OE run: the row count was treated as a 32-bit integer, while SQL expects a 64-bit numeric value to be passed to SQL_EXEC_GetDiagnosticsStmtInfo2. This was causing the corruption. There was also a possibility that the row count buffer was used after deallocation; this code has been fixed in the Type2 JDBC driver.

Change-Id: If0ae5475ed9986c8996cb324e679a615e62cd9b1

    • -4
    • +16
    /conn/jdbc_type2/native/SqlInterface.cpp
LP 1444044, dup col definitions are not detected

During a create table or create view, duplicate column names were not being detected and no error was being returned. This resulted in an incorrect table or view being created; an error may or may not have been returned later, depending on how that object was used.

This has been fixed to detect duplicate columns and return an error message at create time, so that users don't create incorrect objects by mistake.

Change-Id: I7ad772adcb067159ab80a487dead8dffc62bb546

    • -0
    • +14
    /sql/sqlcomp/CmpSeabaseDDLcommon.cpp
Fix for bug 1442932 and bug 1442966, encoding for varchar

Submitting this before finishing regressions on the workstation, in the interest of time.

Key encodings for VARCHAR values used to put a varchar length indicator in front of the encoded value. The value was the max length of the varchar, and the indicator was 2 or 4 bytes long, depending on the length of the indicator in the source field. That length used to depend only on the max number of bytes in the field; for more than 32767 bytes we would use a 4-byte VC length indicator.

Now, with the introduction of long rows, the varchar indicator length for varchars in aligned rows is always 4 bytes, regardless of the character length. This causes a problem for the key encoding.

We could have computed the encoded VC indicator length from the field length. Anoop suggested a better solution: not to include the VC indicator at all, since it is unnecessary. Note that for HBase row keys stored on disk, we already remove the VC indicator by converting such keys from varchar to fixed char. Therefore, the issue happens only for encoding needed in a query, for example when sorting or in a merge join or union.

Description of the fix:

1. Change CompEncode::synthType not to include the VC length indicator in the encoded buffer. This change also includes some minor code clean-up.

2. Change the assert in CompEncode::codeGen not to include the VC indicator length anymore.

3. Changes in ex_function_encode::encodeKeyValue():
   a) Read 2- and 4-byte VC length indicators for VARCHAR/NVARCHAR.
   b) Small code cleanup: don't copy the buffer for case-insensitive encode, since that is not necessary.
   c) Don't write the max length as the VC length indicator into the target, and adjust target offsets accordingly (for VARCHAR/NVARCHAR).

4. Other changes in sql/exp/exp_function.cpp:
   d) Handle 2- and 4-byte VC length indicators in the hash function and the Hive hash function (problems unrelated to the LP bugs fixed).
   e) Add some asserts for cases where we assume the VC length indicator is a 2-byte integer.

CompDecode is not yet changed. Filed bug 1444134 to do that for the next release, since that change is less urgent.

Patch set 2: Copyright notice changes only.

Patch set 3: Updated expected regression test file that prints out the encoded key in hex.

Change-Id: Idab3ed488f8c1b9aabedba4689bfb8d7286b9538

    • -18
    • +18
    /sql/regress/charsets/EXPECTED001
Refactor gomon.cold.