DRILL-7442: Create multi-batch row set reader

Adds a ResultSetReader that works across multiple batches in a result set. It reuses the same row set and readers if the schema is unchanged, and creates a new set if the schema changes.

Adds a unit test for the result set reader.

Adds a "rebind" capability to the row set readers to focus on new buffers under an existing set of vectors. Used when a new batch arrives, if the schema is unchanged.

Extends the row set classes to be aware of the BatchAccessor class, which encapsulates a container and an optional selection vector, and tracks schema changes.

Moves the row set tests into the same package as the row sets. (The row set classes were moved a while back, but the tests were not.)

Renames some BatchAccessor methods.

closes #1897

    • -0
    • +7
    ./accessor/reader/AbstractTupleReader.java
    • -2
    • +12
    ./accessor/reader/ArrayReaderImpl.java
    • -1
    • +23
    ./accessor/reader/BaseScalarReader.java
    • -2
    • +11
    ./accessor/reader/UnionReaderImpl.java
  1. … 56 more files in changeset.
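The "rebind" decision described in DRILL-7442 can be sketched as follows. This is an illustrative sketch only: the class and method names are hypothetical stand-ins, not Drill's actual ResultSetReader API, and a `List<String>` stands in for the vector schema.

```java
import java.util.List;

// Hypothetical sketch: reuse readers when the schema is unchanged,
// rebuild them when it changes. Not Drill's actual API.
public class MultiBatchReaderSketch {
  private List<String> boundSchema;   // stands in for the bound vector schema
  private int rebuilds;               // times the readers were recreated
  private int rebinds;                // times buffers were rebound in place

  /** Called once per incoming batch. */
  public void start(List<String> batchSchema) {
    if (boundSchema != null && boundSchema.equals(batchSchema)) {
      rebinds++;        // schema unchanged: keep readers, point at new buffers
    } else {
      boundSchema = batchSchema;
      rebuilds++;       // first batch or schema change: build new readers
    }
  }

  public int rebuilds() { return rebuilds; }
  public int rebinds() { return rebinds; }
}
```

The point of the rebind path is that reader construction is the expensive step; when only the buffers change, the existing readers can simply be repointed.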
DRILL-7445: Create batch copier based on result set framework

The result set framework now provides both a reader and a writer. This PR provides a copier that copies batches using this framework. Such a copier can:

- Copy selected records
- Copy all records, such as for an SV2 or SV4

The copier uses the result set loader to create uniformly-sized output batches from input batches of any size. It does this by merging or splitting input batches as needed.

Since the result set reader handles both SV2s and SV4s, the copier can filter or reorder rows based on the SV associated with the input batch.

This version assumes a single stream of input batches, and handles any schema changes in that input by creating output batches that track the input schema. This would be used in, say, the selection vector remover. A different design is needed for merging, such as in the merging receiver.

Adds a "copy" method to the column writers. Copy is implemented by doing a direct memory copy from source to destination vectors.

A unit test verifies functionality for various use cases and data types.

closes #1899

    • -8
    • +8
    ./accessor/reader/AbstractTupleReader.java
    • -1
    • +0
    ./accessor/reader/ArrayReaderImpl.java
    • -0
    • +12
    ./accessor/reader/BaseScalarReader.java
    • -0
    • +1
    ./accessor/reader/OffsetVectorReader.java
    • -6
    • +6
    ./accessor/reader/UnionReaderImpl.java
    • -0
    • +14
    ./accessor/writer/AbstractArrayWriter.java
    • -4
    • +18
    ./accessor/writer/AbstractTupleWriter.java
    • -4
    • +4
    ./accessor/writer/BaseVarWidthWriter.java
    • -0
    • +8
    ./accessor/writer/BitColumnWriter.java
  1. … 25 more files in changeset.
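The merge/split behavior from DRILL-7445 can be illustrated with a small sketch. The class below is hypothetical (it is not Drill's copier, and it moves plain integers rather than vector data); it shows only the core idea: accumulate rows into a fixed-size output batch, flushing whenever the target size is reached, regardless of input batch boundaries.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of producing uniformly-sized output batches
// by merging or splitting input batches. Not Drill's actual copier.
public class UniformBatcher {
  private final int targetRows;
  private final List<Integer> current = new ArrayList<>();
  private final List<List<Integer>> output = new ArrayList<>();

  public UniformBatcher(int targetRows) { this.targetRows = targetRows; }

  /** Copy one input batch, splitting or merging as needed. */
  public void copy(List<Integer> inputBatch) {
    for (Integer row : inputBatch) {
      current.add(row);
      if (current.size() == targetRows) {
        output.add(new ArrayList<>(current));  // flush a full output batch
        current.clear();
      }
    }
  }

  /** Flush the final, possibly short, batch and return all output. */
  public List<List<Integer>> finish() {
    if (!current.isEmpty()) {
      output.add(new ArrayList<>(current));
      current.clear();
    }
    return output;
  }
}
```

A small input batch is merged into the pending output; an oversized one is split across several outputs, so downstream operators see predictable batch sizes.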
DRILL-7441: Fix issues with fillEmpties, offset vectors

Fixes subtle issues with offset vectors and "fill empties" logic.

Drill has an informal standard that if a batch has no rows, then offset vectors within that batch should have zero size. Contrast this with batches of size 1, which should have offset vectors of size 2. Changed to enforce this rule throughout.

Nullable, repeated and variable-width vectors have "fill empties" logic that is used in two places: when setting the value count and when preparing to write a new value. The current logic was not quite right for either case. Added tests and fixed the code to properly handle each case.

Revised the batch validator to enforce the rule that offset vectors have zero length for zero-sized batches. The result was much simpler code.

Added tools to easily print a batch, restoring some code that was recently lost when the RowSet classes were moved.

Code cleanup in all files touched.

Added logic to "dirty" allocated buffers when testing, to ensure logic is not sensitive to the "pristine" state of new buffers.

Added logic to the column writers to enforce the zero-size-batch rule for offset vectors. Added unit tests for this case.

Fixed the column writers to set the "lastSet" mutator value for nullable types, since other code relies on this value.

Removed the "setCount" field in nullable vectors: it turns out it is not actually used.

closes #1896

    • -8
    • +16
    ./accessor/writer/OffsetVectorWriterImpl.java
    • -15
    • +17
    ./complex/AbstractRepeatedMapVector.java
    • -12
    • +4
    ./complex/BaseRepeatedValueVector.java
    • -15
    • +27
    ./complex/EmptyValuePopulator.java
  1. … 37 more files in changeset.
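The offset-vector sizing rule stated in DRILL-7441 is compact enough to capture in one helper. This is a hedged illustration only; `offsetCount` is a hypothetical function, not a Drill method.

```java
// Hypothetical helper expressing the offset-vector rule described above:
// an offset vector holds rowCount + 1 entries (a leading 0 plus one end
// offset per row), EXCEPT that an empty batch carries a zero-length
// offset vector rather than a single-entry one.
public class OffsetRule {
  public static int offsetCount(int rowCount) {
    return rowCount == 0 ? 0 : rowCount + 1;
  }
}
```

So a one-row batch has an offset vector of size 2 (`[0, endOfRow0]`), while a zero-row batch has size 0, which is exactly the asymmetry the validator now enforces.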
DRILL-7439: Batch count fixes for six additional operators

Enables vector checks, and fixes batch count and vector issues for:

* StreamingAggBatch
* RuntimeFilterRecordBatch
* FlattenRecordBatch
* MergeJoinBatch
* NestedLoopJoinBatch
* LimitRecordBatch

Also fixes a zero-size batch validity issue for the CSV reader when all files contain no data.

Includes code cleanup for files touched in this PR.

closes #1893

    • -14
    • +17
    ./accessor/writer/OffsetVectorWriterImpl.java
    • -4
    • +9
    ./complex/BaseRepeatedValueVector.java
  1. … 20 more files in changeset.
DRILL-7436: Fix record count, vector structure issues in several operators

Adds additional vector checks to the BatchValidator.

Enables checking for the following operators:

* FilterRecordBatch
* PartitionLimitRecordBatch
* UnnestRecordBatch
* HashAggBatch
* RemovingRecordBatch

Fixes vector count issues for each of these.

Fixes empty-batch (record count = 0) handling in several of the above operators. Added a method to VectorContainer to correctly create an empty batch. (An empty batch, counter-intuitively, needs vectors allocated to hold the 0 value in the first position of each offset vector.)

Disables verbose logging for MongoDB tests. Details are written to the log rather than the console.

Disables two invalid Mongo tests. See DRILL-7428.

Adjusts the expression tree materializer to not add the LATE type to Union vectors. (See DRILL-7435.)

Ensures that Union vectors contain valid vectors for each subtype. The present fix is a work-around; see DRILL-7434 for a better long-term fix.

Cleans up code formatting and other minor issues in each file touched during the fixes in this PR.

  1. … 36 more files in changeset.
DRILL-7414: EVF incorrectly sets buffer writer index after rollover

Enabling the vector validator on the "new" scan operator, in cases in which overflow occurs, identified that the DrillBuf writer index was not properly set for repeated vectors.

Enables such checking, adds unit tests, and fixes the writer index issue.

closes #1878

  1. … 5 more files in changeset.
DRILL-7412: Minor unit test improvements

Many tests intentionally trigger errors. A debug-only log setting sent those errors to stdout. The resulting stack dumps simply cluttered the test output, so disabled error output to the console.

Drill can apply bounds checks to vectors. Tests run via Maven enable bounds checking. Now, bounds checking is also enabled in "debug mode" (when assertions are enabled, as in an IDE).

Drill contains two test frameworks. The older BaseTestQuery was marked as deprecated, but many tests still use it and are unlikely to be changed soon. So, removed the deprecated marker to reduce the number of spurious warnings.

Also includes a number of minor clean-ups.

closes #1876

    • -16
    • +13
    ./complex/RepeatedValueVector.java
  1. … 14 more files in changeset.
DRILL-7377: Nested schemas for dynamic EVF columns

The Result Set Loader (part of EVF) allows adding columns up front, before reading rows (so-called "early schema"). Such schemas allow nested columns (maps with members, repeated lists with a type, etc.).

The Result Set Loader also allows adding columns dynamically while loading data (so-called "late schema"). Previously, the code assumed that columns would be added top-down: first the map, then the map's contents, etc.

Charles found a need to allow adding a nested column (a repeated list with a declared list type).

This patch revises the code to use the same mechanism in both the early- and late-schema cases, allowing nested columns to be added at any time.

Testing: added a new unit test case for the late-schema repeated list with content.

    • -5
    • +21
    ./accessor/writer/AbstractTupleWriter.java
  1. … 5 more files in changeset.
DRILL-7254: Read Hive union w/o nulls

  1. … 20 more files in changeset.
DRILL-7373: Fix problems involving reading from DICT type

- Fixed FieldIdUtil to resolve reading from DICT for some complex cases;
- Optimized reading from DICT given a key by passing an appropriate Object type to the DictReader#find(...) and DictReader#read(...) methods when the schema is known (e.g. when reading from Hive tables), instead of generating it on the fly based on an int or String path and the key type;
- Fixed an error when accessing a value by a non-existent key in an Avro table.

    • -4
    • +21
    ./complex/impl/SingleDictReaderImpl.java
  1. … 10 more files in changeset.
DRILL-7252: Read Hive map using Dict<K,V> vector

  1. … 16 more files in changeset.
DRILL-7350: Move RowSet related classes from test folder

    • -1
    • +1
    ./accessor/writer/AbstractTupleWriter.java
  1. … 290 more files in changeset.
DRILL-7341: Vector reAlloc may fail after exchange

closes #1838

  1. … 3 more files in changeset.
DRILL-7315: Revise precision and scale order in the method arguments

    • -2
    • +2
    ./complex/impl/MapOrListWriterImpl.java
  1. … 27 more files in changeset.
DRILL-7273: Introduce operators for handling metadata

closes #1886

    • -0
    • +6
    ./complex/impl/RepeatedMapReaderImpl.java
    • -1
    • +7
    ./complex/impl/SingleMapReaderImpl.java
  1. … 152 more files in changeset.
DRILL-7258: Remove field width limit for text reader

The V2 text reader enforced a limit of 64K characters when using

column headers, but not when using the columns[] array. The V3 reader

enforced the 64K limit in both cases.

This patch removes the limit in both cases. The limit now is the

16MB vector size limit. With headers, no one column can exceed 16MB.

With the columns[] array, no one row can exceed 16MB. (The 16MB

limit is set by the Netty memory allocator.)

Added an "appendBytes()" method to the scalar column writer which adds

additional bytes to those already written for a specific column or

array element value. The method is implemented for VarChar, Var16Char

and VarBinary vectors. It throws an exception for all other types.

When used with a type conversion shim, the appendBytes() method throws

an exception. This should be OK because, the previous setBytes() should

have failed because a huge value is not acceptable for numeric or date

types conversions.

Added unit tests of the append feature, and for the append feature in

the batch overflow case (when appending bytes causes the vector or

batch to overflow.) Also added tests to verify the lack of column width

limit with the text reader, both with and without headers.

closes #1802

    • -1
    • +6
    ./accessor/writer/AbstractArrayWriter.java
    • -0
    • +5
    ./accessor/writer/BaseScalarWriter.java
    • -0
    • +7
    ./accessor/writer/BaseVarWidthWriter.java
    • -0
    • +3
    ./accessor/writer/ScalarArrayWriter.java
    • -0
    • +3
    ./accessor/writer/dummy/DummyScalarWriter.java
  1. … 14 more files in changeset.
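The setBytes()/appendBytes() contract from DRILL-7258 can be sketched for a single var-width value. The class below is purely illustrative (it buffers into a ByteArrayOutputStream rather than a vector, and its names are hypothetical): setBytes() starts a value, appendBytes() extends it, so a reader can deliver one huge field in several chunks.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch of the set-then-append contract for one
// var-width column value. Not Drill's actual writer class.
public class VarWidthValueSketch {
  private final ByteArrayOutputStream value = new ByteArrayOutputStream();

  /** Start the value with an initial chunk, discarding any prior content. */
  public void setBytes(byte[] buf, int len) {
    value.reset();
    value.write(buf, 0, len);
  }

  /** Add more bytes to the value already written. */
  public void appendBytes(byte[] buf, int len) {
    value.write(buf, 0, len);
  }

  public int length() { return value.size(); }
}
```

In the real writer the append path is what lets a single column value grow toward the 16MB vector limit without the reader having to assemble it in one contiguous buffer first.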
DRILL-7278: Refactor result set loader projection mechanism

Drill 1.16 added an enhanced scan framework based on the row set mechanisms, and a "provisioned schema" feature built on top of that framework. Conversion of the log reader plugin to use the framework identified additional features we wish to add, such as marking a column as "special" (not expanded in a wildcard query).

This work identified that the code added for provisioned schemas in Drill 1.16 worked, but is a bit overly complex, making it hard to add the desired new feature.

This patch refactors the "reader" projection code:

* Creates a "projection set" mechanism that the reader can query to ask, "the caller just added a column; should it be projected or not?"
* Unifies the type conversion mechanism added as part of provisioned schemas.
* Adds the "special column" property for both "reader" and "provided" schemas.
* Verifies that provisioned schemas work with maps (at least on the scan framework side).
* Replaces the previous "schema transformer" mechanism with a new "type conversion" mechanism that unifies type conversion, provided schemas and an optional custom type conversion mechanism.
* Column writers can report whether they are projected. Moved this query from metadata to the column writer itself.
* Extends and clarifies documentation of the feature.
* Revises and/or adds unit tests.

closes #1797

    • -0
    • +3
    ./accessor/writer/AbstractArrayWriter.java
    • -0
    • +3
    ./accessor/writer/AbstractScalarWriterImpl.java
    • -9
    • +3
    ./accessor/writer/AbstractTupleWriter.java
    • -0
    • +3
    ./accessor/writer/UnionWriterImpl.java
    • -0
    • +3
    ./accessor/writer/dummy/DummyArrayWriter.java
    • -0
    • +3
    ./accessor/writer/dummy/DummyScalarWriter.java
  1. … 62 more files in changeset.
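The "projection set" query from DRILL-7278 reduces to a single yes/no question per column. The interface below is a hypothetical sketch, not Drill's actual class: a wildcard projection says yes to everything, while an explicit projection says yes only to the listed names.

```java
import java.util.Set;

// Illustrative sketch of a projection-set query; names are
// hypothetical, not the actual Drill classes.
public interface ProjectionSetSketch {
  /** "The caller just added a column. Should it be projected?" */
  boolean isProjected(String colName);

  /** Wildcard query: project every column the reader offers. */
  static ProjectionSetSketch all() { return col -> true; }

  /** Explicit projection list: project only the named columns. */
  static ProjectionSetSketch of(Set<String> cols) { return cols::contains; }
}
```

The reader asks this question as it discovers each column; unprojected columns can then be given a dummy writer so reader code stays uniform while the data is discarded.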
DRILL-7257: Set nullable var-width vector lastSet value

Turns out this is due to a subtle issue with variable-width nullable vectors. Such vectors have a lastSet attribute in the Mutator class. When using "transfer pairs" to copy values, the code zero-fills from the lastSet value to the record count. The row set framework did not set this value, meaning that the RemovingRecordBatch zero-filled the dir0 column when it chose to use transfer pairs rather than copying values. The use of transfer pairs occurs when all rows in a batch pass the filter prior to the removing record batch.

Modified the nullable vector writer to properly set the lastSet value at the end of each batch. Added a unit test to verify the value is set correctly.

Includes a bit of code clean-up.

  1. … 7 more files in changeset.
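The failure mode in DRILL-7257 follows from simple index arithmetic. The helper below is a hypothetical illustration (not Drill code) of why a stale lastSet wipes data: the transfer path zero-fills every entry after lastSet up to the record count, so a writer that never sets lastSet leaves all written values eligible for zero-filling.

```java
// Hypothetical sketch of the zero-fill arithmetic described above.
// lastSet is the index of the last row actually written (-1 if never
// set); entries in (lastSet, recordCount) are zero-filled on transfer.
public class LastSetSketch {
  public static int entriesZeroFilled(int lastSet, int recordCount) {
    return Math.max(0, recordCount - 1 - lastSet);
  }
}
```

With lastSet correctly at recordCount - 1, nothing is zero-filled; with lastSet stuck at -1, every row in the batch is, which is how the dir0 column came back empty.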
DRILL-7143: Support default value for empty columns

Modifies the prior work to add default values for columns. The prior work added defaults when the entire column is missing from a reader (the old nullable Int column). The row set mechanism now will also "fill empty" slots with the default value.

Added default support for the column writers. The writers automatically obtain the default value from the column schema. The default can also be set explicitly on the column writer.

Updated the null column mechanism to use this feature rather than the ad-hoc implementation in the prior commit.

Semantics changed a bit. Only required columns take a default. The default value is ignored for nullable columns, since nullable columns already have a default: NULL.

Other changes:

* Updated the CSV-with-schema tests to illustrate the new behavior.
* Made multiple fixes for Boolean and Decimal columns and added unit tests.
* Upgraded Freemarker to version 2.3.28 to allow use of the continue statement.
* Reimplemented the Bit column reader and writer to use the BitVector directly, since this vector is rather special.
* Added get/set Boolean methods for the column accessors.
* Moved the BooleanType class to the common package.
* Added more CSV unit tests to explore decimal types, booleans, and defaults.
* Added special handling for blank fields in from-string conversions.
* Added options to the conversion factory to specify blank-handling behavior. CSV uses a mapping of blanks to null (nullable) or the default value (non-nullable).

closes #1726

    • -0
    • +210
    ./accessor/convert/AbstractConvertFromString.java
    • -11
    • +15
    ./accessor/convert/ConvertStringToBoolean.java
    • -11
    • +13
    ./accessor/convert/ConvertStringToDecimal.java
    • -11
    • +14
    ./accessor/convert/ConvertStringToDouble.java
    • -11
    • +14
    ./accessor/convert/ConvertStringToInt.java
    • -11
    • +13
    ./accessor/convert/ConvertStringToInterval.java
    • -11
    • +14
    ./accessor/convert/ConvertStringToTimeStamp.java
  1. … 58 more files in changeset.
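The "fill empty" behavior from DRILL-7143 can be sketched for a required column. This is a hypothetical illustration (a plain list stands in for the vector, and the names are invented): when the writer jumps ahead to a later row, the skipped slots are back-filled with the column's default value rather than left as garbage.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of fill-empties with a default value for a
// required (non-nullable) column. Not Drill's actual writer.
public class RequiredColumnSketch {
  private final List<Integer> values = new ArrayList<>();
  private final int defaultValue;

  public RequiredColumnSketch(int defaultValue) {
    this.defaultValue = defaultValue;
  }

  /** Write a value at rowIndex, filling any skipped rows with the default. */
  public void setAt(int rowIndex, int value) {
    while (values.size() < rowIndex) {
      values.add(defaultValue);   // "fill empty" slots with the default
    }
    values.add(value);
  }

  public List<Integer> values() { return values; }
}
```

A nullable column needs no such fill step for data: its skipped slots are simply marked NULL, which is why only required columns take a default.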
DRILL-7096: Develop vector for canonical Map<K,V>

- Added new type DICT;
- Created value vectors for the type for single and repeated modes;
- Implemented corresponding FieldReaders and FieldWriters;
- Made changes in EvaluationVisitor to be able to read values from the map by key;
- Made changes to DrillParquetGroupConverter to be able to read Parquet's MAP type;
- Added an option `store.parquet.reader.enable_map_support` to disable reading MAP type as DICT from Parquet files;
- Updated AvroRecordReader to use the new DICT type for Avro's MAP;
- Added support for the new type to ParquetRecordWriter.

    • -0
    • +1
    ./accessor/reader/ColumnReaderFactory.java
    • -0
    • +2
    ./accessor/writer/ColumnWriterFactory.java
    • -0
    • +513
    ./complex/AbstractRepeatedMapVector.java
    • -1
    • +50
    ./complex/BaseRepeatedValueVector.java
    • -0
    • +312
    ./complex/DictVector.java
    • -0
    • +165
    ./complex/RepeatedDictVector.java
    • -476
    • +35
    ./complex/RepeatedMapVector.java
    • -0
    • +5
    ./complex/impl/AbstractBaseReader.java
    • -0
    • +136
    ./complex/impl/AbstractRepeatedMapReaderImpl.java
    • -0
    • +11
    ./complex/impl/MapOrListWriterImpl.java
    • -0
    • +143
    ./complex/impl/RepeatedDictReaderImpl.java
    • -1
    • +3
    ./complex/impl/RepeatedListReaderImpl.java
    • -110
    • +6
    ./complex/impl/RepeatedMapReaderImpl.java
  1. … 94 more files in changeset.
DRILL-7011: Support schema in scan framework

* Adds schema support to the row set-based scan framework and to the "V3" text reader based on that framework.
* Adding the schema made clear that passing options as a long list of constructor arguments was not sustainable. Refactored the code to use a builder pattern instead.
* Added support for default values in the "null column loader", which required adding a "setValue" method to the column accessors.
* Added unit tests for all new or changed functionality. See TestCsvWithSchema for the overall test of the entire integrated mechanism.
* Added tests for explicit projection with a schema.
* Better handling of date/time in the column accessors.
* Converted recent column metadata work from Java 8 date/time to Joda.
* Added more CSV-with-schema unit tests.
* Removed the ID fields from "resolved columns"; used "instanceof" instead.
* Added wildcard projection with an output schema. Handles both "lenient" and "strict" schemas.
* Tagged projection columns with their output schema, when available.
* Scan projection added modes for a wildcard with an output schema. The reader projection added support for merging reader and output schemas.
* Includes refactoring of the scan operator tests (the test file grew too large).
* Renamed some classes to avoid confusing reader schemas with output schemas.
* Added unit tests for the new functionality.
* Added a "lenient" wildcard-with-schema test for CSV.
* Added more type conversions: string-to-bit, many-to-string.
* Fixed a bug in the column writer for VarDecimal.
* Added missing unit tests, and fixed bugs, in the Bit column reader/writer.
* Cleaned up a number of unneeded "SuppressWarnings".

closes #1711

    • -0
    • +47
    ./accessor/convert/AbstractConvertFromString.java
    • -0
    • +41
    ./accessor/convert/ConvertBooleanToString.java
    • -0
    • +60
    ./accessor/convert/ConvertDateToString.java
    • -0
    • +45
    ./accessor/convert/ConvertDecimalToString.java
    • -0
    • +43
    ./accessor/convert/ConvertDoubleToString.java
    • -0
    • +43
    ./accessor/convert/ConvertIntToString.java
    • -0
    • +48
    ./accessor/convert/ConvertIntervalToString.java
    • -0
    • +43
    ./accessor/convert/ConvertLongToString.java
    • -0
    • +47
    ./accessor/convert/ConvertStringToBoolean.java
    • -0
    • +50
    ./accessor/convert/ConvertStringToDecimal.java
  1. … 210 more files in changeset.
DRILL-7086: Output schema for row set mechanism

Enhances the row set mechanism to take an "output schema" that describes the vectors to create. The "input schema" describes the type that the reader would like to write. A conversion mechanism inserts a conversion shim to convert from the input to the output type.

Provides a set of implicit type conversions, including string-to-date/time conversions which use the new format property stored in the column metadata. Includes unit tests for the new functionality.

closes #1690

    • -39
    • +0
    ./accessor/ColumnConversionFactory.java
    • -0
    • +55
    ./accessor/InvalidConversionError.java
    • -1
    • +15
    ./accessor/UnsupportedConversionError.java
    • -0
    • +125
    ./accessor/convert/AbstractWriteConverter.java
    • -0
    • +44
    ./accessor/convert/ColumnConversionFactory.java
    • -0
    • +57
    ./accessor/convert/ConvertStringToDate.java
    • -0
    • +46
    ./accessor/convert/ConvertStringToDouble.java
    • -0
    • +47
    ./accessor/convert/ConvertStringToInt.java
    • -0
    • +49
    ./accessor/convert/ConvertStringToInterval.java
    • -0
    • +46
    ./accessor/convert/ConvertStringToLong.java
    • -0
    • +56
    ./accessor/convert/ConvertStringToTime.java
    • -0
    • +55
    ./accessor/convert/ConvertStringToTimeStamp.java
    • -0
    • +262
    ./accessor/convert/StandardConversions.java
  1. … 51 more files in changeset.
DRILL-6524: Assign holder fields instead of assigning object references in generated code to allow scalar replacement for more cases

closes #1686

  1. … 2 more files in changeset.
DRILL-4858: REPEATED_COUNT on an array of maps and an array of arrays is not implemented

- Implemented the 'repeated_count' function for repeated MAP and repeated LIST;
- Updated the RepeatedListReader and RepeatedMapReader implementations to return the correct value from the size() method;
- Moved repeated_count to a Freemarker template and added support for more repeated types for the function.

closes #1641

    • -11
    • +13
    ./complex/impl/RepeatedListReaderImpl.java
    • -20
    • +15
    ./complex/impl/RepeatedMapReaderImpl.java
  1. … 7 more files in changeset.
DRILL-7019: Add check for redundant imports

close apache/drill#1629

  1. … 23 more files in changeset.
DRILL-7024: Refactor ColumnWriter to simplify type-conversion shim

DRILL-7006 added a type conversion "shim" within the row set framework. Basically, we insert a "shim" column writer that takes data in one form (String, say), and does reader-specific conversions to a target format (INT, say).

The code works fine, but the shim class ends up needing to override a bunch of methods which it then passes along to the base writer. This PR refactors the code so that the conversion shim is simpler.

closes #1633

    • -2
    • +1
    ./accessor/ColumnConversionFactory.java
    • -1
    • +12
    ./accessor/writer/AbstractArrayWriter.java
    • -94
    • +38
    ./accessor/writer/AbstractScalarWriter.java
    • -0
    • +142
    ./accessor/writer/AbstractScalarWriterImpl.java
    • -8
    • +31
    ./accessor/writer/AbstractTupleWriter.java
    • -0
    • +106
    ./accessor/writer/AbstractWriteConverter.java
    • -186
    • +0
    ./accessor/writer/AbstractWriteConvertor.java
    • -1
    • +1
    ./accessor/writer/BaseScalarWriter.java
    • -1
    • +1
    ./accessor/writer/ColumnWriterFactory.java
    • -69
    • +0
    ./accessor/writer/ConcreteWriter.java
  1. … 52 more files in changeset.
DRILL-7006: Add type conversion to row writers

Modifies the column metadata and writer abstractions to allow a type conversion "shim" to be specified as part of the schema, then inserted as part of the row set writer. Allows, say, setting an Int or Date from a string, parsing the string to obtain the proper data type to store in the vector.

Type conversion not yet supported in the result set loader: some additional complexity needs to be resolved.

Adds unit tests for this functionality. Refactors some existing tests to remove rough edges.

closes #1623

    • -0
    • +40
    ./accessor/ColumnConversionFactory.java
    • -1
    • +1
    ./accessor/UnsupportedConversionError.java
    • -41
    • +11
    ./accessor/writer/AbstractScalarWriter.java
    • -0
    • +186
    ./accessor/writer/AbstractWriteConvertor.java
    • -0
    • +69
    ./accessor/writer/ConcreteWriter.java
    • -10
    • +14
    ./accessor/writer/ScalarArrayWriter.java
  1. … 9 more files in changeset.
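The conversion-shim idea from DRILL-7006/DRILL-7024 can be sketched in a few lines. This is a hypothetical illustration, not Drill's writer hierarchy: a shim exposes a String-typed setter, converts, and delegates to the underlying typed writer (a plain list stands in for the int vector here).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a type-conversion shim: accept a String,
// convert, delegate the typed value. Not Drill's actual classes.
public class ConvertStringToIntSketch {
  // Stands in for the real int-vector column writer.
  private final List<Integer> baseWriter = new ArrayList<>();

  /** The shim's setter: parse the string, then write the int. */
  public void setString(String value) {
    baseWriter.add(Integer.parseInt(value));
  }

  public List<Integer> written() { return baseWriter; }
}
```

Because the shim sits in front of the base writer, a reader that only produces strings (such as a CSV reader) can populate an INT vector without knowing the target type; the DRILL-7024 refactoring then reduced how many pass-through methods such a shim must override.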
DRILL-6950: Row set-based scan framework

Adds the "plumbing" that connects the scan operator to the result set loader and the scan projection framework. See the various package-info.java files for the technical details. Also adds a large number of tests.

This PR does not yet introduce an actual scan operator: that will follow in subsequent PRs.

closes #1618

    • -5
    • +12
    ./accessor/writer/AbstractTupleWriter.java
  1. … 61 more files in changeset.
DRILL-6962: Function coalesce returns an Error when none of the columns in coalesce exist in a parquet file

- Updated UntypedNullVector to hold the value count when the vector is allocated and transferred to another one;
- Updated RecordBatchLoader and DrillCursor to handle the case when only UntypedNull values are present in the RecordBatch (a special case when the data buffer is null but actual values are present);
- Added functions to cast an UntypedNull value to other types for use in UDFs;
- Moved UntypedReader, UntypedHolderReaderImpl and UntypedReaderImpl from the org.apache.drill.exec.vector.complex.impl package to org.apache.drill.exec.vector.

closes #1614

    • -0
    • +50
    ./UntypedHolderReaderImpl.java
    • -0
    • +28
    ./UntypedReader.java
    • -0
    • +49
    ./UntypedReaderImpl.java
    • -51
    • +0
    ./complex/impl/UntypedHolderReaderImpl.java
    • -50
    • +0
    ./complex/impl/UntypedReaderImpl.java
  1. … 9 more files in changeset.
DRILL-6797: Fix UntypedNull handling for complex types

    • -0
    • +11
    ./complex/impl/AbstractBaseReader.java
    • -0
    • +51
    ./complex/impl/UntypedHolderReaderImpl.java
    • -0
    • +30
    ./complex/impl/UntypedReader.java
    • -0
    • +50
    ./complex/impl/UntypedReaderImpl.java
  1. … 7 more files in changeset.