DRILL-7385: Convert PCAP Format Plugin to EVF

    • -0 / +103  ./apache/drill/exec/store/pcap/TestPcapEVFReader.java
  … 7 more files in changeset.
DRILL-7377: Nested schemas for dynamic EVF columns

The Result Set Loader (part of EVF) allows adding columns up-front, before reading rows (so-called "early schema"). Such schemas allow nested columns (maps with members, repeated lists with a type, etc.).

The Result Set Loader also allows adding columns dynamically while loading data (so-called "late schema"). Previously, the code assumed that columns would be added top-down: first the map, then the map's contents, and so on.

Charles found a need to allow adding a nested column (a repeated list with a declared list type).

This patch revises the code to use the same mechanism in both the early- and late-schema cases, allowing nested columns to be added at any time.

Testing: added a new unit test case for the repeated-list, late-schema-with-content case.
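A rough sketch of what adding a nested column mid-load might look like. The `rsLoader` instance is assumed to already exist (normally handed to a reader by the scan framework), and the schema-builder calls follow Drill's test-time metadata utilities, so exact class and method names may differ from the code in this changeset:

    import org.apache.drill.common.types.TypeProtos.MinorType;
    import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
    import org.apache.drill.exec.physical.resultSet.RowSetLoader;
    import org.apache.drill.exec.record.metadata.SchemaBuilder;
    import org.apache.drill.exec.record.metadata.TupleMetadata;

    public class LateSchemaSketch {
      // rsLoader is assumed to be provided by the scan framework.
      static void addNestedColumnMidLoad(ResultSetLoader rsLoader) {
        RowSetLoader writer = rsLoader.writer();
        writer.start();

        // Late schema: declare a repeated list (with its nested element type)
        // while rows are already being loaded.
        TupleMetadata nested = new SchemaBuilder()
            .addRepeatedList("tags")
              .addArray(MinorType.VARCHAR)
              .resumeSchema()
            .buildSchema();
        writer.addColumn(nested.metadata("tags")); // previously only top-level adds worked here

        writer.save();
      }
    }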

  … 5 more files in changeset.
DRILL-7358: Fix COUNT(*) for empty text files

Fixes a subtle error when a text file has a header (and so has a schema) but appears in a COUNT(*) query, so that no columns are projected. Ensures that, in this case, an empty schema is treated as a valid result set.

Tests: updated CSV tests to include this case.

closes #1867
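For illustration, a minimal JDBC snippet exercising this case; the embedded connection URL is standard Drill, but the file path is hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CountEmptyCsv {
      public static void main(String[] args) throws Exception {
        // Headers-only .csvh file: the header row supplies a schema, but there are no data rows.
        try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT COUNT(*) FROM dfs.`/tmp/headers_only.csvh`")) {
          rs.next();
          System.out.println(rs.getLong(1));   // expected: 0, rather than an error
        }
      }
    }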

  … 9 more files in changeset.
DRILL-7357: Expose Drill Metastore data through information_schema

1. Add additional columns to TABLES and COLUMNS tables.
2. Add PARTITIONS table.
3. General refactoring to adjust information_schema data retrieval from multiple sources.

closes #1860
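A hedged example of querying the new metadata over JDBC; the PARTITIONS table name comes from the commit, while the column list is left open since the exact columns are not shown here:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class QueryInfoSchemaPartitions {
      public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
             Statement stmt = conn.createStatement();
             // PARTITIONS is the table added by this change; SELECT * avoids guessing column names.
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM INFORMATION_SCHEMA.`PARTITIONS` LIMIT 10")) {
          ResultSetMetaData md = rs.getMetaData();
          for (int i = 1; i <= md.getColumnCount(); i++) {
            System.out.println(md.getColumnName(i));   // list the exposed partition metadata columns
          }
        }
      }
    }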

    • -7 / +17   ./apache/drill/exec/sql/TestInfoSchema.java
    • -0 / +401  ./apache/drill/exec/sql/TestInfoSchemaWithMetastore.java
    • -1 / +2    ./apache/drill/exec/sql/TestViewSupport.java
  … 28 more files in changeset.
DRILL-7373: Fix problems involving reading from DICT type

- Fixed FieldIdUtil to resolve reading from DICT for some complex cases;
- optimized reading from DICT given a key by passing an appropriate Object type to the DictReader#find(...) and DictReader#read(...) methods when the schema is known (e.g. when reading from Hive tables), instead of generating it on the fly based on an int or String path and the key type;
- fixed an error when accessing a value by a non-existent key in an Avro table.

  … 10 more files in changeset.
DRILL-7368: Fix Iceberg Metastore failure when filter column contains nulls

    • -0 / +6  ./apache/drill/test/OperatorFixture.java
    • -1 / +1  ./apache/drill/test/PhysicalOpUnitTestBase.java
  … 9 more files in changeset.
DRILL-7168: Implement ALTER SCHEMA ADD / REMOVE commands

    • -1 / +414  ./apache/drill/TestSchemaCommands.java
  … 14 more files in changeset.
DRILL-7362: COUNT(*) on JSON with outer list results in JsonParse error

closes #1849

  … 3 more files in changeset.
DRILL-7326: Support repeated lists for CTAS parquet format

closes #1844

  … 4 more files in changeset.
DRILL-7350: Move RowSet related classes from test folder

  … 278 more files in changeset.
DRILL-4517: Support reading empty Parquet files

1. Modified flat and complex Parquet readers to output only the schema when the requested number of records to read is 0. In this case readers are not initialized, to improve performance.
2. Allowed reading the requested number of rows instead of all rows in the row group (DRILL-6528).
3. Fixed an issue with determining the number of nulls in a row group (fixed the IsPredicate#isAllNulls method).
4. Allowed reading empty Parquet files by adding an empty / fake row group.
5. General refactoring and unit tests.
6. Parquet tests categorization.

closes #1839
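A small illustration of the schema-only path described in item 1, as exercised by a LIMIT 0 query; the file path is hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class EmptyParquetSchemaOnly {
      public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
             Statement stmt = conn.createStatement();
             // LIMIT 0 requests zero records, so the reader can return schema without loading data.
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM dfs.`/tmp/empty.parquet` LIMIT 0")) {
          ResultSetMetaData md = rs.getMetaData();
          System.out.println("Columns: " + md.getColumnCount());   // schema is available even with no rows
        }
      }
    }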

    • -0 / +417  ./apache/drill/exec/store/parquet/TestEmptyParquet.java
  … 34 more files in changeset.
DRILL-7337: Add vararg UDFs support
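A hypothetical sketch of a variadic UDF, modeled loosely on the VarArgAddFunction test added in this changeset; the array-typed @Param is the vararg idea, but the exact annotations and holder handling in the real implementation may differ:

    import org.apache.drill.exec.expr.DrillSimpleFunc;
    import org.apache.drill.exec.expr.annotations.FunctionTemplate;
    import org.apache.drill.exec.expr.annotations.Output;
    import org.apache.drill.exec.expr.annotations.Param;
    import org.apache.drill.exec.expr.holders.Float8Holder;

    @FunctionTemplate(name = "add_all",
        scope = FunctionTemplate.FunctionScope.SIMPLE,
        nulls = FunctionTemplate.NullHandling.NULL_IF_NULL)
    public class AddAllFunction implements DrillSimpleFunc {

      @Param Float8Holder[] inputs;   // array-typed parameter: accepts a variable number of arguments
      @Output Float8Holder out;

      @Override
      public void setup() { }

      @Override
      public void eval() {
        out.value = 0;
        for (int i = 0; i < inputs.length; i++) {
          out.value += inputs[i].value;   // sum however many arguments were passed
        }
      }
    }

With something like this registered, a query such as SELECT add_all(a, b, c) could accept any number of numeric arguments.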

    • -0 / +325  ./apache/drill/exec/fn/impl/TestVarArgFunctions.java
    • -0 / +42   ./apache/drill/exec/fn/impl/testing/VarArgAddFunction.java
  … 30 more files in changeset.
DRILL-7335: Fix error when reading csv file with headers only

closes #1834

  … 1 more file in changeset.
DRILL-7332: Allow parsing empty schema

closes #1828

    • -21 / +45  ./apache/drill/TestSchemaCommands.java
  … 2 more files in changeset.
DRILL-7327: Log Regex Plugin Won't Recognize Schema

The previous commit revised the plugin config classes to work with table functions. That caused Jackson to stop working for the classes. Fixed those issues and added unit tests.

closes #1827

    • -15 / +65  ./apache/drill/exec/store/log/TestLogReader.java
  … 4 more files in changeset.
DRILL-7205: Drill fails to start when authentication is disabled

closes #1824

  … 1 more file in changeset.
DRILL-7314: Use TupleMetadata instead of concrete implementation

1. Add ser / de implementation for TupleMetadata interface based on types.
2. Replace TupleSchema usage where possible.
3. Move patcher classes into commons.
4. Upgrade some dependencies and general refactoring.

  … 36 more files in changeset.
DRILL-7315: Revise precision and scale order in the method arguments

    • -5 / +5  ./apache/drill/exec/fn/impl/TestCastFunctions.java
  … 26 more files in changeset.
DRILL-7307: casthigh for decimal type can lead to the issues with VarDecimalHolder

- Fixed code-gen for the VarDecimal type.
- Fixed a code-gen issue with nullable holders for simple cast functions with constants passed as arguments.
- Code-gen now honors the DataType.Optional type defined by a UDF for NULL-IF-NULL functions.

  … 9 more files in changeset.
DRILL-7310: Move schema-related classes from exec module to be able to use them in metastore module

closes #1816

    • -2 / +5    ./apache/drill/TestFunctionsQuery.java
    • -7 / +12   ./apache/drill/TestStarQueries.java
    • -6 / +13   ./apache/drill/TestUntypedNull.java
    • -20 / +41  ./apache/drill/exec/TestEmptyInputSql.java
    • -17 / +14  ./apache/drill/exec/cache/TestBatchSerialization.java
    • -2 / +5    ./apache/drill/exec/fn/impl/TestCastFunctions.java
  … 88 more files in changeset.
DRILL-7306: Disable schema-only batch for new scan framework

The EVF framework is set up to return a "fast schema" empty batch with only schema as its first batch because, when the code was written, it seemed that's how we wanted operators to work. However, DRILL-7305 notes that many operators cannot handle empty batches.

Since the empty-batch bugs show that Drill does not, in fact, provide a "fast schema" batch, this ticket asks to disable the feature in the new scan framework. The feature is disabled with a config option; it can be re-enabled if it is ever needed.

SQL differentiates between two subtle cases, and both are supported by this change.

1. Empty results: the query found a schema, but no rows are returned. If no reader returns any rows, but at least one reader provides a schema, then the scan returns an empty batch with the schema.

2. Null results: the query found no schema or rows. No schema is returned. If no reader returns rows or schema, then the scan returns no batch: it instead immediately returns a DONE status.

For CSV, an empty file with headers returns the null result set (because we don't know the schema). An empty CSV file without headers returns an empty result set, because we do know the schema: it will always be the columns array.

Old tests validate the original schema-batch mode; new tests were added to validate the no-schema-batch mode.
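To make the two CSV outcomes concrete, a hedged pair of queries; the file names are hypothetical and the expected results simply restate the description above:

    public class EmptyCsvCases {
      // Both files are zero-length.
      // With .csvh the header row (and thus the schema) is missing, so the scan
      // returns no batch at all: the "null result set" case.
      static final String WITH_HEADERS = "SELECT * FROM dfs.`/tmp/empty_with_headers.csvh`";

      // Without headers the schema is always the `columns` array, so the scan
      // returns an empty batch carrying that schema: the "empty result set" case.
      static final String WITHOUT_HEADERS = "SELECT * FROM dfs.`/tmp/empty_no_headers.csv`";
    }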

    • -9 / +2  ./apache/drill/TestSchemaWithTableFunction.java
    • -3 / +3  ./apache/drill/exec/TestEmptyInputSql.java
  … 30 more files in changeset.
DRILL-7302: Bump Apache Avro to 1.9.0

Apache Avro 1.9.0 brings a lot of new features:

- Deprecate Joda-Time in favor of Java 8 JSR-310, setting it as the default
- Remove support for Hadoop 1.x
- Move from Jackson 1.x to 2.9
- Add ZStandard codec
- Lots of updates to the dependencies to fix CVEs
- Remove Jackson classes from the public API
- Apache Avro is built by default with Java 8
- Apache Avro is compiled and tested with Java 11 to guarantee compatibility
- Apache Avro MapReduce is compiled and tested with Hadoop 3
- Apache Avro is now leaner: multiple dependencies were removed (guava, paranamer, commons-codec, and commons-logging)
- and many, many more!

close apache/drill#1812

  … 3 more files in changeset.
DRILL-7297: Query hangs in planning stage when Error is thrown

close apache/drill#1811

    • -0 / +8   ./apache/drill/TestFunctionsQuery.java
    • -0 / +42  ./apache/drill/exec/fn/impl/testing/CustomErrorFunction.java
  … 1 more file in changeset.
DRILL-7271: Refactor Metadata interfaces and classes to contain all needed information for the File based Metastore

  … 119 more files in changeset.
DRILL-7253: Read Hive struct w/o nulls

  … 17 more files in changeset.
DRILL-6951: Merge row set based mock data source

The mock data source is used in several tests to generate a large volume of sample data, such as when testing spilling. The mock data source also lets us try new plugin features in a very simple context. During the development of the row set framework, the mock data source was converted to use the new framework to verify functionality. This commit upgrades the mock data source with that work.

The work changes none of the functionality. It does, however, improve memory usage. Batches are limited, by default, to 10 MB in size. The row set framework minimizes internal fragmentation in the largest vector. (Previously, internal fragmentation averaged 25% but could be as high as 50%.)

As it turns out, the hash aggregate tests depended on the internal fragmentation: without it, the hash agg no longer spilled for the same row count. Adjusted the generated row counts to recreate a data volume that caused spilling.

One test in particular always failed due to assertions in the hash agg code. These seem to be true bugs and are described in DRILL-7301. After multiple failed attempts to get the test to work, it was disabled until DRILL-7301 is fixed.

Added a new unit test to sanity check the mock data source. (No test already existed for this functionality except as verified via other unit tests.)
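For context, the mock data source is typically addressed straight from SQL; the naming convention below is approximate and should be checked against the plugin's own documentation and tests:

    public class MockDataSourceExample {
      // The mock plugin synthesizes data from the query itself: the table suffix encodes the
      // row count and the column suffixes encode types and widths (approximate convention).
      static final String SAMPLE =
          "SELECT id_i, name_s50 FROM mock.`employees_10K`";  // roughly 10,000 generated rows
    }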

    • -0 / +148  ./apache/drill/exec/store/mock/TestMockPlugin.java
    • -0 / +304  ./apache/drill/exec/store/mock/TestMockRowReader.java
  … 17 more files in changeset.
DRILL-7156: Support empty Parquet files creation

closes #1836

  … 1 more file in changeset.
DRILL-7293: Convert the regex ("log") plugin to use EVF

Converts the log format plugin (which uses a regex for parsing) to work with the Extended Vector Format. User-visible behavior changes are described in the README file.

* Use the plugin config object to pass config to the Easy framework.

* Use the EVF scan mechanism in place of the legacy "ScanBatch" mechanism.

* Minor code and README cleanup.

* Replace ad-hoc type conversion with built-in conversions.

The provided schema support in the enhanced vector framework (EVF) provides automatic conversions from VARCHAR to most types. The log format plugin was created before EVF was available and provided its own conversion mechanism. This commit removes the ad-hoc conversion code and instead uses the log plugin config schema information to create an "output schema" just as if it had been provided by the provided schema framework.

Because we need the schema in the plugin (rather than the reader), moved the schema-parsing code out of the reader into the plugin. The plugin creates two schemas: an "output schema" with the desired output types, and a "reader schema" that uses only VARCHAR. This causes the EVF to perform the conversions.

* Enable provided schema support.

Allows the user to specify types using either the format config (as previously) or a provided schema. If a schema is provided, it will match columns using the names specified in the format config. The provided schema can specify both types and modes (nullable or not null).

If a schema is provided, then the types specified in the plugin config are ignored. No attempt is made to merge schemas. If a schema is provided, but a column is omitted from the schema, the type defaults to VARCHAR.

* Added the ability to specify the regex in table properties.

Allows the user to specify the regex, and the column schema, using a CREATE SCHEMA statement. The README file provides the details. Unit tests demonstrate and verify the functionality.

* Used the custom error context provided by EVF to enhance the log format reader error messages.

* Added the user name to the default EVF error context.

* Added support for table functions.

The table function can set the regex and maxErrors fields, but not the schema. The schema will default to "field_0", "field_1", etc. of type VARCHAR.

* Added unit tests to verify the functionality.

* Added a check, and a test, for a regex with no groups.

* Added columns array support.

When the log regex plugin is given no schema, it previously created a list of columns "field_0", "field_1", etc. After this change, the plugin instead follows the pattern set by the text plugin: it places all fields into the columns array. (The two special fields are still separate.) A few adjustments were necessary to the columns array framework to allow use of the special columns along with the `columns` column.

Modified unit tests and the README to reflect this change. The change should be backward compatible because few users are likely relying on the dummy field names.

Added unit tests to verify that schema-based table functions work. A test shows that, due to the unfortunate config property name "schema", users of this plugin cannot combine a config table function with the schema attribute in the way promised in DRILL-6965.
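As a rough illustration of the table-function support described above, a hedged example query; the format type name is taken from the plugin's README as best remembered, while the regex and log path are made up, so verify the exact property names there:

    public class LogRegexTableFunction {
      // Table function: override the regex (and maxErrors) for a single query.
      // With no schema supplied, fields land in the `columns` array per this change.
      static final String QUERY =
          "SELECT columns FROM table(dfs.`/var/log/sample.log`("
          + "  type => 'logRegex',"
          + "  regex => '(\\d{4}-\\d{2}-\\d{2}) (\\w+) (.*)',"
          + "  maxErrors => 10))";
    }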

    • -8 / +8     ./apache/drill/TestSchemaWithTableFunction.java
    • -57 / +412  ./apache/drill/exec/store/log/TestLogReader.java
  … 14 more files in changeset.
DRILL-7292: Remove V1 and V2 text readers

Drill 1.16 introduced the "V3" text reader, based on the row set and provided schema mechanisms. V3 was available by system/session option while the functionality was considered experimental.

The functionality has now undergone thorough testing. This commit makes the V3 text reader available by default, and removes the code for the original "V1" and the "new" (compliant, "V2") text readers.

The system/session options that controlled reader selection are retained for backward compatibility, but they no longer do anything.

Specific changes:

* Removed the V2 "compliant" text reader.

* Moved the "V3" reader to replace the "compliant" version.

* Renamed the "compliant" package to "reader".

* Removed the V1 text reader.

* Moved the V1 text writer (still used with the V2 and V3 readers) into a new "writer" package adjacent to the reader.

* Removed the CSV tests for the V2 reader, including those that demonstrated bugs in V2.

* V2 did not properly handle the quote escape character. One or two unit tests depended on the broken behavior; fixed them for the correct behavior.

* The behavior of "messy quotes" (those that appear in a non-quoted field) was undefined for the text reader. Added a test to clearly demonstrate the (somewhat odd) behavior. The behavior itself was not changed.

Reran all unit tests to ensure that they work with the now-default V3 text reader.

closes #1806

    • -4 / +9  ./apache/drill/TestSelectWithOption.java
  … 45 more files in changeset.
DRILL-7268: Read Hive array with parquet native reader

1. Fixed preserving of the group originalType for the projected schema in DrillParquetReader.

2. Added reading of the LIST logical type to DrillParquetGroupConverter. An intermediate no-op converter is used to skip writing for the next nested repeated field once the parent field is recognized as a LIST. For this, skipRepeated 'true' is passed to the child converter's constructor.

close apache/drill#1805

  … 6 more files in changeset.