HADOOP-16900. Very large files can be truncated when written through the S3A FileSystem.

Contributed by Mukund Thakur and Steve Loughran.

This patch ensures that writes to S3A fail when more than 10,000 blocks are written. That upper bound still exists. To write massive files, make sure that the value of fs.s3a.multipart.size is set to a size which is large enough to upload the files in fewer than 10,000 blocks.

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a

  1. … 6 more files in changeset.
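The sizing rule above is simple arithmetic; a back-of-the-envelope sketch (the file and part sizes are made-up examples, not values from the patch):

```shell
# How many multipart blocks does an upload need for a given
# fs.s3a.multipart.size? S3 rejects uploads of more than 10,000 parts.
file_size_mib=$(( 1024 * 1024 ))    # a 1 TiB file
part_size_mib=64                    # fs.s3a.multipart.size=64M
parts=$(( (file_size_mib + part_size_mib - 1) / part_size_mib ))
echo "parts needed: $parts"         # 16384: over the limit, the write fails
part_size_mib=128                   # fs.s3a.multipart.size=128M
parts=$(( (file_size_mib + part_size_mib - 1) / part_size_mib ))
echo "parts needed: $parts"         # 8192: safely under the limit
```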
HADOOP-16953. tuning s3guard disabled warnings (#1962)

Contributed by Steve Loughran.

The S3Guard absence warning of HADOOP-16484 has been changed so that by default the S3A connector only logs at debug when the connection to the S3 store does not have S3Guard enabled.

The option to control this log level is now fs.s3a.s3guard.disabled.warn.level and can be one of: silent, inform, warn, fail. On a failure, an ExitException is raised with exit code 49.

For details on this safety feature, consult the s3guard documentation.

Change-Id: If868671c9260977c2b03b3e475b9c9531c98ce79

    • -0
    • +29
    ./markdown/tools/hadoop-aws/s3guard.md
  1. … 3 more files in changeset.
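The four levels form a small closed set; a sketch that validates a proposed value before it goes into configuration (the validation script itself is illustrative, only the option name and levels come from the commit):

```shell
# Validate a value for fs.s3a.s3guard.disabled.warn.level against the
# four levels named above, before writing it into core-site.xml.
level="fail"
case "$level" in
  silent|inform|warn|fail)
    echo "fs.s3a.s3guard.disabled.warn.level=$level" ;;
  *)
    echo "invalid level: $level (use silent|inform|warn|fail)" >&2
    exit 1 ;;
esac
```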
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948)

Contributed by Steve Loughran.

This is a successor to HADOOP-16346, which enabled the S3A connector to load the native openssl SSL libraries for better HTTPS performance. That patch required wildfly.jar to be on the classpath. This update:

* Makes wildfly.jar optional except in the special case that "fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time dependency in the hadoop-aws POM. This means that unless explicitly excluded, applications importing that published maven artifact will, transitively, add the specified wildfly JAR into their classpath for compilation/testing/distribution.

This is done for packaging and to offer that optional speedup. It is not mandatory: applications importing the hadoop-aws POM can exclude it if they choose.

Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d

    • -9
    • +27
    ./markdown/tools/hadoop-aws/performance.md
  1. … 3 more files in changeset.
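Since the JAR stays a transitive compile-time dependency, downstream builds that want it gone need an explicit exclusion. A hypothetical POM fragment (the wildfly artifact coordinates and the hadoop-aws version are assumptions, not taken from the commit):

```xml
<!-- Hypothetical: import hadoop-aws while excluding the optional
     wildfly OpenSSL JAR (coordinates and version assumed). -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>3.3.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.wildfly.openssl</groupId>
      <artifactId>wildfly-openssl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```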
HADOOP-16988. Remove source code from branch-2. (aajisaka via jhung)

This closes #1959

    • -2204
    • +0
    ./markdown/tools/hadoop-aws/index.md
    • -769
    • +0
    ./markdown/tools/hadoop-aws/s3guard.md
    • -1046
    • +0
    ./markdown/tools/hadoop-aws/testing.md
  1. … 10843 more files in changeset.
HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider

Contributed by Nicholas Chammas.

    • -0
    • +25
    ./markdown/tools/hadoop-aws/index.md
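A minimal sketch of the kind of invocation that documentation covers, assuming the AWS SDK v1 class name and a named profile in ~/.aws/credentials (both assumptions, not details from the commit):

```shell
# Hypothetical: authenticate S3A via a named AWS profile.
export AWS_PROFILE=dev-profile    # profile name is a made-up example
hadoop fs \
  -D fs.s3a.aws.credentials.provider=com.amazonaws.auth.profile.ProfileCredentialsProvider \
  -ls s3a://mybucket/
```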
HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries (#1851). Contributed by Gabor Bota.

Adding a new feature to S3GuardTool's fsck: -fix.

Change-Id: I2cdb6601fea1d859b54370046b827ef06eb1107d

    • -1
    • +7
    ./markdown/tools/hadoop-aws/s3guard.md
  1. … 5 more files in changeset.
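The commit message names only the new flag; a hypothetical invocation (the exact argument shape is assumed, consult the s3guard docs for the real usage):

```shell
# Hypothetical: run S3Guard fsck in repair mode to remove orphaned
# metastore entries under a path.
hadoop s3guard fsck -fix s3a://mybucket/path
```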
HADOOP-16319. S3A Etag tests fail with default encryption enabled on bucket.

Contributed by Ben Roling.

ETag values are unpredictable with some S3 encryption algorithms. Skip ITestS3AMiscOperations tests which make assertions about etags when default encryption on a bucket is enabled.

When testing with an AWS account which lacks the privilege for a call to getBucketEncryption(), we don't skip the tests. In the event of failure, developers get to expand the permissions of the account or relax default encryption settings.

    • -0
    • +7
    ./markdown/tools/hadoop-aws/testing.md
  1. … 1 more file in changeset.
HADOOP-14936. S3Guard: remove experimental from documentation.

Contributed by Gabor Bota.

    • -7
    • +2
    ./markdown/tools/hadoop-aws/s3guard.md
HADOOP-16711.

This adds a new option fs.s3a.bucket.probe, range (0-2), to control which probe for bucket existence to perform on startup.

0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.

When set to 0, bucket existence checks won't be done during initialization, thus making it faster. When the bucket is not available in S3, or if fs.s3a.endpoint points to the wrong instance of a private S3 store, consecutive calls like listing, read, write, etc. will fail with an UnknownStoreException.

Contributed by:

* Mukund Thakur (main patch and tests)
* Rajesh Balamohan (v0 list and performance tests)
* lqjacklee (HADOOP-15990/v2 list)
* Steve Loughran (UnknownStoreException support)

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
new file: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
modified: hadoop-tools/hadoop-aws/src/test/resources/core-site.xml

Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384

    • -0
    • +20
    ./markdown/tools/hadoop-aws/index.md
    • -0
    • +16
    ./markdown/tools/hadoop-aws/performance.md
  1. … 13 more files in changeset.
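A sketch of opting out of the startup probe for a single command (the bucket name is a placeholder):

```shell
# fs.s3a.bucket.probe=0 skips the bucket existence check at startup;
# a missing bucket then surfaces later as an UnknownStoreException.
hadoop fs -D fs.s3a.bucket.probe=0 -ls s3a://mybucket/
```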
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.

Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is built up into batches of a thousand and then POSTed in a single large DeleteObjects request. But as the IO capacity allowed on an S3 partition may only be 3500 writes per second *and* each entry in that POST counts as a single write, one of those posts alone can trigger throttling on an already loaded S3 directory tree. This can trigger backoff and retry, with the same thousand-entry post, and so recreate the exact same problem.

Fixes:

* Page size for delete object requests is set in fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true) can be set to false to disable throttle retry logic in the AWS client SDK - it is all handled in the S3A client. This gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling load over time.
* DynamoDB metastore throttle resilience issues have also been identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling flag does not apply to DDB IO precisely because there may still be lurking issues there and it is safest to rely on the DynamoDB client SDK.

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84

    • -0
    • +20
    ./markdown/tools/hadoop-aws/testing.md
  1. … 26 more files in changeset.
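Putting the two options together for a heavily loaded store (the page-size value 100 and the paths are illustrative choices, not recommendations from the commit):

```shell
# Smaller delete pages, with throttle retries handled only by the
# S3A client rather than the AWS SDK.
hadoop fs \
  -D fs.s3a.bulk.delete.page.size=100 \
  -D fs.s3a.experimental.aws.s3.throttling=false \
  -rm -r s3a://mybucket/large-tree
```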
HADOOP-16832. S3Guard testing doc: Add required parameters for S3Guard testing in IDE. (#1822). Contributed by Mukund Thakur.

    • -0
    • +25
    ./markdown/tools/hadoop-aws/testing.md
HADOOP-16792: Make S3 client request timeout configurable.

Contributed by Mustafa Iman.

This adds a new configuration option fs.s3a.connection.request.timeout to declare the timeout on HTTP requests to the AWS service; 0 means no timeout. Measured in seconds; the usual time suffixes are all supported.

Important: this is the maximum duration of any AWS service call, including upload and copy operations. If non-zero, it must be larger than the time to upload multi-megabyte blocks to S3 from the client, and to rename many-GB files. Use with care.

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c

    • -0
    • +17
    ./markdown/tools/hadoop-aws/index.md
  1. … 4 more files in changeset.
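As a sketch, a five-minute cap on every AWS call (the value is an example; per the warning above, it must exceed the slowest expected upload or rename):

```shell
# 0 (the default) means no timeout; the usual time suffixes apply.
hadoop fs -D fs.s3a.connection.request.timeout=5m -ls s3a://mybucket/
```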
HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu.

    • -0
    • +56
    ./markdown/tools/hadoop-aws/s3guard.md
    • -0
    • +21
    ./markdown/tools/hadoop-aws/testing.md
  1. … 7 more files in changeset.
HADOOP-16346. Stabilize S3A OpenSSL support.

Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`. The new option is documented and marked as experimental. For details on how to use this, consult the performance document in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections should use OpenSSL" - which was reverted because of incompatibilities between the wildfly OpenSSL client and the AWS HTTPS servers (HADOOP-16347). With the Wildfly release moved up to 1.0.7.Final (HADOOP-16405) everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e

    • -10
    • +51
    ./markdown/tools/hadoop-aws/performance.md
  1. … 8 more files in changeset.
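Opting in to the experimental channel looks like this (the bucket name is a placeholder; the wildfly JAR must be on the classpath):

```shell
# Switch the S3A TLS implementation to the native OpenSSL binding.
hadoop fs -D fs.s3a.ssl.channel.mode=openssl -ls s3a://mybucket/
```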
HADOOP-16697. Tune/audit S3A authoritative mode.

Contains:

HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.

This patch improves the marking of newly created/imported directory trees in S3Guard DynamoDB tables as authoritative.

Specific changes:

* Renamed directories are marked as authoritative if the entire operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write, there's no overwriting of their authoritative flag.

s3guard import changes:

* New -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree is to be marked as authoritative:

hadoop s3guard import -authoritative -verbose s3a://bucket/path

When importing a listing and a file is found, the import tool queries the metastore and only updates the entry if the file is different from before, where different == new timestamp, etag, or length. S3Guard can get timestamp differences due to clock skew in PUT operations. As the recursive list performed by the import command doesn't retrieve the versionID, the existing entry may in fact be more complete. When updating an existing entry due to clock skew, the existing version ID is propagated to the new entry (note: the etags must match; this is needed to deal with inconsistent listings).

There is a new s3guard command to audit an s3guard bucket/path's authoritative state:

hadoop s3guard authoritative -check-config s3a://bucket/path

This is primarily for testing/auditing.

The s3guard bucket-info command also provides some more details on the authoritative state of a store (HADOOP-16684).

Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91

    • -6
    • +188
    ./markdown/tools/hadoop-aws/s3guard.md
  1. … 31 more files in changeset.
HADOOP-16758. Refine testing.md to tell user better how to use auth-keys.xml (#1753)

Contributed by Mingliang Liu

    • -3
    • +4
    ./markdown/tools/hadoop-aws/testing.md
HADOOP-16424. S3Guard fsck: Check internal consistency of the MetadataStore (#1691). Contributed by Gabor Bota.

    • -1
    • +38
    ./markdown/tools/hadoop-aws/s3guard.md
  1. … 5 more files in changeset.
HADOOP-16735. Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN. Contributed by Mingliang Liu

This closes #1733

  1. … 1 more file in changeset.
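The three environment variables involved (the values are placeholders; the variable names are the standard ones the AWS SDK reads):

```shell
# EnvironmentVariableCredentialsProvider reads all three variables;
# AWS_SESSION_TOKEN is what enables temporary (session) credentials.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret"
export AWS_SESSION_TOKEN="example-session-token"
# hadoop fs -ls s3a://mybucket/   # would now pick these up (needs a cluster)
```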
HADOOP-16709. S3Guard: Make authoritative mode exclusive for metadata - don't check for expiry for authoritative paths (#1721). Contributed by Gabor Bota.

    • -4
    • +10
    ./markdown/tools/hadoop-aws/s3guard.md
  1. … 7 more files in changeset.