HADOOP-16568. S3A FullCredentialsTokenBinding fails if local credentials are unset. (#1441)

Contributed by Steve Loughran.

Move the loading to deployUnbonded() (where the credentials are required) and add a safety check when a new DT is requested.

Change-Id: I03c69aa2e16accfccddca756b2771ff832e7dd58

HADOOP-16900. Very large files can be truncated when written through the S3A FileSystem.

Contributed by Mukund Thakur and Steve Loughran.

This patch ensures that writes to S3A fail when more than 10,000 blocks are written. That upper bound still exists. To write massive files, make sure that the value of fs.s3a.multipart.size is set to a size which is large enough to upload the files in fewer than 10,000 blocks.
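
To illustrate the arithmetic: with a part size of 512M, the 10,000-block cap allows objects of up to roughly 5 TB. A minimal configuration fragment (the value is illustrative, not a recommendation):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // 512M x 10,000 parts ~= 5 TB maximum object size under the cap.
    conf.set("fs.s3a.multipart.size", "512M");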

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a

  … 3 more files in changeset.
HADOOP-16953. tuning s3guard disabled warnings (#1962)

Contributed by Steve Loughran.

The S3Guard absence warning of HADOOP-16484 has been changed so that by default the S3A connector only logs at debug when the connection to the S3 Store does not have S3Guard enabled.

The option to control this log level is now fs.s3a.s3guard.disabled.warn.level and can be one of: silent, inform, warn, fail.

On a failure, an ExitException is raised with exit code 49.

For details on this safety feature, consult the s3guard documentation.
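
A minimal configuration fragment choosing a stricter level, using the values listed above (picking "fail" is illustrative):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // One of: silent, inform, warn, fail; "fail" raises an ExitException (exit code 49).
    conf.set("fs.s3a.s3guard.disabled.warn.level", "fail");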

Change-Id: If868671c9260977c2b03b3e475b9c9531c98ce79

  … 2 more files in changeset.
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948)

Contributed by Steve Loughran.

This is a successor to HADOOP-16346, which enabled the S3A connector to load the native openssl SSL libraries for better HTTPS performance. That patch required wildfly.jar to be on the classpath. This update:

* Makes wildfly.jar optional except in the special case that "fs.s3a.ssl.channel.mode" is set to "openssl".
* Retains the declaration of wildfly.jar as a compile-time dependency in the hadoop-aws POM. This means that unless explicitly excluded, applications importing that published maven artifact will, transitively, add the specified wildfly JAR into their classpath for compilation/testing/distribution.

This is done for packaging and to offer that optional speedup. It is not mandatory: applications importing the hadoop-aws POM can exclude it if they choose.
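
A minimal configuration fragment for the one case where wildfly.jar is still required, per the first bullet above:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Only this mode needs wildfly.jar on the classpath after this patch.
    conf.set("fs.s3a.ssl.channel.mode", "openssl");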

Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d

  … 4 more files in changeset.
HADOOP-13873. log DNS addresses on s3a initialization.

Contributed by Mukund Thakur.

If you set the log org.apache.hadoop.fs.s3a.impl.NetworkBinding to DEBUG, then when the S3A bucket probe is made, the DNS address of the S3 endpoint is calculated and printed.

This is useful to see if a large set of processes are all using the same IP address from the pool of load balancers to which AWS directs clients when an AWS S3 endpoint is resolved. This can have implications for performance: if all clients access the same load balancer, performance may be suboptimal.

Note: if bucket probes are disabled (fs.s3a.bucket.probe = 0), the DNS logging does not take place.
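
A hedged sketch of enabling that logger programmatically through the Log4j 1.x API shipped with Hadoop (a log4j.properties entry achieves the same):

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    // Turn on DEBUG for the class which prints the resolved DNS address.
    Logger.getLogger("org.apache.hadoop.fs.s3a.impl.NetworkBinding")
        .setLevel(Level.DEBUG);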

Change-Id: I21b3ac429dc0b543f03e357fdeb94c2d2a328dd8

HADOOP-16988. Remove source code from branch-2. (aajisaka via jhung)

This closes #1959

  … 10832 more files in changeset.
HADOOP-16465 listLocatedStatus() optimisation (#1943)

Contributed by Mukund Thakur

Optimize S3AFileSystem.listLocatedStatus() to perform list operations directly and then fall back to HEAD checks for files.
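
For context, a minimal sketch of the call this optimises (bucket and path illustrative; assumes an initialised S3A FileSystem fs):

    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    RemoteIterator<LocatedFileStatus> it =
        fs.listLocatedStatus(new Path("s3a://example-bucket/dir"));
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      // process each entry; with this patch, per-file HEAD checks are a fallback
    }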

Change-Id: Ia2c0fa6fcc5967c49b914b92f41135d07dab0464

  … 1 more file in changeset.
HADOOP-16939 fs.s3a.authoritative.path should support multiple FS URIs (#1914)

Add a unit test and a new ITest, then fix the issue: different scheme or bucket == skip.

Factored out the underlying logic for unit testing; also moved maybeAddTrailingSlash to S3AUtils (while retaining/forwarding the existing method in S3AFS).
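
A minimal configuration fragment for the multi-URI support in the title (bucket names hypothetical):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Comma-separated list of authoritative trees, now allowed to span buckets.
    conf.set("fs.s3a.authoritative.path",
        "s3a://bucket-a/tables/,s3a://bucket-b/tables/");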

Tested: london. Sole failure is testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations); filed HADOOP-16853.

Change-Id: I4b8d0024469551eda0ec70b4968cba4abed405ed

  … 2 more files in changeset.
HADOOP-14918. Remove the Local Dynamo DB test option (branch-2.10) (#1864). Contributed by Jonathan Hung and Gabor Bota.

    • -0 +1 ./s3a/s3guard/DynamoDBMetadataStore.java
  … 7 more files in changeset.
HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries (#1851). Contributed by Gabor Bota.

Adding a new feature to S3GuardTool's fsck: -fix.
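
A hedged usage sketch (bucket and path illustrative; consult the s3guard documentation for the full flag set):

hadoop s3guard fsck -fix s3a://example-bucket/path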

Change-Id: I2cdb6601fea1d859b54370046b827ef06eb1107d

  … 3 more files in changeset.
HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard (#1646)

Contributed by Steve Loughran

* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off

    • -2 +14 ./s3a/s3guard/PathMetadataDynamoDBTranslation.java
  … 4 more files in changeset.
HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS key in rename/copy.

Contributed by Mukund Thakur.

This addresses an issue which surfaced with KMS encryption: the wrong KMS key could be picked up in the S3 COPY operation, so renamed files, while encrypted, would end up with the bucket default key.

As well as adding tests in the new suite ITestS3AEncryptionWithDefaultS3Settings, AbstractSTestS3AHugeFiles has a new test method to verify that the encryption settings also work for large files copied via multipart operations.

  … 8 more files in changeset.
HADOOP-16767 Handle non-IO exceptions in reopen()

Contributed by Sergei Poganshev.

Catches Exception instead of IOException in closeStream() and so handles exceptions such as SdkClientException by aborting the wrapped stream. This will increase resilience to failures, as any which occur during stream closure will be caught. Furthermore, because the underlying HTTP connection is aborted, rather than closed, it will not be recycled to cause problems on subsequent operations.

HADOOP-16711.

This adds a new option, fs.s3a.bucket.probe, range (0-2), to control which probe for bucket existence to perform on startup.

0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.

When set to 0, bucket existence checks won't be done during initialization, thus making startup faster. When the bucket is not available in S3, or if fs.s3a.endpoint points to the wrong instance of a private S3 store, consecutive calls like listing, read, write etc. will fail with an UnknownStoreException.
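
A minimal configuration fragment disabling the startup probe, per the table above:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // 0 = no existence check at startup; a missing bucket surfaces later
    // as an UnknownStoreException on the first real operation.
    conf.set("fs.s3a.bucket.probe", "0");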

Contributed by:

* Mukund Thakur (main patch and tests)
* Rajesh Balamohan (v0 list and performance tests)
* lqjacklee (HADOOP-15990/v2 list)
* Steve Loughran (UnknownStoreException support)

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
new file: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
modified: hadoop-tools/hadoop-aws/src/test/resources/core-site.xml

Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384

    • -0 +57 ./s3a/UnknownStoreException.java
    • -0 +73 ./s3a/impl/ErrorTranslation.java
  … 10 more files in changeset.
HADOOP-15961. S3A committers: make sure there's regular progress() calls.

Contributed by lqjacklee.

Change-Id: I13ca153e1e32b21dbe64d6fb25e260e0ff66154d

    • -1 +2 ./s3a/commit/staging/StagingCommitter.java
  … 3 more files in changeset.
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.

Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is built up into batches of a thousand and then POSTed in a single large DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes per second *and* each entry in that POST counts as a single write, one of those posts alone can trigger throttling on an already loaded S3 directory tree. That can trigger backoff and retry, with the same thousand-entry post, and so recreate the exact same problem.

Fixes:

* Page size for delete object requests is set in fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true) can be set to false to disable throttle retry logic in the AWS client SDK; it is all handled in the S3A client. This gives more visibility into when operations are being throttled (see the configuration fragment after this list).
* Bulk delete throttling events are logged to the org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling load over time.
* DynamoDB metastore throttle resilience issues have also been identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling flag does not apply to DDB IO, precisely because there may still be lurking issues there and it is safest to rely on the DynamoDB client SDK.
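
A minimal configuration fragment for the tuning described in the list above (values illustrative):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Smaller pages spread the DeleteObjects write load; 250 is the new default.
    conf.set("fs.s3a.bulk.delete.page.size", "100");
    // Leave throttling retries to the S3A client rather than the AWS SDK.
    conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);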

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84

    • -0 +142 ./s3a/impl/BulkDeleteRetryHandler.java
    • -2 +2 ./s3a/s3guard/DumpS3GuardDynamoTable.java
    • -29 +68 ./s3a/s3guard/DynamoDBMetadataStore.java
    • -3 +13 ./s3a/s3guard/PurgeS3GuardDynamoTable.java
    • -0 +126 ./s3a/s3guard/RetryingCollection.java
    • -0 +15 ./s3a/s3guard/S3GuardTableAccess.java
  … 12 more files in changeset.
HADOOP-16801. S3Guard listFiles will not query S3 if all listings are authoritative (#1815). Contributed by Mustafa İman.

    • -2 +38 ./s3a/s3guard/MetadataStoreListFilesIterator.java
  … 1 more file in changeset.
HADOOP-16746. mkdirs and s3guard Authoritative mode.

Contributed by Steve Loughran.

This fixes two problems with S3Guard authoritative mode and the auth directory flags which are stored in DynamoDB.

1. mkdirs was creating dir markers without the auth bit, forcing needless scans on newly created directories and files subsequently added; it was only with the first listStatus call on that directory that the dir would be marked as authoritative, even though it would be complete already.

2. listStatus(path) would reset the authoritative status bit of all child directories even if they were already marked as authoritative.

Issue #2 is possibly the most expensive, as any treewalk using listStatus (e.g. globfiles) would clear the auth bit for all child directories before listing them. And this would happen every single time... essentially you weren't getting authoritative directory listings.

For the curious, the major bug was actually found during testing; we'd all missed it during reviews. A lesson there: the better the tests, the fewer the bugs. Maybe also: something obvious and significant can get by code reviews.

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestRestrictedReadAccess.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreAuthoritativeMode.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java

Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7

    • -8 +25 ./s3a/s3guard/DynamoDBMetadataStore.java
  … 9 more files in changeset.
HADOOP-16792: Make S3 client request timeout configurable.

Contributed by Mustafa Iman.

This adds a new configuration option, fs.s3a.connection.request.timeout, to declare the timeout on HTTP requests to the AWS service; 0 means no timeout. Measured in seconds; the usual time suffixes are all supported.

Important: this is the maximum duration of any AWS service call, including upload and copy operations. If non-zero, it must be larger than the time to upload multi-megabyte blocks to S3 from the client, and to rename many-GB files. Use with care.
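
A minimal configuration fragment (the value is illustrative; remember the caveat above about long uploads and renames):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // 0 (no timeout) is the default; time suffixes such as "s" are supported.
    conf.set("fs.s3a.connection.request.timeout", "120s");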

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c

  … 4 more files in changeset.
HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu.

    • -1 +3 ./s3a/s3guard/DynamoDBMetadataStore.java
    • -0 +44 ./s3a/s3guard/DynamoDBMetadataStoreTableManager.java
  … 5 more files in changeset.
HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (#1761). Contributed by Steve Loughran

* Enhanced builder + FS spec
* S3A FS to use this to skip HEAD on open, and to use version/etag when opening the file

Works with S3AFileStatus and S3ALocatedFileStatus.
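
A minimal sketch of the enhanced builder (helper name hypothetical; the builder shape is per this patch):

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Reusing a FileStatus from an earlier listing lets S3A skip its HEAD probe.
    static FSDataInputStream openWithStatus(FileSystem fs, Path path)
        throws Exception {
      FileStatus status = fs.getFileStatus(path); // or an entry from a listing
      return fs.openFile(path)
          .withFileStatus(status)
          .build()  // returns a CompletableFuture<FSDataInputStream>
          .get();   // await the (possibly asynchronous) open
    }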

  … 17 more files in changeset.
HADOOP-16346. Stabilize S3A OpenSSL support.

Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`. The new option is documented and marked as experimental. For details on how to use this, consult the performance document in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections should use OpenSSL", which was reverted because of incompatibilities between the wildfly OpenSSL client and the AWS HTTPS servers (HADOOP-16347). With the Wildfly release moved up to 1.0.7.Final (HADOOP-16405), everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e

  … 8 more files in changeset.
HADOOP-16697. Tune/audit S3A authoritative mode.

Contains:

* HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination directory as authoritative on success.
* HADOOP-16684. S3guard bucket info to list a bit more about authoritative paths.
* HADOOP-16722. S3GuardTool to support FilterFileSystem.

This patch improves the marking of newly created/imported directory trees in S3Guard DynamoDB tables as authoritative.

Specific changes:

* Renamed directories are marked as authoritative if the entire operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write, there's no overwriting of their authoritative flag.

s3guard import changes:

* New -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree is to be marked as authoritative:

hadoop s3guard import -authoritative -verbose s3a://bucket/path

When importing a listing and a file is found, the import tool queries the metastore and only updates the entry if the file is different from before, where different == new timestamp, etag, or length. S3Guard can get timestamp differences due to clock skew in PUT operations. As the recursive list performed by the import command doesn't retrieve the versionID, the existing entry may in fact be more complete. When updating an existing entry due to clock skew, the existing version ID is propagated to the new entry (note: the etags must match; this is needed to deal with inconsistent listings).

There is a new s3guard command to audit an s3guard bucket/path's authoritative state:

hadoop s3guard authoritative -check-config s3a://bucket/path

This is primarily for testing/auditing.

The s3guard bucket-info command also provides some more details on the authoritative state of a store (HADOOP-16684).

Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91

    • -0 +255 ./s3a/s3guard/AuthoritativeAuditOperation.java
    • -2 +16 ./s3a/s3guard/BulkOperationState.java
    • -64 +237 ./s3a/s3guard/DynamoDBMetadataStore.java
    • -0 +272 ./s3a/s3guard/ImportOperation.java
    • -2 +10 ./s3a/s3guard/LocalMetadataStore.java
    • -0 +70 ./s3a/s3guard/MetastoreInstrumentation.java
    • -0 +72 ./s3a/s3guard/MetastoreInstrumentationImpl.java
    • -0 +12 ./s3a/s3guard/PathMetadataDynamoDBTranslation.java
  … 17 more files in changeset.
HADOOP-16645. S3A Delegation Token extension point to use StoreContext.

Contributed by Steve Loughran.

This is part of the ongoing refactoring of the S3A codebase, with the delegation token support (HADOOP-14556) no longer given a direct reference to the owning S3AFileSystem. Instead it gets a StoreContext and a new interface, DelegationOperations, to access those operations offered by S3AFS which are specifically needed by the DT bindings.

The sole operation needed is listAWSPolicyRules(), which is used to allow the S3A FS and the S3Guard metastore to return the AWS policy rules needed to access their specific services/buckets/tables, allowing the AssumedRole delegation token to be locked down.

As further restructuring takes place, that interface's implementation can be moved to wherever the new home for those operations ends up.

Although it changes the API of an extension point, that feature (S3 Delegation Tokens) has not shipped; backwards compatibility is not a problem, except for anyone who has implemented DT support against trunk. To those developers: sorry.

Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da

    • -0 +28 ./s3a/auth/delegation/DelegationOperations.java
  … 3 more files in changeset.