HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.

Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is

built up into batches of a thousand and then POSTed in a single large

DeleteObjects request.

But as the IO capacity allowed on an S3 partition may be only 3500 writes

per second *and* each entry in that POST counts as a single write,

one of those posts alone can trigger throttling on an already loaded

S3 directory tree. That in turn can trigger backoff and retry with the

same thousand-entry POST, recreating the exact same problem.


* The page size for delete object requests is now configurable; the default is 250.

* A new property (default=true) can be set to false to disable throttle

retry logic in the AWS client SDK; throttling is then handled entirely

in the S3A client. This gives more visibility into when operations are

being throttled.

* Bulk delete throttling events are logged to the

org.apache.hadoop.fs.s3a.throttled log at INFO; if such events appear

often, choose a smaller page size.

* The metric "store_io_throttled" is incremented by the full count of

delete entries when a single DeleteObjects request is throttled.

* A new quantile, "store_io_throttle_rate" can track throttling

load over time.

* DynamoDB metastore throttle resilience issues have also been

identified and fixed. Note: the new flag does not apply to DDB IO,

precisely because there may still be lurking issues there and it is

safest to rely on the DynamoDB client's own retry handling.
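As a sketch, the two tunables above might be set as follows. The property names here are assumptions (the commit message elides the real keys); check the release documentation for the exact names before use:

```xml
<!-- Hypothetical keys: the commit message above elides the actual property names. -->
<property>
  <name>fs.s3a.bulk.delete.page.size</name>
  <value>250</value>
  <description>Number of keys per DeleteObjects POST; lower it if the
    org.apache.hadoop.fs.s3a.throttled log shows frequent throttling.</description>
</property>
<property>
  <name>fs.s3a.experimental.aws.s3.throttling</name>
  <value>false</value>
  <description>When false, throttle retry logic in the AWS SDK is disabled;
    retries are then handled entirely by the S3A client.</description>
</property>
```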


Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84

  1. … 12 more files in changeset.
HADOOP-16801. S3Guard listFiles will not query S3 if all listings are authoritative (#1815). Contributed by Mustafa İman.

  1. … 1 more file in changeset.
HADOOP-16746. mkdirs and s3guard Authoritative mode.

Contributed by Steve Loughran.

This fixes two problems with S3Guard authoritative mode and

the auth directory flags which are stored in DynamoDB.

1. mkdirs was creating dir markers without the auth bit,

forcing needless scans on newly created directories and on

files subsequently added; it was only with the first listStatus call

on that directory that the dir would be marked as authoritative, even

though it was already complete.

2. listStatus(path) would reset the authoritative status bit of all

child directories even if they were already marked as authoritative.

Issue #2 is probably the more expensive, as any treewalk using listStatus

(e.g. globfiles) would clear the auth bit for all child directories before

listing them. And this would happen every single time;

essentially you were never getting authoritative directory listings.

For the curious, the major bug was actually found during testing;

we'd all missed it during reviews.

A lesson there: the better the tests the fewer the bugs.

Maybe also: something obvious and significant can get by code reviews.

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/

Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7

  1. … 9 more files in changeset.
HADOOP-16792: Make S3 client request timeout configurable.

Contributed by Mustafa Iman.

This adds a new configuration option fs.s3a.connection.request.timeout

to declare the time out on HTTP requests to the AWS service;

0 means no timeout.

Measured in seconds; the usual time suffixes are all supported.

Important: this is the maximum duration of any AWS service call,

including upload and copy operations. If non-zero, it must be larger

than the time to upload multi-megabyte blocks to S3 from the client,

and to rename many-GB files. Use with care.
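For example, the new option might be set like this. The key fs.s3a.connection.request.timeout comes from the commit message; the value shown is illustrative only:

```xml
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <!-- Illustrative: cap every AWS service call at 15 minutes.
       0 (the default) means no timeout; time suffixes are supported. -->
  <value>15m</value>
</property>
```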

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c

  1. … 4 more files in changeset.
HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu.

  1. … 5 more files in changeset.
HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (#1761). Contributed by Steve Loughran

* Enhanced builder + FS spec

* s3a FS to use this to skip HEAD on open

* and to use version/etag when opening the file

This works with both S3AFileStatus and S3ALocatedFileStatus.

  1. … 17 more files in changeset.
HADOOP-16346. Stabilize S3A OpenSSL support.

Introduces `openssl` as an option for the SSL channel mode setting.

The new option is documented and marked as experimental.

For details on how to use this, consult the performance document

in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections

should use OpenSSL", which was reverted because of

incompatibilities between the wildfly OpenSSL client and the AWS

HTTPS servers (HADOOP-16347). With the Wildfly release moved up

to 1.0.7.Final (HADOOP-16405), everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance

* HADOOP-16050: S3A SSL connections should use OpenSSL

* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8

* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar
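A hedged sketch of enabling the option. The property name below is an assumption based on s3a SSL channel mode naming (the commit message elides the actual key); consult the performance document for the real name:

```xml
<!-- Hypothetical key: the commit message elides the actual property name. -->
<property>
  <name>fs.s3a.ssl.channel.mode</name>
  <value>openssl</value>  <!-- experimental; omit to keep the default JSSE channel -->
</property>
```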

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e

  1. … 8 more files in changeset.
HADOOP-16697. Tune/audit S3A authoritative mode.


HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination

directory as authoritative on success.

HADOOP-16684. S3guard bucket info to list a bit more about

authoritative paths.

HADOOP-16722. S3GuardTool to support FilterFileSystem.

This patch improves the marking of newly created/import directory

trees in S3Guard DynamoDB tables as authoritative.

Specific changes:

* Renamed directories are marked as authoritative if the entire

operation succeeded (HADOOP-16474).

* When updating parent table entries as part of any table write,

there's no overwriting of their authoritative flag.

s3guard import changes:

* new -verbose flag to print out what is going on.

* The "s3guard import" command lets you declare that a directory tree

is to be marked as authoritative

hadoop s3guard import -authoritative -verbose s3a://bucket/path

When importing a listing and a file is found, the import tool queries

the metastore and only updates the entry if the file is different from

before, where different == new timestamp, etag, or length. S3Guard can get

timestamp differences due to clock skew in PUT operations.

As the recursive list performed by the import command doesn't retrieve the

versionID, the existing entry may in fact be more complete.

When updating an existing entry due to clock skew, the existing version ID

is propagated to the new entry (note: the etags must match; this is needed

to deal with inconsistent listings).

There is a new s3guard command to audit an S3Guard bucket/path's

authoritative state:

hadoop s3guard authoritative -check-config s3a://bucket/path

This is primarily for testing/auditing.

The s3guard bucket-info command also provides some more details on the

authoritative state of a store (HADOOP-16684).
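For context, authoritative mode is switched on per bucket or path through configuration. A minimal sketch follows; the property names are assumptions based on standard s3a naming and are not stated in the commit message above:

```xml
<!-- Hypothetical keys; not stated in the commit message above. -->
<property>
  <name>fs.s3a.metadatastore.authoritative</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.authoritative.path</name>
  <value>/tables</value>  <!-- illustrative path to treat as authoritative -->
</property>
```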

Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91

  1. … 17 more files in changeset.
HADOOP-16645. S3A Delegation Token extension point to use StoreContext.

Contributed by Steve Loughran.

This is part of the ongoing refactoring of the S3A codebase, with the

delegation token support (HADOOP-14556) no longer given a direct reference

to the owning S3AFileSystem. Instead it gets a StoreContext and a new

interface, DelegationOperations, to access those operations offered by S3AFS

which are specifically needed by the DT bindings.

The sole operation needed is listAWSPolicyRules(), which is used to allow

S3A FS and the S3Guard metastore to return the AWS policy rules needed to

access their specific services/buckets/tables, allowing the AssumedRole

delegation token to be locked down.

As further restructuring takes place, that interface's implementation

can be moved to wherever the new home for those operations ends up.

Although it changes the API of an extension point, that feature (S3

Delegation Tokens) has not shipped; backwards compatibility is not a

problem except for anyone who has implemented DT support against trunk.

To those developers: sorry.

Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da

  1. … 3 more files in changeset.
HADOOP-16424. S3Guard fsck: Check internal consistency of the MetadataStore (#1691). Contributed by Gabor Bota.

  1. … 3 more files in changeset.
HADOOP-16709. S3Guard: Make authoritative mode exclusive for metadata - don't check for expiry for authoritative paths (#1721). Contributed by Gabor Bota.

  1. … 4 more files in changeset.
HADOOP-16484. S3A to warn or fail if S3Guard is disabled - addendum: silent for S3GuardTool (#1714). Contributed by Gabor Bota.

Change-Id: I63b928ef5da425ef982dd4100a426fc23f64bac1

HADOOP-16665. Filesystems to be closed if they failed during initialize().

Contributed by Steve Loughran.

This patch changes FileSystem instantiation so that if an IOException or

RuntimeException is raised in the invocation of FileSystem.initialize(),

a best-effort attempt is made to close the FS instance; any exceptions

raised during that cleanup are swallowed.

The S3AFileSystem is also modified to do its own cleanup if an

IOException is raised during its initialize() process, it being the

FS we know has the "potential" to leak threads, especially in

extension points (e.g. AWS Authenticators) which spawn threads.

Change-Id: Ib84073a606c9d53bf53cbfca4629876a03894f04

  1. … 8 more files in changeset.
HADOOP-16477. S3A delegation token tests fail if fs.s3a.encryption.key set.

Contributed by Steve Loughran.

Change-Id: I843989f32472bbdefbd4fa504b26c7a614ab1cee

  1. … 12 more files in changeset.
HADOOP-16681. mvn javadoc:javadoc fails in hadoop-aws. Contributed by Xieming Li

HADOOP-16484. S3A to warn or fail if S3Guard is disabled (#1661). Contributed by Gabor Bota.

  1. … 3 more files in changeset.
HADOOP-16653. S3Guard DDB overreacts to no tag access (#1660). Contributed by Gabor Bota.

  1. … 2 more files in changeset.
HADOOP-16658. S3A connector does not support including the token renewer in the token identifier.

Contributed by Phil Zampino.

Change-Id: Iea9d5028dcf58bda4da985604f5cd3ac283619bd

  1. … 4 more files in changeset.
HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation.

Contributed by Steve Loughran.

Includes HADOOP-16651. S3 getBucketLocation() can return "US" for us-east.

Change-Id: Ifc0dca76e51495ed1a8fc0f077b86bf125deff40

  1. … 3 more files in changeset.
HADOOP-16635. S3A "directories only" scan still does a HEAD.

Contributed by Steve Loughran.

Change-Id: I5e41d7f721364c392e1f4344db83dfa8c5aa06ce

  1. … 3 more files in changeset.
Revert "HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos."

This reverts commit 7a4b3d42c4e36e468c2a46fd48036a6fed547853.

The patch broke TestRouterWebHDFSContractSeek as it turns out that

WebHDFSInputStream.available() is always 0.

  1. … 3 more files in changeset.
HADOOP-16520. Race condition in DDB table init and waiting threads. (#1576). Contributed by Gabor Bota.

Fixes HADOOP-16349. DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry

Change-Id: Ia83e92b9039ccb780090c99c41b4f71ef7539d35

  1. … 6 more files in changeset.
HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.

Contributed by lqjacklee.

Change-Id: I32bb00a683102e7ff8ff8ce0b8d9c3195ca7381c

  1. … 3 more files in changeset.
HADOOP-16570. S3A committers encounter scale issues.

Contributed by Steve Loughran.

This addresses two scale issues which have surfaced in large scale benchmarks

of the S3A Committers.

* Thread pools were not cleaned up.

They now are, with tests.

* OOM on job commit for jobs with many thousands of tasks,

each generating tens of (very large) files.

Instead of loading all pending commits into memory as a single list, the list

of files to load is the sole list which is passed around; .pendingset files are

loaded and processed in isolation, and reloaded if necessary for any

abort/rollback operation.

The parallel commit/abort/revert operations now work at the .pendingset level,

rather than that of individual pending commit files. The existing parallelized

Tasks API is still used to commit those files, but with a null thread pool, so

as to serialize the operations.

Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797

  1. … 11 more files in changeset.
HADOOP-16207 Improved S3A MR tests.

Contributed by Steve Loughran.

Replaces the committer-specific terasort and MR test jobs with parameterization

of the (now single) tests and use of file:// over hdfs:// as the cluster FS.

The parameterization ensures that only one of the specific committer tests

runs at a time; overloads of the test machines are less likely, and so the

suites can be pulled back into the parallel phase.

There's also more detailed validation of the stage outputs of the terasorting;

if one test fails the rest are all skipped. This and the fact that job

output is stored under target/yarn-${timestamp} means failures should

be more debuggable.

Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095

  1. … 15 more files in changeset.
HADOOP-16599. Allow a SignerInitializer to be specified along with a Custom Signer

  1. … 4 more files in changeset.
HADOOP-16458. LocatedFileStatusFetcher.getFileStatuses failing intermittently with S3

Contributed by Steve Loughran.


-S3A glob scans don't bother trying to resolve symlinks

-stack traces don't get lost in getFileStatuses() when exceptions are wrapped

-debug level logging of what is up in Globber

-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.

-ITestRestrictedReadAccess tests incomplete read access to files.

This adds a builder API for constructing globbers which other stores can use

so that they too can skip symlink resolution when not needed.

Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7

  1. … 11 more files in changeset.
HADOOP-16602. mvn package fails in hadoop-aws.

Contributed by Xieming Li.

Follow-up to HADOOP-16445

Change-Id: I72c62d55b734a0f67556844f398ef4a50d9ea585

HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.

Contributed by Steve Loughran.

This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client

offers a specific capability.

This is intended to allow applications to determine

* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.

* Whether a specific feature is believed to be available in the remote store.

As well as a common set of capabilities defined in CommonPathCapabilities,

file systems are free to add their own capabilities, prefixed with

fs. + schema + .

The plan is to identify and document more capabilities, and, for file systems which add new features, to always declare the availability of those features.


* The remote store is not expected to be checked for the feature;

it is more a check of the client API and the client's configuration/knowledge

of the state of the remote system.

* Permissions are not checked.
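The probe-before-call pattern can be sketched with a self-contained model. The interface below is a simplified stand-in for illustration only (the real method is FileSystem.hasPathCapability(Path, String) in Hadoop), and the capability names shown are hypothetical:

```java
import java.util.Set;

// Simplified stand-in for Hadoop's PathCapabilities interface (illustrative only).
interface PathCapabilities {
    boolean hasPathCapability(String path, String capability);
}

// A toy client that knows, client-side, which capabilities it believes it offers.
// No remote call is made and no permissions are checked.
class ToyStoreClient implements PathCapabilities {
    private static final Set<String> KNOWN = Set.of(
        "fs.capability.paths.append",   // hypothetical common capability name
        "fs.toy.fast-rename");          // store-specific: prefixed fs. + schema + .

    public boolean hasPathCapability(String path, String capability) {
        return KNOWN.contains(capability);
    }
}

public class ProbeDemo {
    public static void main(String[] args) {
        PathCapabilities fs = new ToyStoreClient();
        // Probe first instead of calling and catching UnsupportedOperationException.
        if (fs.hasPathCapability("/data", "fs.capability.paths.append")) {
            System.out.println("append believed available");
        }
    }
}
```

Note the probe reflects only the client's own knowledge, matching the caveats above: a true result does not guarantee the remote store will honour the operation.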

Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5

  1. … 34 more files in changeset.
HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB (#1332)

  1. … 4 more files in changeset.