HADOOP-16484. S3A to warn or fail if S3Guard is disabled (#1661). Contributed by Gabor Bota.

  1. … 3 more files in changeset.
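A hedged configuration sketch for the warn-or-fail behaviour described above; the property name and its levels are assumptions based on the change title, not taken from this changeset:

    import org.apache.hadoop.conf.Configuration;

    public class S3GuardWarnLevelSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed option: controls how S3A reacts when a bucket has no S3Guard
        // metadata store configured; plausible levels are "silent", "inform",
        // "warn" and "fail".
        conf.set("fs.s3a.s3guard.disabled.warn.level", "fail");
      }
    }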
HADOOP-16653. S3Guard DDB overreacts to no tag access (#1660). Contributed by Gabor Bota.

    • -13
    • +32
    ./s3a/s3guard/DynamoDBMetadataStoreTableManager.java
  1. … 2 more files in changeset.
HADOOP-16658. S3A connector does not support including the token renewer in the token identifier.

Contributed by Phil Zampino.

Change-Id: Iea9d5028dcf58bda4da985604f5cd3ac283619bd

  1. … 4 more files in changeset.
HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation.

Contributed by Steve Loughran.

Includes HADOOP-16651. S3 getBucketLocation() can return "US" for us-east.

Change-Id: Ifc0dca76e51495ed1a8fc0f077b86bf125deff40

    • -6
    • +5
    ./s3a/s3guard/DynamoDBMetadataStore.java
  1. … 3 more files in changeset.
HADOOP-16635. S3A "directories only" scan still does a HEAD.

Contributed by Steve Loughran.

Change-Id: I5e41d7f721364c392e1f4344db83dfa8c5aa06ce

  1. … 3 more files in changeset.
Revert "HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos."

This reverts commit 7a4b3d42c4e36e468c2a46fd48036a6fed547853.

The patch broke TestRouterWebHDFSContractSeek, as it turns out that WebHDFSInputStream.available() is always 0.

  1. … 3 more files in changeset.
HADOOP-16520. Race condition in DDB table init and waiting threads. (#1576). Contributed by Gabor Bota.

Fixes HADOOP-16349. DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry

Change-Id: Ia83e92b9039ccb780090c99c41b4f71ef7539d35

    • -419
    • +31
    ./s3a/s3guard/DynamoDBMetadataStore.java
    • -0
    • +693
    ./s3a/s3guard/DynamoDBMetadataStoreTableManager.java
    • -1
    • +1
    ./s3a/s3guard/PathMetadataDynamoDBTranslation.java
  1. … 6 more files in changeset.
HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.

Contributed by lqjacklee.

Change-Id: I32bb00a683102e7ff8ff8ce0b8d9c3195ca7381c

  1. … 3 more files in changeset.
HADOOP-16570. S3A committers encounter scale issues.

Contributed by Steve Loughran.

This addresses two scale issues which have surfaced in large scale benchmarks of the S3A Committers.

* Thread pools are not cleaned up. This now happens, with tests.

* OOM on job commit for jobs with many thousands of tasks, each generating tens of (very large) files.

Instead of loading all pending commits into memory as a single list, the list of files to load is the sole list which is passed around; .pendingset files are loaded and processed in isolation, and reloaded if necessary for any abort/rollback operation.

The parallel commit/abort/revert operations now work at the .pendingset level, rather than that of individual pending commit files. The existing parallelized Tasks API is still used to commit those files, but with a null thread pool, so as to serialize the operations.

Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797

    • -69
    • +397
    ./s3a/commit/AbstractS3ACommitter.java
    • -3
    • +5
    ./s3a/commit/magic/MagicS3GuardCommitter.java
    • -10
    • +33
    ./s3a/commit/staging/StagingCommitter.java
  1. … 11 more files in changeset.
HADOOP-16207 Improved S3A MR tests.

Contributed by Steve Loughran.

Replaces the committer-specific terasort and MR test jobs with parameterization of the (now single) tests and use of file:// over hdfs:// as the cluster FS.

The parameterization ensures that only one of the specific committer tests runs at a time; overloads of the test machines are less likely, and so the suites can be pulled back into the parallel phase.

There's also more detailed validation of the stage outputs of the terasorting; if one test fails the rest are all skipped. This, and the fact that job output is stored under target/yarn-${timestamp}, means failures should be more debuggable.

Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095

    • -1
    • +2
    ./s3a/commit/staging/StagingCommitter.java
  1. … 15 more files in changeset.
HADOOP-16599. Allow a SignerInitializer to be specified along with a Custom Signer

    • -0
    • +53
    ./s3a/auth/AwsSignerInitializer.java
    • -0
    • +147
    ./s3a/auth/SignerManager.java
    • -0
    • +31
    ./s3a/auth/delegation/DelegationTokenProvider.java
  1. … 4 more files in changeset.
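A hedged sketch of wiring up a custom signer with an initializer as described in HADOOP-16599 above; the fs.s3a.custom.signers property and its Name:SignerClass:SignerInitializerClass format are assumptions based on the change title, and the com.example classes are hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class CustomSignerSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed property and format: Name:SignerClass[:SignerInitializerClass];
        // both com.example classes are placeholders, not real implementations.
        conf.set("fs.s3a.custom.signers",
            "MySigner:com.example.MySigner:com.example.MySignerInitializer");
        // fs.s3a.signing-algorithm selects which signer the connector uses.
        conf.set("fs.s3a.signing-algorithm", "MySigner");
      }
    }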
HADOOP-16458. LocatedFileStatusFetcher.getFileStatuses failing intermittently with S3

Contributed by Steve Loughran.

Includes:

- S3A glob scans don't bother trying to resolve symlinks
- stack traces don't get lost in getFileStatuses() when exceptions are wrapped
- debug-level logging of what is up in Globber
- Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
- ITestRestrictedReadAccess tests incomplete read access to files.

This adds a builder API for constructing globbers which other stores can use so that they too can skip symlink resolution when not needed.

Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7

  1. … 11 more files in changeset.
HADOOP-16602. mvn package fails in hadoop-aws.

Contributed by Xieming Li.

Follow-up to HADOOP-16445

Change-Id: I72c62d55b734a0f67556844f398ef4a50d9ea585

HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.

Contributed by Steve Loughran.

This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client offers a specific capability.

This is intended to allow applications to determine:

* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.

* Whether a specific feature is believed to be available in the remote store.

As well as a common set of capabilities defined in CommonPathCapabilities, file systems are free to add their own capabilities, prefixed with "fs." + schema + ".".

The plan is to identify and document more capabilities, and, for file systems which add new features, to always declare the availability of those features.

Note:

* The remote store is not expected to be checked for the feature; it is more a check of the client API and the client's configuration/knowledge of the state of the remote system.

* Permissions are not checked.

Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5

  1. … 34 more files in changeset.
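A minimal usage sketch of the probe described above, assuming the hasPathCapability(path, capability) method this change adds to FileSystem; the capability string is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathCapabilityProbe {
      public static void main(String[] args) throws Exception {
        Path path = new Path("s3a://example-bucket/data");
        FileSystem fs = path.getFileSystem(new Configuration());
        // True only if the client believes the capability holds for this path;
        // the remote store is not probed and permissions are not checked.
        if (fs.hasPathCapability(path, "fs.capability.paths.append")) {
          // safe to call append() without handling UnsupportedOperationException
        }
      }
    }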
HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB (#1332)

    • -0
    • +99
    ./s3a/SignerManager.java
    • -1
    • +3
    ./s3a/s3guard/DynamoDBClientFactory.java
  1. … 4 more files in changeset.
HADOOP-16547. make sure that s3guard prune sets up the FS (#1402). Contributed by Steve Loughran.

Change-Id: Iaf71561cef6c797a3c66fed110faf08da6cac361

  1. … 1 more file in changeset.
HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown (#1454). Contributed by Gabor Bota.

    • -9
    • +23
    ./s3a/auth/MarshalledCredentialBinding.java
  1. … 1 more file in changeset.
HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8.

Contributed by Sahil Takiar.

This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common as the DelegatingSSLSocketFactory and binds the S3A connector to it so that it can avoid using those HTTPS algorithms which are underperformant on Java 8.

Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd

    • -0
    • +113
    ./s3a/impl/NetworkBinding.java
  1. … 13 more files in changeset.
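A hedged configuration sketch of the option described above; the property name and value are assumptions (the changeset does not name them) for steering the S3A connector away from GCM ciphers on Java 8:

    import org.apache.hadoop.conf.Configuration;

    public class S3ASslChannelSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed option and value: select a JSSE channel with the slow
        // GCM ciphers removed on Java 8.
        conf.set("fs.s3a.ssl.channel.mode", "default_jsse");
      }
    }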
HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch (#1433). Contributed by Gabor Bota.

Change-Id: Ied43ef1522dfc6a1210d6fc58c38d8208824931b

HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) (#1208). Contributed by Gabor Bota.

Change-Id: I6bbb331b6c0a41c61043e482b95504fda8a50596

    • -0
    • +483
    ./s3a/s3guard/S3GuardFsck.java
    • -0
    • +346
    ./s3a/s3guard/S3GuardFsckViolationHandler.java
  1. … 5 more files in changeset.
HADOOP-16490. Avoid/handle cached 404s during S3A file creation.

Contributed by Steve Loughran.

This patch avoids issuing any HEAD path request when creating a file with overwrite=true, so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile in their own code.

The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even after S3A is patched to not do it itself.

Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy, independently configurable from other retry policies; it is also exponential, but with different parameters. This is because every HEAD request will refresh any 404 cached in the S3 load balancers. It's not enough to retry: we have to have a suitable gap between attempts to (hopefully) ensure any cached entry will be gone.

The options and values are:

fs.s3a.s3guard.consistency.retry.interval: 2s

fs.s3a.s3guard.consistency.retry.limit: 7

The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught, so it is not downgraded to false. Thus: when a rename is unrecoverable, this fact is propagated.

Copy operations without S3Guard lack the confidence that the file exists, so don't retry the same way: they will fail fast with a different error message. However, because create(path, overwrite=false) no longer does HEAD path, we can at least be confident that S3A itself is not creating those cached 404 markers.

Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d

    • -1
    • +15
    ./s3a/RemoteFileChangedException.java
    • -2
    • +30
    ./s3a/S3GuardExistsRetryPolicy.java
    • -2
    • +21
    ./s3a/impl/ChangeDetectionPolicy.java
    • -0
    • +44
    ./s3a/impl/StatusProbeEnum.java
  1. … 16 more files in changeset.
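A small sketch of tuning the FileNotFound retry policy through the two options quoted above; the values shown are the defaults given in the commit message:

    import org.apache.hadoop.conf.Configuration;

    public class ConsistencyRetrySketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Gap between attempts: long enough for a cached 404 to expire
        // from the S3 load balancers.
        conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
        // Number of attempts before the FileNotFound is treated as permanent.
        conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
      }
    }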
HADOOP-16554. mvn javadoc:javadoc fails in hadoop-aws.

Contributed by Xieming Li.

Change-Id: I78e88b5b1ae4702446d2bdd3e2faa3e10b45aef0

HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions

Contributed by Steve Loughran.

This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.

It also uses S3Guard to list the files to delete, so it finds newly created files even when S3 listings are not consistent.

For paths which the client considers S3Guard to be authoritative for, we also do a recursive LIST of the store and delete those files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.

Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e

    • -10
    • +22
    ./s3a/InconsistentAmazonS3Client.java
    • -0
    • +577
    ./s3a/impl/DeleteOperation.java
    • -0
    • +69
    ./s3a/impl/ExecutingStoreOperation.java
    • -2
    • +8
    ./s3a/impl/MultiObjectDeleteSupport.java
    • -0
    • +198
    ./s3a/impl/OperationCallbacks.java
    • -139
    • +25
    ./s3a/impl/RenameOperation.java
  1. … 28 more files in changeset.
HADOOP-16416. mark DynamoDBMetadataStore.deleteTrackingValueMap as final. Contributed by kevin su.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

    • -2
    • +2
    ./s3a/s3guard/DynamoDBMetadataStore.java
HADOOP-16470. Make last AWS credential provider in default auth chain EC2ContainerCredentialsProviderWrapper.

Contributed by Steve Loughran.

Contains HADOOP-16471. Restore (documented) fs.s3a.SharedInstanceProfileCredentialsProvider.

Change-Id: I06b99b57459cac80bf743c5c54f04e59bb54c2f8

    • -0
    • +44
    ./s3a/SharedInstanceCredentialProvider.java
    • -14
    • +20
    ./s3a/auth/IAMInstanceCredentialsProvider.java
  1. … 2 more files in changeset.
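A minimal sketch of pinning the credential provider chain so the EC2/container lookup comes last, as the default chain above now does; the explicit ordering shown is an assumption, though the option and provider classes are long-standing S3A names:

    import org.apache.hadoop.conf.Configuration;

    public class CredentialChainSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicit provider list, ending with the instance/container provider
        // added by this change.
        conf.set("fs.s3a.aws.credentials.provider",
            "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,"
            + "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,"
            + "com.amazonaws.auth.EnvironmentVariableCredentialsProvider,"
            + "org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider");
      }
    }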
HADOOP-16500 S3ADelegationTokens to only log at debug on startup (#1269). Contributed by Steve Loughran.

Change-Id: Ifafc15f32791911976d7ebc36fb6e8853f59ed41

HADOOP-16499. S3A retry policy to be exponential (#1246). Contributed by Steve Loughran.

  1. … 10 more files in changeset.
HADOOP-16472. findbugs warning on LocalMetadataStore.ttlTimeProvider sync

Contributed by Steve Loughran.

Moved the setter and addAncestors to be synchronized.

Change-Id: Ib362c66d1b8c9124eca7db9a44274ac08d0b3be6

HADOOP-16433. S3Guard: Filter expired entries and tombstones when listing with MetadataStore.listChildren().

Contributed by Gabor Bota.

This pulls the tracking of the lastUpdated timestamp of metadata entries up from the DDB metastore into all s3guard stores, and then uses this to filter out expired tombstones from listings.

Change-Id: I80f121236b49c75a024116f65a3ef29d3580b462

    • -10
    • +31
    ./s3a/s3guard/DirListingMetadata.java
    • -8
    • +15
    ./s3a/s3guard/DynamoDBMetadataStore.java
    • -21
    • +29
    ./s3a/s3guard/LocalMetadataStore.java
  1. … 6 more files in changeset.
HADOOP-16380. S3Guard to determine empty directory status for all non-root directories.

Contributed by Steve Loughran and Gabor Bota.

This:

* Asks S3Guard to determine the empty directory status.

* Has S3A's root directory rm("/") command always return false (as abfs does).

* Documents that object stores MAY do this.

* Overloads ContractTestUtils.assertDeleted to let assertions declare that the source directory does not need to exist. This stops inconsistencies in directory listings failing a root test.

It avoids a recent regression (HADOOP-16279) where, if there was a tombstone above the first element found in a directory listing, the directory would be considered empty when in fact there were child entries. That could downgrade an rm(path, recursive) to a no-op, while also confusing rename(src, dest), as dest could be mistaken for an empty directory and so permit the copy above it, rather than rejecting it with "destination path exists and is not empty".

Change-Id: I136a3d1a5a48a67e6155d790a40ff558d0d2c108

  1. … 8 more files in changeset.