HDFS-15394. Add all available fs.viewfs.overload.scheme.target.<scheme>.impl classes in core-default.xml by default. Contributed by Uma Maheswara Rao G.

  1. … 2 more files in changeset.
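For illustration, an entry of the kind this change pre-populates maps a concrete scheme to its backing file system class; the hdfs mapping below is an assumed example, with the target class taken from the standard HDFS client rather than from the changeset itself.

  <property>
    <name>fs.viewfs.overload.scheme.target.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    <description>Target FileSystem implementation the viewfs overload scheme
      delegates to for hdfs:// paths (illustrative mapping).</description>
  </property>
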
HADOOP-16886. Add hadoop.http.idle_timeout.ms to core-default.xml. Contributed by Lisheng Sun.

  1. … 1 more file in changeset.
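A sketch of the sort of core-default.xml entry this adds; the 60-second value is an assumed default for illustration, not read from the patch.

  <property>
    <name>hadoop.http.idle_timeout.ms</name>
    <value>60000</value>
    <description>Idle timeout, in milliseconds, for connections to the Hadoop
      HTTP server (assumed default shown).</description>
  </property>
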
HADOOP-16988. Remove source code from branch-2. (aajisaka via jhung)

This closes #1959

  1. … 10846 more files in changeset.
HADOOP-16661. Support TLS 1.3 (#1880)

  1. … 1 more file in changeset.
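If a deployment pins the enabled TLS protocols explicitly, TLS 1.3 would typically be switched on through the protocol list; the property name hadoop.ssl.enabled.protocols and the value below are an assumed illustration, not taken from the patch.

  <property>
    <name>hadoop.ssl.enabled.protocols</name>
    <value>TLSv1.2,TLSv1.3</value>
    <description>TLS protocol versions Hadoop's SSL endpoints may negotiate
      (illustrative value adding TLS 1.3).</description>
  </property>
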
HADOOP-16841. The description of hadoop.http.authentication.signature.secret.file contains outdated information. Contributed by Xieming Li.

(cherry picked from commit 5cbc4c54611f062690c3bfcf78d09d37c78ffd12)

HADOOP-16850. Support getting thread info from thread group for JvmMetrics to improve the performance. Contributed by Tao Yang.

  1. … 3 more files in changeset.
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.

Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is built up into batches of a thousand and then POSTed in a single large DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes per second, *and* each entry in that POST counts as a single write, one of those posts alone can trigger throttling on an already loaded S3 directory tree. That can trigger backoff and retry with the same thousand-entry post, and so recreate the exact same problem.

Fixes:

* The page size for delete object requests is set in fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true) can be set to false to disable throttle retry logic in the AWS client SDK; it is all handled in the S3A client instead. This gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling load over time.
* DynamoDB metastore throttle resilience issues have also been identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling flag does not apply to DDB IO, precisely because there may still be lurking issues there and it is safest to rely on the DynamoDB client SDK.

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84

  1. … 26 more files in changeset.
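Taken together, the two options described above would be tuned in core-site.xml along these lines; the page size of 100 is only an example of picking something below the 250 default.

  <property>
    <name>fs.s3a.bulk.delete.page.size</name>
    <value>100</value>
    <description>Maximum number of keys in a single DeleteObjects request;
      smaller pages reduce the write load each POST places on an S3 partition.</description>
  </property>

  <property>
    <name>fs.s3a.experimental.aws.s3.throttling</name>
    <value>false</value>
    <description>Disable throttle retries inside the AWS SDK so that retry
      handling, and its metrics and logging, stays in the S3A client.</description>
  </property>
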
HADOOP-16792: Make S3 client request timeout configurable.

Contributed by Mustafa Iman.

This adds a new configuration option, fs.s3a.connection.request.timeout, to declare the timeout on HTTP requests to the AWS service; 0 means no timeout. Measured in seconds; the usual time suffixes are all supported.

Important: this is the maximum duration of any AWS service call, including upload and copy operations. If non-zero, it must be larger than the time to upload multi-megabyte blocks to S3 from the client, and to rename many-GB files. Use with care.

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c

  1. … 5 more files in changeset.
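As an illustration of the new option, the value below caps every AWS service call at ten minutes; the figure is an arbitrary example, and the shipped default stays at 0 (no timeout).

  <property>
    <name>fs.s3a.connection.request.timeout</name>
    <value>10m</value>
    <description>Maximum duration of any single request to the AWS service,
      including uploads and copies; 0 disables the timeout, and the usual
      time suffixes are accepted (example value only).</description>
  </property>
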
HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu.

  1. … 8 more files in changeset.
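A sketch of how server-side encryption for the S3Guard table might be enabled; the property names fs.s3a.s3guard.ddb.table.sse.enabled and fs.s3a.s3guard.ddb.table.sse.cmk and the key ARN are assumptions for illustration rather than details quoted from the changeset.

  <property>
    <name>fs.s3a.s3guard.ddb.table.sse.enabled</name>
    <value>true</value>
    <description>Create the S3Guard DynamoDB table with server-side encryption
      (assumed property name).</description>
  </property>

  <property>
    <name>fs.s3a.s3guard.ddb.table.sse.cmk</name>
    <value>arn:aws:kms:us-west-2:123456789012:key/EXAMPLE-KEY-ID</value>
    <description>Optional customer-managed KMS key to use instead of the
      AWS-managed default (placeholder ARN).</description>
  </property>
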
HADOOP-16346. Stabilize S3A OpenSSL support.

Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`. The new option is documented and marked as experimental. For details on how to use this, consult the performance document in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections should use OpenSSL", which was reverted because of incompatibilities between the wildfly OpenSSL client and the AWS HTTPS servers (HADOOP-16347). With the Wildfly release moved up to 1.0.7.Final (HADOOP-16405), everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e

  1. … 8 more files in changeset.
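Selecting the new channel mode is a one-line change; `openssl` is the value this patch introduces for fs.s3a.ssl.channel.mode.

  <property>
    <name>fs.s3a.ssl.channel.mode</name>
    <value>openssl</value>
    <description>Use OpenSSL (via wildfly-openssl) for S3A TLS connections;
      experimental, see the s3a performance documentation.</description>
  </property>
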
HADOOP-16735. Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN. Contributed by Mingliang Liu

This closes #1733

  1. … 1 more file in changeset.
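For context, the provider in question is the stock AWS SDK class that reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and, as this change documents, AWS_SESSION_TOKEN; wiring it in explicitly looks roughly like this (the explicit listing is illustrative, since the provider is normally part of the default chain).

  <property>
    <name>fs.s3a.aws.credentials.provider</name>
    <value>com.amazonaws.auth.EnvironmentVariableCredentialsProvider</value>
    <description>Pick up credentials, including an optional session token,
      from the AWS_* environment variables.</description>
  </property>
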
HADOOP-16718. Allow disabling Server Name Indication (SNI) for Jetty. Contributed by Aravindan Vijayan.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

Reviewed-by: Siyao Meng <smeng@cloudera.com>

(cherry picked from commit f1ab7f18c423a9cfc59292d25fa178e73715b85b)

(cherry picked from commit f0c1403ec382a7a8c25b0311db0c88749576c308)

  1. … 2 more files in changeset.
HADOOP-16712. Config ha.failover-controller.active-standby-elector.zk.op.retries is not in core-default.xml. Contributed by Xieming Li.
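A sketch of the entry being added to core-default.xml; the value of 3 is an assumed default for illustration, not quoted from the patch.

  <property>
    <name>ha.failover-controller.active-standby-elector.zk.op.retries</name>
    <value>3</value>
    <description>Number of times the active-standby elector retries a failed
      ZooKeeper operation (assumed default shown).</description>
  </property>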

HDFS-14802. The feature of protect directories should be used in RenameOp (#1669)

  1. … 6 more files in changeset.
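The protected-directories feature referenced here is driven by fs.protected.directories; with this change the same list guards renames as well as deletes. A minimal example with placeholder paths:

  <property>
    <name>fs.protected.directories</name>
    <value>/warehouse,/user/history</value>
    <description>Comma-separated list of directories that may not be deleted
      (and, after this change, not renamed away) while non-empty; the paths
      shown are placeholders.</description>
  </property>
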
HADOOP-16656. Document FairCallQueue configs in core-default.xml. Contributed by Siyao Meng.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

  1. … 1 more file in changeset.
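The configs being documented follow the usual per-port pattern; the sketch below assumes a NameNode RPC port of 8020, and both the port and the scheduler choice are illustrative rather than taken from the patch.

  <property>
    <name>ipc.8020.callqueue.impl</name>
    <value>org.apache.hadoop.ipc.FairCallQueue</value>
    <description>Use FairCallQueue for the RPC server listening on port 8020
      (port number is an example).</description>
  </property>

  <property>
    <name>ipc.8020.scheduler.impl</name>
    <value>org.apache.hadoop.ipc.DecayRpcScheduler</value>
    <description>Scheduler that deprioritises heavy callers based on recent
      usage (illustrative pairing with FairCallQueue).</description>
  </property>
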
Revert "HADOOP-16656. Document FairCallQueue configs in core-default.xml. Contributed by Siyao Meng."

This reverts commit f9b99d2f24db5faae3ded11c3b74240943b1e49e.

HADOOP-16484. S3A to warn or fail if S3Guard is disabled (#1661). Contributed by Gabor Bota.

  1. … 5 more files in changeset.
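The warn-or-fail behaviour is governed by a severity-style switch; the property name and value below are assumptions about the shape of that knob rather than details quoted from the changeset.

  <property>
    <name>fs.s3a.s3guard.disabled.warn.level</name>
    <value>WARN</value>
    <description>How S3A reacts when a bucket has no S3Guard metadata store,
      for example staying silent, informing, warning, or failing (assumed
      property name and values).</description>
  </property>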