hadoop

HDFS-12459. Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

(cherry picked from commit 3ead525c71cba068e7abf1c76ad629bfeec10852)

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java

hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md

Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang."

This reverts commit d4ca1c5226521c4f9c609bb8ec9f64a63bd8bef1.

- JsonUtil not reverted: a later commit in httpfs uses the new methods.

- jackson-databind was introduced in pom.xml. This is not reverted, as later changes use it.

HADOOP-13666. Supporting rack exclusion in countNumOfAvailableNodes in NetworkTopology. Contributed by Inigo Goiri.

HDFS-15173. RBF: Delete repeated configuration 'dfs.federation.router.metrics.enable' (#1849)

(cherry picked from commit 439d935e1df601ed998521443fbe6752040e7a84)


HDFS-15135. EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped. Contributed by Ravuri Sushma sree.

HDFS-15164. Fix TestDelegationTokensWithHA. Contributed by Ayush Saxena.


YARN-10136. [Router] : Application metrics are hardcode as N/A in UI. Contributed by Bilwa S T.

HADOOP-16850. Support getting thread info from thread group for JvmMetrics to improve the performance. Contributed by Tao Yang.

HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.

Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is built up into batches of a thousand and then POSTed in a single large DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes per second *and* each entry in that POST counts as a single write, one of those POSTs alone can trigger throttling on an already loaded S3 directory tree. That can trigger backoff and retry with the same thousand-entry POST, and so recreate the exact same problem.

Fixes:

* The page size for delete object requests is set in fs.s3a.bulk.delete.page.size; the default is 250.

* The property fs.s3a.experimental.aws.s3.throttling (default=true) can be set to false to disable throttle retry logic in the AWS client SDK; it is then all handled in the S3A client. This gives more visibility into when operations are being throttled.

* Bulk delete throttling events are logged to the org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears often, choose a smaller page size.

* The metric "store_io_throttled" adds the entire count of delete requests when a single DeleteObjects request is throttled.

* A new quantile, "store_io_throttle_rate", can track throttling load over time.

* DynamoDB metastore throttle resilience issues have also been identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling flag does not apply to DDB IO, precisely because there may still be lurking issues there and it is safest to rely on the DynamoDB client SDK.

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
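The two tuning knobs named in the commit message above can be set in core-site.xml. A minimal sketch follows; the property names come from the commit, but the values chosen here are illustrative assumptions, not recommendations:

```xml
<!-- core-site.xml fragment: S3A bulk-delete tuning (illustrative values) -->
<configuration>
  <!-- Number of keys per DeleteObjects request. Smaller pages make a
       single POST less likely to exhaust an S3 partition's write
       capacity; the default after HADOOP-16823 is 250. -->
  <property>
    <name>fs.s3a.bulk.delete.page.size</name>
    <value>100</value>
  </property>

  <!-- Set to false to disable throttle retries inside the AWS SDK and
       let the S3A client handle them, making throttling events visible
       in the org.apache.hadoop.fs.s3a.throttled log. -->
  <property>
    <name>fs.s3a.experimental.aws.s3.throttling</name>
    <value>false</value>
  </property>
</configuration>
```

A cluster hitting frequent entries in the throttled log would shrink the page size further; this flag does not affect DynamoDB IO, per the note above.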

YARN-10137. UIv2 build is broken in trunk. Contributed by Adam Antal

YARN-10029. Add option to UIv2 to get container logs from the new JHS API. Contributed by Adam Antal

HDFS-15086. Block scheduled counter never gets decremented if the block got deleted before replication. Contributed by hemanthboyina.


YARN-9521. Handle FileSystem close in ApiServiceClient

Contributed by kyungwan nam. Reviewed by Eric Yang.

HDFS-13989. RBF: Add FSCK to the Router (#1832)

Co-authored-by: Inigo Goiri <inigoiri@apache.org>

HDFS-15161. When evictableMmapped or evictable size is zero, do not throw NoSuchElementException in ShortCircuitCache#close(). Contributed by Lisheng Sun


MAPREDUCE-7263. Remove obsolete validateTargetPath() from FrameworkUploader. Contributed by Marton Hudaky

HDFS-15127. RBF: Do not allow writes when a subcluster is unavailable for HASH_ALL mount points. Contributed by Inigo Goiri

HADOOP-16856. cmake is missing in the CentOS 8 section of BUILDING.txt. (#1841)

HADOOP-16849. start-build-env.sh behaves incorrectly when username is numeric only. Contributed by Jihyun Cho.

(cherry picked from commit 9709afe67d8ed45c3dfb53e45fe1efdc0814ac6c)

Conflicts:

start-build-env.sh