hadoop

YARN-9825. Changes for initializing placement rules with ResourceScheduler in branch-2. Contributed by Jonathan Hung.

HDDS-2122. Broken logo image on category sub-pages (#1437)

Signed-off-by: Nanda kumar <nanda@apache.org>

(cherry picked from commit 4a9a6a21b8ebe6c762b1050a802cb7dd80f004da)

HDDS-2122. Broken logo image on category sub-pages (#1437)

Signed-off-by: Nanda kumar <nanda@apache.org>

HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable (#1429).

Addendum patch. Moved protobuf-3.7.1 installation within YETUS marker.

HDDS-2089: Add createPipeline CLI. (#1418)

(cherry picked from commit 326b5acd4a63fe46821919322867f5daff30750c)

HDDS-2089: Add createPipeline CLI. (#1418)

HDFS-14846: libhdfs tests are failing on trunk due to JNI usage bugs

Signed-off-by: Anu Engineer <aengineer@apache.org>

HDFS-14754. Erasure Coding: The number of Under-Replicated Blocks never reduced. Contributed by hemanthboyina.

(cherry picked from commit 4852a90e4b077ece2d68595210e62959a9923683)

HDFS-14754. Erasure Coding: The number of Under-Replicated Blocks never reduced. Contributed by hemanthboyina.

HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch (#1433). Contributed by Gabor Bota.

Change-Id: Ied43ef1522dfc6a1210d6fc58c38d8208824931b
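
As context for this change, a minimal sketch (not taken from the fsck patch itself) of timing a block of work with org.apache.hadoop.util.StopWatch in place of com.google.common.base.Stopwatch; only the Hadoop StopWatch start/stop/now(TimeUnit) calls are assumed:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.util.StopWatch;

    public class ScanTimer {
      public static void main(String[] args) throws InterruptedException {
        // Hadoop's own StopWatch stands in for Guava's Stopwatch.createStarted()
        StopWatch watch = new StopWatch().start();

        Thread.sleep(50); // stand-in for the scan being timed

        watch.stop();
        System.out.println("scan took " + watch.now(TimeUnit.MILLISECONDS) + " ms");
      }
    }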

HDDS-2076. Read fails because the block cannot be located in the container (#1410)

Signed-off-by: Nanda kumar <nanda@apache.org>

HDDS-2076. Read fails because the block cannot be located in the container (#1410)

Signed-off-by: Nanda kumar <nanda@apache.org>

(cherry picked from commit fe8cdf0ab846df9c2f3f59d1d4875185633a27ea)

HDFS-14798. Synchronize invalidateBlocks in DatanodeDescriptor. Contributed by hemanthboyina.

HDFS-14699. Erasure Coding: Storage not considered in live replica when replication streams hard limit reached to threshold. Contributed by Zhao Yi Ming.

(cherry picked from commit d1c303a49763029fffa5164295034af8e81e74a0)

(cherry picked from commit eb1ddcd04c9b0457e19fcc3b320d5b86cc1fda64)

HDFS-14699. Erasure Coding: Storage not considered in live replica when replication streams hard limit reached to threshold. Contributed by Zhao Yi Ming.

(cherry picked from commit d1c303a49763029fffa5164295034af8e81e74a0)

HDFS-14699. Erasure Coding: Storage not considered in live replica when replication streams hard limit reached to threshold. Contributed by Zhao Yi Ming.

HADOOP-16562. [pb-upgrade] Update docker image to have 3.7.1 protoc executable (#1429). Contributed by Vinayakumar B.

HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) (#1208). Contributed by Gabor Bota.

Change-Id: I6bbb331b6c0a41c61043e482b95504fda8a50596

YARN-9816. EntityGroupFSTimelineStore#scanActiveLogs fails when undesired files are present under /ats/active. Contributed by Prabhu Joseph.

HDFS-14840. Use Java Concurrent Instead of Synchronization in BlockPoolTokenSecretManager. Contributed by David Mollitor.
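
A hypothetical illustration of the pattern named in this title, not the actual BlockPoolTokenSecretManager code: a ConcurrentHashMap from java.util.concurrent replaces a plain map whose accessors were all declared synchronized. Class and field names below are invented for the example:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative registry keyed by block pool id; names are hypothetical.
    class PoolSecretRegistry {
      // before (sketch): new HashMap<>() with every accessor marked synchronized
      private final Map<String, String> secretsByPool = new ConcurrentHashMap<>();

      void put(String blockPoolId, String secret) {
        secretsByPool.put(blockPoolId, secret); // thread-safe without external locking
      }

      String get(String blockPoolId) {
        return secretsByPool.get(blockPoolId);
      }
    }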

YARN-9819. Make TestOpportunisticContainerAllocatorAMService more resilient. Contributed by Abhishek Modi.

HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone

Closes #1423

YARN-9815. ReservationACLsTestBase fails with NPE. Contributed by Ahmed Hussein.

HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent (#1415)

(cherry picked from commit 64ed6b177d6b00b22d45576a8517432dc6c03348)

HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent (#1415)

HADOOP-16490. Avoid/handle cached 404s during S3A file creation.

Contributed by Steve Loughran.

This patch avoids issuing any HEAD path request when creating a file with overwrite=true, so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile in their own code.

The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even after S3A is patched to not do it itself.

Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy, independently configurable from other retry policies; it is also exponential, but with different parameters. This is because every HEAD request will refresh any 404 cached in the S3 load balancers. It's not enough to retry: there has to be a suitable gap between attempts to (hopefully) ensure any cached entry will be gone.

The options and values are:

fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7

The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught, so it is not downgraded to false. Thus, when a rename is unrecoverable, this fact is propagated.

Copy operations without S3Guard lack the confidence that the file exists, so they don't retry the same way: they fail fast with a different error message. However, because create(path, overwrite=false) no longer does HEAD path, we can at least be confident that S3A itself is not creating those cached 404 markers.

Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
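
The two options above can also be set through the standard Hadoop Configuration API; a minimal sketch using the property names and values quoted in this commit message:

    import org.apache.hadoop.conf.Configuration;

    public class S3GuardRetrySettings {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // values as listed in the commit message above
        conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
        conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
        System.out.println(conf.get("fs.s3a.s3guard.consistency.retry.limit"));
      }
    }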

HDDS-2103. TestContainerReplication fails due to unhealthy container (#1421)

HDFS-14838. RBF: Display RPC (instead of HTTP) Port Number in RBF web UI. Contributed by Xieming Li

HDFS-14838. RBF: Display RPC (instead of HTTP) Port Number in RBF web UI. Contributed by Xieming Li

(cherry picked from commit c255333e20c9af6166db5931d70151011d540359)