hadoop

HDDS-1353 : Metrics scm_pipeline_metrics_num_pipeline_creation_failed keeps increasing because of BackgroundPipelineCreator. (#681)

HDFS-14389. getAclStatus returns incorrect permissions and owner when an iNodeAttributeProvider is configured. Contributed by Stephen O'Donnell.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

(cherry picked from commit c528e427aa6745434672b1c1850738795ad1d6d2)

(cherry picked from commit 388f445dde577999b2d81f809adcfca8f0958499)

(cherry picked from commit d9899015ebf8a27e9ac339d8a8b3c9d88bcbacb9)

HDDS-1207. Refactor Container Report Processing logic and plug in the new Replication Manager. (#662)

    • … 4 more files in changeset.
HDDS-1349. Remove watchClient from XceiverClientRatis. Contributed by Shashikant Banerjee.

YARN-9394. Use new API of RackResolver to get better performance. Contributed by Lantao Jin.

(cherry picked from commit 945e8c60640ceb938ad8d27767d44eec53a15038)

HDDS-1189. Recon Aggregate DB schema and ORM. Contributed by Siddharth Wagle.

    • /hadoop-ozone/ozone-recon-codegen/pom.xml (-0, +58)
    • /hadoop-ozone/ozone-recon/pom.xml (-30, +117)
    • … 8 more files in changeset.
YARN-9303 Username splits won't help timelineservice.app_flow table. Contributed by Prabhu Joseph.

HDDS-1339. Implement ratis snapshots on OM (#651)

HDFS-14397. Backport HADOOP-15684 to branch-2. Contributed by Chao Sun.

HDFS-14327. Using FQDN instead of IP to access servers with DNS resolving. Contributed by Fengnan Li.

HDDS-1324. TestOzoneManagerHA tests are flaky (#676)

HDDS-1211. Test SCMChillMode failing randomly in Jenkins run (#543)

HDDS-1358 : Recon Server REST API not working as expected. (#668)

HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true (#685)

This is needed to fix up some confusion about how job.addCache() handles caching of S3A paths: because the files and all their parent dirs appear world-readable, the files are downloaded by the NM without using the DTs of the user submitting the job. This means that when you submit jobs to an EC2 cluster with lower IAM permissions than the user, cached resources don't get downloaded and the job doesn't start.

Production code changes:

* S3AFileStatus passes "true" for the superclass's encrypted flag during construction.
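
  For illustration, a minimal, hypothetical sketch of that pattern (this is not the actual S3AFileStatus code; the class name is made up, and the FileStatus constructor overload and parameter values are assumptions based on the Hadoop 3.x API):

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Hypothetical subclass, for illustration only: it passes "true" for the
    // encrypted attribute when calling the FileStatus superclass constructor,
    // so isEncrypted() always reports true for this store's files.
    public class AlwaysEncryptedFileStatus extends FileStatus {
      public AlwaysEncryptedFileStatus(long length, boolean isDir, long blockSize,
          long modTime, Path path, String owner, String group) {
        super(length, isDir, 1 /* replication */, blockSize, modTime,
            0L /* access time */, FsPermission.getFileDefault(), owner, group,
            null /* symlink */, path,
            false /* hasAcl */, true /* isEncrypted */, false /* isErasureCoded */);
      }
    }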

Tests

* Base AbstractContractOpenTest can control whether zero byte files created in tests are encrypted. Not done via an XML attribute, just a subclass point. Thoughts?

* Verify that the filecache considers paths to not have the permissions which trigger reduce-privilege downloads

* And extend ITestDelegatedMRJob to test a completely different bucket (open street map), to verify that cached resources do get their tokens picked up

Docs:

* Advise FS developers to say all files are encrypted. It's otherwise harmless and it'll stop other people from seeing impossible-to-debug error messages on app launch.

Contributed by Steve Loughran.

Change-Id: Ifaae4c9d735ccc5eafeebd2584b65daf2d4e5da3

HADOOP-16011. OsSecureRandom very slow compared to other SecureRandom implementations. Contributed by Siyao Meng.

Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>

HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true (#685)

(cherry picked from commit 366186d9990ef9059b6ac9a19ad24310d6f36d04)

HDDS-1330 : Add a docker compose for Ozone deployment with Recon. (#669)

    • /hadoop-ozone/dist/src/main/compose/ozone-recon/.env (-0, +17)
    • /hadoop-ozone/dist/src/main/compose/ozone-recon/docker-config (-0, +80)
HDDS-1377. OM failed to start with incorrect hostname set as ip address in CSR. (#683)

(cherry picked from commit d6c233fce67104c1c4da802eb695526e60058536)

HDFS-10477. Stopping decommission of a rack of DataNodes caused the NameNode to fail over to standby. Contributed by yunjiong zhao and Wei-Chiu Chuang.

(cherry picked from commit be488b6070a124234c77f16193ee925d32ca9a20)

(cherry picked from commit c8703dda0727e17d759d7ad27f0caee88103a530)

(cherry picked from commit 2a94603ae66d9000c0bb07df0d592279339af103)
