NC | NSFS | bucket_namespace_cache Is Not Updated Between Forks #8391

Open
shirady opened this issue Sep 22, 2024 · 2 comments · May be fixed by #8527

shirady commented Sep 22, 2024

Environment info

  • NooBaa Version: 5.18
  • Platform: NC

Actual behavior

  1. When using the child-process configuration (ENDPOINT_FORKS in config.json), the bucket_namespace_cache is not shared between the forks, so a bucket configuration change applied through one fork is not seen by the others. Currently, we need to wait about one minute until the item expires from the cache (see the sketch below).
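
For context, here is a minimal sketch of the behavior being described; this is not NooBaa's actual code, and the expiry value, names, and structure are assumptions for illustration only. Each forked endpoint process keeps its own in-memory cache with a valid_until timestamp, so an update made through one fork is only picked up by the others once their entries expire.

// Minimal sketch, not NooBaa's implementation: a per-process TTL cache.
// Every worker created by cluster.fork() gets its own copy of this Map,
// so updating or invalidating an entry in one fork does not affect the others.
'use strict';
const cluster = require('cluster');

const CACHE_EXPIRY_MS = 60 * 1000; // assumed ~1 minute, matching the observed wait
const bucket_namespace_cache = new Map();

function get_bucket(bucket_name, load_bucket_config) {
    const now = Date.now();
    const entry = bucket_namespace_cache.get(bucket_name);
    if (entry && entry.valid_until > now) return entry.data; // possibly stale in this fork
    const data = load_bucket_config(bucket_name); // e.g. read the bucket config from disk
    bucket_namespace_cache.set(bucket_name, { data, valid_until: now + CACHE_EXPIRY_MS });
    return data;
}

if (cluster.isPrimary) {
    cluster.fork();
    cluster.fork(); // ENDPOINT_FORKS = 2 => two workers, two independent caches
}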

Expected behavior

  1. The cache will be updated when the bucket configuration changes, and we will not have to wait for the entry to expire.

Steps to reproduce

This issue was raised as part of issue #8368.

  1. Change the configuration to have more than 1 fork, for example: sudo vi /etc/noobaa.conf.d/config.json and set {"ENDPOINT_FORKS": 2}.
  2. Change the bucket configuration (for example, change the existing bucket policy from allowing all principals to list object versions on a versioned bucket and its objects, to allowing only account2 to get-object on a specific key).
  3. Use the changed configuration (for example, try get-object on that key as account2).

More details about the steps can be found in this comment.
Reproduction is not guaranteed, since all the changes might happen to be handled by the same child process. A scripted sketch of steps 2-3 follows below.
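
Here is a hedged sketch of steps 2-3 using the AWS SDK for JavaScript (v3); the endpoint URL, credentials, bucket name, and key name below are placeholders, not values taken from this issue.

// Hedged sketch only; endpoint, credentials, bucket and key are placeholders.
'use strict';
const { S3Client, PutBucketPolicyCommand, GetObjectCommand } = require('@aws-sdk/client-s3');

const make_client = (access_key, secret_key) => new S3Client({
    endpoint: 'http://localhost:6001', // assumed NooBaa NC S3 endpoint address
    region: 'us-east-1',
    forcePathStyle: true,
    credentials: { accessKeyId: access_key, secretAccessKey: secret_key },
});

async function main() {
    const admin = make_client('ADMIN_ACCESS_KEY', 'ADMIN_SECRET_KEY');
    const account2 = make_client('ACCOUNT2_ACCESS_KEY', 'ACCOUNT2_SECRET_KEY');

    // Step 2: replace the bucket policy (policy2 from the example below)
    const policy2 = {
        Version: '2012-10-17',
        Statement: [{
            Effect: 'Allow',
            Principal: { AWS: ['<account-name-2>'] },
            Action: ['s3:GetObject', 's3:GetObjectVersion'],
            Resource: ['arn:aws:s3:::my-bucket/my-key'],
        }],
    };
    await admin.send(new PutBucketPolicyCommand({ Bucket: 'my-bucket', Policy: JSON.stringify(policy2) }));

    // Step 3: immediately try to use the new policy as account2; with ENDPOINT_FORKS > 1
    // the request may land on a fork whose bucket_namespace_cache still holds the old policy.
    await account2.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: 'my-key' }));
}

main().catch(console.error);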

More information - Screenshots / Logs / Other output

Example of policies:
from policy1:

{
 "Version": "2012-10-17",
 "Statement": [ 
   { 
    "Effect": "Allow", 
    "Principal": { "*" }, 
    "Action": [ "s3:ListBucketVersions" ], 
    "Resource": [ "arn:aws:s3:::<bucket-name>/*", "arn:aws:s3:::<bucket-name>" ] 
   }
 ]
}

to policy2:

{
  "Version": "2012-10-17",
  "Statement": [ 
    { 
     "Effect": "Allow", 
     "Principal": { "AWS": [ "<account name-2" ] }, 
     "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], 
     "Resource": [ "arn:aws:s3:::<bucket-name>/<key-name>" ] 
    }
  ]
}

shirady commented Oct 13, 2024

Hi,
Please note that after a fix for this issue is added, we need to go back to a couple of closed issues and make sure that they are indeed fixed (otherwise reopen them):

  1. NSFS | S3 | Versioning: AccessDenied on GetObject in a version-enabled bucket despite bucket policy allowing #8368 (related to test_s3cever11 on GPFS in the @hseipp repo).
  2. warp list --versions failing with NamespaceFS: Version dir not found #8429 (related to a warp run on GPFS based on @muenchp1's test as described in the comments: /warp list --versions=5 --host=192.168.17.15{0...3}:6443 --access-key=<access-key> --secret-key=<secret-key> --insecure --tls --bucket=warpiobucket --objects=2 --obj.size=4MiB --obj.randsize --disable-multipart --concurrent=1 --duration=1m, run without creating the bucket and setting the versioning configuration ahead of time).
  3. NSFS | S3 | Versioning: VersionId corruption after version-enabled and version-suspended mode operations #8443 (related to the Ceph test test_versioning_obj_suspend_version on a GPFS machine).
  4. NSFS | S3 | Versioning: InternalError on PutObject during concurrent operations #8470 (concurrent change of the versioning mode while running S3 commands, put and delete).
  5. NSFS | S3 | Versioning: Internal Error on DeleteObject in version-enabled mode, test_versioning_obj_suspended_copy #8469 (should run the Ceph test test_versioning_obj_suspended_copy and enable it).

Note: this comment might be edited if we refer to this issue again.

cc: @nadavMiz @romayalon

@guymguym (Member) commented:

@shirady @romayalon @nimrod-becker
I think this can be solved easily in object_sdk _validate_bucket_namespace.

We can add a new (optional?) namespace method that checks the validity of the current cache item in the bucketspace before checking the valid_until time. This will allow bucketspace_fs to quickly stat the bucket.json file and invalidate the cache item as soon as its mtime/ctime changes.

The same approach can be done for account_cache and accountspace.
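
A minimal sketch of this idea follows; the function names, parameters, and file layout here are hypothetical, not the actual NooBaa API. The bucketspace would record the stat of bucket.json when the cache item is created, and the new validity hook would compare the current mtime/ctime against it before trusting valid_until.

// Hedged sketch only: check_same_stat_bucket and validate_cache_item are
// hypothetical names, and the bucket config file path/layout is assumed.
'use strict';
const fs = require('fs/promises');
const path = require('path');

// Hypothetical bucketspace_fs hook: stat the bucket config file and compare it
// to the stat that was recorded when the cache item was created.
async function check_same_stat_bucket(buckets_dir, bucket_name, cached_stat) {
    const stat = await fs.stat(path.join(buckets_dir, `${bucket_name}.json`));
    return Boolean(cached_stat) &&
        stat.mtimeMs === cached_stat.mtimeMs &&
        stat.ctimeMs === cached_stat.ctimeMs;
}

// Hypothetical use inside _validate_bucket_namespace-like logic: drop the cache
// item immediately when the config file changed, otherwise fall back to the TTL.
async function validate_cache_item(item, buckets_dir, bucket_name) {
    const unchanged = await check_same_stat_bucket(buckets_dir, bucket_name, item.stat);
    if (!unchanged) return false;          // config changed (possibly via another fork)
    return item.valid_until > Date.now();  // unchanged => keep honoring valid_until
}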
