Cache byte range requests #215
Conversation
Branch force-pushed from 95da3a5 to 4f8836f.
Use `/dev_build` or `/build_dev` to run the "Build and Push Dev Preview Image" workflow. You can find the workflow here:
The general gist of the PR looks good to me! Some thoughts re your open-ended questions --
I see the sliced cache as an additional feature of the main cache, so as a user I don't think you'd need to control the sliced cache separately. Insofar as NGINX implementation details go, a separate cache might help fine-tune the necessary slice cache settings (but I would still consider using the same cache until such a point comes).
My go-to would be to use a default value at which …
I think this should be the default. I can see stitching the sliced data back together on disk being useful for some use cases, but doing so should be an optional toggle.
Sorry @alessfg for the thumbs down. I have a rampant bot that I had to fix. Thank you for your comments, I'll take them into account as we work on this feature more.
Branch force-pushed from 4f8836f to f091808.
What
A potential fix for #188
When the `Range` header is supplied:
- The file is requested from S3 in slices of `PROXY_CACHE_SLICE_SIZE` until the requested range is satisfied
- Each slice is stored as its own cache file of at most `PROXY_CACHE_SLICE_SIZE`.

When the `Range` header is not supplied:
- `proxy_cache_lock` ensures that multiple requests for the same file are not cached multiple times. Requests received after the initial `MISS` will queue until they can be served from the cache (i.e. once the initial request's cache write is complete). A config sketch of this locking behavior follows below.

I think it's good to have the ability to serve and cache byte range requests efficiently by default. Although we could turn this on and off with a config option, the overhead is low and it makes the gateway more flexible.
Implementation Details
- This solution takes advantage of the existing `redirectToS3` function to change the target NGINX conf location based on the presence of the `Range` header
- The main configuration for the S3 proxy action has been broken out into `common/etc/nginx/templates/gateway/s3_location_common.conf.template`
- A separate cache is defined for the slice-based caching
- In the slice caching location, the `http_slice_module` is configured and other caching options are overridden as necessary (see the sketch below).
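As a rough illustration of the last two points, a slice-caching location built on the stock `http_slice_module` could look something like this. The cache path, zone name, upstream, and values are assumptions for illustration only; the PR's actual template may differ:

```nginx
# Separate cache zone for sliced responses (declared in the http context;
# the path and zone name here are hypothetical).
proxy_cache_path /var/cache/nginx/s3_slices levels=1:2
                 keys_zone=s3_slice_cache:10m max_size=10g inactive=60m;

location @s3_sliced {
    # Fetch the object in fixed-size subrange requests.
    slice              1m;                         # e.g. ${PROXY_CACHE_SLICE_SIZE}
    proxy_set_header   Range $slice_range;         # forward the per-slice range upstream
    proxy_cache        s3_slice_cache;
    proxy_cache_key    "$request_uri$slice_range"; # one cache entry per slice
    proxy_cache_valid  200 206 1h;                 # 206 Partial Content must be cacheable
    proxy_cache_lock   on;
    proxy_http_version 1.1;
    proxy_pass         https://storage_backend;    # illustrative upstream name
}
```

The key point is that `$slice_range` appears in both the upstream `Range` header and the cache key, which is what produces one cache file per slice.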
Open questions:
- Should byte range requests always be routed to the `@s3_sliced` location?

Examples
Normal Request
- A single cache file is created
- The size of the cache file is equal to the size of the requested file:
Byte Range Request
In this example, I'm requesting a 5 MB file, and the `PROXY_CACHE_SLICE_SIZE` option has been set to `1000k` (1000 kilobytes).
- Cache files are created in chunks
- The size of each cache file is roughly equal to the chunk size:
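As a rough sanity check of those sizes (my own arithmetic, assuming the file is 5 × 1024² bytes and that NGINX interprets `1000k` as 1000 × 1024 bytes):

$$\left\lceil \frac{5 \times 1024^2}{1000 \times 1024} \right\rceil = \lceil 5.12 \rceil = 6 \ \text{slices}$$

So there would be five cache files of about 1000k each plus a smaller sixth file holding the remaining ~120 KiB, which lines up with each cache file being roughly the chunk size.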