Research "Good" default timeouts #126
Comments
While testing a device update using rf_update.py on a small IoT device, I noticed that for large files (over 50 MB), the multipart request times out after only 70% of the file is uploaded. To resolve this, I had to manually adjust the timeout in the script.
Sure, that's certainly something we can add as an option. Out of curiosity, so I can better understand the scale, about how fast is the transfer to this device? We do try to make a "best guess" calculation specifically for the update script based on the file size (I think a 50 MB file should have a timeout of 100 seconds).
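A size-based estimate along those lines could look something like the sketch below. The 2 seconds per MB ratio matches the 50 MB → 100 second figure mentioned above, but the function name and the 30-second floor are assumptions for illustration, not the script's actual code.

```python
import os

def estimate_update_timeout(image_path, seconds_per_mb=2, floor=30):
    """Hypothetical size-scaled timeout: roughly 2 s per MB of image,
    never less than 30 s (both numbers are assumptions)."""
    size_mb = os.path.getsize(image_path) / (1024 * 1024)
    return max(floor, int(size_mb * seconds_per_mb))
```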
I ran some tests with dummy files, and here are the results:
Recent additions have started specifying timeouts, applied either to a specific request or globally at the script entry point.
The original timeouts were a bit stricter (5 seconds for all scripts, and approximately 1 second for every 3 MB for a push update).
The 15-second default is likely more than most usage needs, and we could probably bring that back down to 5 seconds. However, reading log entries could easily take longer than that in some cases, so maybe we could push the 30-second timeout down to the log entry retrieval itself.
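As a rough sketch of scoping the longer timeout to just the log retrieval call (assuming the scripts use the requests library; the URLs below are placeholders, not real endpoints from the scripts):

```python
import requests

GENERAL_TIMEOUT = 5    # short default for most requests
LOG_TIMEOUT = 30       # longer allowance only for log entry retrieval

session = requests.Session()

# Typical requests keep the short default...
systems = session.get("https://device.example.com/redfish/v1/Systems",
                      timeout=GENERAL_TIMEOUT)

# ...while the log entry collection, which can be slow to assemble,
# gets its own longer timeout on that one call.
entries = session.get(
    "https://device.example.com/redfish/v1/Systems/1/LogServices/Log/Entries",
    timeout=LOG_TIMEOUT,
)
```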
I don't have a good sense of a "right" answer for the multipart update timeout; file sizes can be large, and should we penalize fast networks just to accommodate slower ones? Is there a better solution? In Ansible, the user has to specify the timeout themselves, but I would prefer to avoid adding more options.
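If an option did get added, one middle ground would be to treat it as an override rather than a requirement, falling back to the size-scaled estimate when it is not given. A minimal sketch, with an assumed flag name and scale factor, not rf_update.py's real interface:

```python
import argparse
import os

parser = argparse.ArgumentParser(
    description="Hypothetical option handling sketch")
parser.add_argument("image", help="firmware image to push")
parser.add_argument("--timeout", type=int, default=None,
                    help="override the calculated request timeout in seconds")
args = parser.parse_args()

if args.timeout is not None:
    timeout = args.timeout                       # explicit, Ansible-style override
else:
    size_mb = os.path.getsize(args.image) / (1024 * 1024)
    timeout = max(30, int(size_mb * 2))          # assumed 2 s/MB with a 30 s floor
```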