[Bug]: Big file uploads chug every few hundred MB, Nextcloud client not respecting max chunk size #4288
Comments
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
@github-actions Stop being silly. This issue should not be automatically closed just because no one has triaged it yet. It's a legitimate issue, probably a bug, and needs to be investigated :)
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
@plantroon @szaimen I and many others appreciate the work and time you put into Nextcloud. But seriously, what is the point of closing 100% legitimate issues, which also contain very good information about the symptoms of the problem, without even having looked into them? If you (or the bot, rather) keep doing that, then what is the point of opening an issue in the first place when there's a bug or similar problem with NC? Seriously, please configure your bot so that it doesn't automatically close issues like this. The way it works now is not only utterly stupid but also highly counter-productive to improving the quality of the software.
In the meantime I upgraded my server to one with AES acceleration, which helped a little, but the bug report still stands: the Nextcloud client is not forced by the server to respect the max chunk size (I think the server should enforce this, so that it can't be bypassed by malicious clients). If this worked correctly, the resulting high server load wouldn't be as bad. A note on that high-load issue: when uploading a 4 GB file, over 20 GB of data is written to disk on the server side. That's why it completely kills performance; the file operations are just too heavy: client -> PHP upload tmp dir -> Nextcloud data/user/upload dir -> assembling the chunks, done. Both the apache and fpm variants of the Docker image are affected. Btw, uploading a 4 GB file to the same server via SFTP or even an HTTP POST upload will only write 4 GB of data (obviously) and won't send the server's load average through the roof xD
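To make the amplification concrete, here is a rough sketch of how to watch it happen during a large upload (the container name `nextcloud` and the stock image data path are assumptions; adjust to your deployment):

```sh
# While a large upload runs, watch the per-user chunk directory grow.
# Counting writes across the PHP tmp dir, the uploads dir, and the final
# assembly, total bytes written end up several times the file's size.
docker exec nextcloud sh -c \
  'while sleep 5; do du -sh /var/www/html/data/*/uploads 2>/dev/null; done'
```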
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
@github-actions Well, what other details does your braindead software wish to receive? @plantroon Not sure what "update the issue with new details" actually entails, but to be on the safe side, perhaps you should edit the initial post in this issue thread, adding some text just for kicks.
Checked with the latest updates (Nextcloud 24) and updated the initial post. It is still happening. I did not test with a different client version because this simply should not be allowed from the server side: if the server is set up with a certain max chunk size, nothing bigger should be accepted.
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
@github-actions still not resolved
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
update: still not fixed ;( |
This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!
Here we go again. When will someone from Nextcloud even look at this bug report? |
there is this: #4826 |
Hey 👋, thanks for your bug report 👍
You are referring to this part of the documentation, right? I agree this gives the impression it would change the chunk size for the clients; unfortunately it's only implemented in our web UI: https://github.com/nextcloud/server/search?q=max_chunk_size

As a workaround for now, I expect that the client configuration options targetChunkUploadDuration = 0 and chunkSize = 10000000 (https://github.com/nextcloud/desktop/blob/master/doc/conffile.rst) should make it work for you.

When #4826 is merged, it should be possible to force the client to a given size via the web server (e.g. client_max_body_size for nginx).
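For reference, a minimal sketch of that workaround in the desktop client's nextcloud.cfg, using the options documented in the conffile.rst linked above with the values suggested here:

```ini
[General]
; 0 disables dynamic chunk sizing, so chunkSize is used as a fixed size
targetChunkUploadDuration=0
; fixed chunk size in bytes (10 MB)
chunkSize=10000000
```

With dynamic sizing off, the client should stop growing its chunks toward the very large sizes that trigger the pauses described above.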
Due to many changes on my server this is not a problem anymore, but thanks for letting me know about the possible fix in #4826. It's not exactly what I had in mind, but it seems good enough. I also realized that this would perhaps have been better reported against the server, not the desktop client. I am fine with closing this issue at this point.
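For completeness, a sketch of the web-server-side cap that #4826 is meant to pair with, assuming nginx fronts Nextcloud (client_max_body_size limits any single request body, upload chunks included):

```nginx
# Reject request bodies (and therefore upload chunks) larger than 10 MB;
# nginx answers oversized requests with 413 Request Entity Too Large.
# Until the client handles that response gracefully (#4826), uploads
# from clients using bigger chunks will simply fail.
client_max_body_size 10m;
```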
Bug description
When uploading a large file (say, 10 GB) to Nextcloud, the desktop client pauses every few hundred MB (not at a regular interval). Server load is high throughout the process. This has been occurring since I started using Nextcloud in August 2021, and no explanation was ever given for this problem until I investigated it myself today.
The corresponding setting on the client side is targetChunkUploadDuration (the default value of 6000 combined with dynamic chunk sizing is problematic). It enforces chunking no matter what I set on the server side: even if I set the max chunk size to 10 MB, the chunks created on the server side still exceed that limit. This brings a low-end server into a high-load situation, and the client pauses every few hundred MB.
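For reference, a sketch of how that 10 MB server-side limit is set, via the files app's max_chunk_size option in occ (value in bytes; as noted later in the thread, only the web UI currently honors it):

```sh
# Set the server's max chunk size to 10 MB (value in bytes).
# Run as the web server user / inside the container.
php occ config:app:set files max_chunk_size --value 10485760
```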
Some background info: the disks in use are SSDs, and the server is a ThinkPad X200 laptop.
I am reporting this as a server issue, because it seems the server's own limits are not enforced against the desktop client.
Steps to reproduce
Expected behavior
The client respects the server's max chunk size, or the upload is otherwise made smoother, without extreme load on the server side.
Installation method
Official Docker image
Operating system
Debian/Ubuntu
PHP engine version
PHP 8.0
Web server
Apache (supported)
Database engine version
MySQL
Is this bug present after an update or on a fresh install?
No response
Are you using the Nextcloud Server Encryption module?
No response
What user-backends are you using?
Configuration report
List of activated Apps
Nextcloud Signing status
Nextcloud Logs
No response
Additional info
No response