IOError: [Errno 24] Too many open files #29

Open
3gr4s opened this issue Mar 30, 2023 · 0 comments

3gr4s commented Mar 30, 2023

Describe the bug
We're getting fatal errors that end in timeout problems, with too many processes on our Nagios server, which eventually freezes.
Before the issue, the log says:
IOError: [Errno 24] Too many open files
Then a gunicorn thread times out, gets killed, and is recreated.

Additional context
We use the gunicorn variant of the systemd service. At the moment gunicorn is configured with 8 workers and 8 threads. We tried fewer (5) and more (10), but after some time the service still showed the same error.
Most recently, I created a new local user, raised its open file limit to 16384 (by adding a line to /etc/security/limits.conf), and edited the systemd unit file to run the service as that user. I also logged in as the new user and confirmed that the new limit is shown.
However, after some time the service still hits the limit and the Nagios WMI checks start freezing.
Do you think the limit is still too low and should be raised further?
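
One thing I'm not sure about (please correct me if this is wrong): as far as I understand, /etc/security/limits.conf is applied by pam_limits to login sessions, so a service started directly by systemd may not inherit that value at all; the unit itself would need its own `LimitNOFILE=` setting. A minimal sketch of the drop-in I could try, where the unit name is a placeholder for our gunicorn service:

```ini
# /etc/systemd/system/wmi-gunicorn.service.d/override.conf  (hypothetical unit name)
[Service]
LimitNOFILE=16384
```

After `systemctl daemon-reload` and a restart of the unit, the effective limit of a running worker can be checked in `/proc/<worker-pid>/limits` under "Max open files".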
On Nagios there are around 4,000-5,000 WMI checks, which are often requested in parallel.
I don't have a more detailed log at the moment, but I can capture one the next time it happens.
Do you have any advice?
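
In the meantime, this is the kind of quick check I plan to run the next time it happens, to see how many descriptors a worker is actually holding versus the limit it is running with (a rough sketch, standard library only; it has to run as the same user as the worker or as root):

```python
#!/usr/bin/env python3
"""Report open file descriptors vs. the "Max open files" limit of a process.

Usage: python3 fd_check.py <pid>   # e.g. the PID of a gunicorn worker
"""
import os
import sys


def fd_report(pid: int) -> None:
    # Count the descriptors the process currently holds open.
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))
    print(f"PID {pid}: {open_fds} open file descriptors")

    # /proc/<pid>/limits shows the limit the target process itself runs with,
    # which is what matters here (not the limit of the shell running this script).
    with open(f"/proc/{pid}/limits") as limits:
        for line in limits:
            if line.startswith("Max open files"):
                print(line.rstrip())


if __name__ == "__main__":
    fd_report(int(sys.argv[1]))
```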
