
Support max-requests and max-requests-jitter config parameters #34

Open · sidmitra opened this issue Jan 16, 2023 · 8 comments
Labels: enhancement (New feature or request), polar

Comments

sidmitra commented Jan 16, 2023

One of the common problems with Django apps is memory leaks. The most common suggestion is to let the web server recycle worker processes after serving a fixed number of requests, which sidesteps the problem by releasing the leaked memory and starting fresh.

Perhaps you can consider supporting something similar to gunicorn's flags early in the design:

gunicorn --max-requests 1000 --max-requests-jitter 50 app.wsgi

This is also supported in uwsgi, one of the popular alternatives to gunicorn.
Celery solves a similar issue with worker_max_tasks_per_child; I think it's fairly hard to track down and fix memory issues in larger Python projects with hundreds of dependencies.
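For comparison, a minimal sketch of that Celery knob in a Python config module (worker_max_tasks_per_child is Celery's documented setting name; the value of 100 is only illustrative):

# celeryconfig.py
# Recycle each worker child process after it has executed 100 tasks,
# so any memory leaked by task code is reclaimed when the child restarts.
worker_max_tasks_per_child = 100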

gi0baro (Member) commented Jan 16, 2023

This will require quite an effort, as the main process – which right now is quite simple – would have to behave like a real arbiter and actively manage its child processes on an ongoing basis.

I can probably take a look at what gunicorn does, but this will certainly take some time.

gi0baro added the enhancement label on Jan 16, 2023

snuderl commented Jan 30, 2023

A similar/related thing is having a request timeout, after which a worker process is killed if it has not responded:

https://docs.gunicorn.org/en/stable/settings.html#timeout
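For illustration, the same knob in a Gunicorn config file would look roughly like this (timeout is Gunicorn's documented setting; the value is just an example):

# gunicorn.conf.py
# Workers that stay silent for more than this many seconds are killed and restarted.
timeout = 30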

gi0baro (Member) commented Jan 30, 2023

I'm not sure Granian should handle all of these complexities.
If the aim is 1:1 feature parity with Gunicorn, it would probably make more sense to implement Granian workers for Gunicorn.

sidmitra (Author) commented Feb 2, 2023

it would probably make more sense to implement Granian workers for Gunicorn.

This also makes sense, but I'm not aware of the tradeoffs.

I created the ticket with the feature in mind that made me choose gunicorn over others, even Daphne. In any reasonably complex web app (enterprise!), memory leaks seem to be a big issue, so this seemed like a must-have. This is obviously not true for every project, and perhaps Granian works perfectly fine for other use cases.
It might be possible to keep the design open so that these features can be implemented by the community later, if needed.


Further stream of thought

As @snuderl pointed out, a request timeout also matters because, as the team grows, devs tend to add all kinds of things like network requests without timeouts. At that point stray endpoints making these requests will simply hang, so you're forced to either rely on a server-level timeout or hunt down each network request and hack/patch in a timeout yourself.
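As an aside, a common stopgap for the outbound side is to wrap the HTTP client so every call gets a default timeout. A minimal sketch assuming the requests library; TimeoutSession is a hypothetical helper name, not something Granian or Gunicorn provide:

import requests

class TimeoutSession(requests.Session):
    # Apply a default timeout to every outgoing request unless the caller overrides it.
    def __init__(self, default_timeout=10):
        super().__init__()
        self.default_timeout = default_timeout

    def request(self, method, url, **kwargs):
        kwargs.setdefault("timeout", self.default_timeout)
        return super().request(method, url, **kwargs)

session = TimeoutSession(default_timeout=5)
# session.get("https://example.com")  # bounded at 5 seconds instead of hanging forever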

At work (a team of ~100 devs), these are the flags that keep the app running smoothly without too many problems:

--timeout=28 --workers=X --max-requests=250 --max-requests-jitter=10

The timeout of 28 seconds is just to keep it below 30 (the Heroku hard limit), so we still receive Python tracebacks. The initial idea was to progressively lower it and force all endpoints to "improve" performance over time until reaching a sane value.
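For reference, the same flags expressed in a gunicorn.conf.py (setting names are Gunicorn's documented config-file equivalents; values copied from the command line above, worker count left to the deployment):

# gunicorn.conf.py
timeout = 28
max_requests = 250
max_requests_jitter = 10
# workers = ...  (set per deployment size)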

cirospaciari (Contributor) commented:

I'm not sure Granian should handle all of these complexities. If the aim is 1:1 feature parity with Gunicorn, it would probably make more sense to implement Granian workers for Gunicorn.

Timeouts and backpressure (max requests) sound like features for Emmett, not for Granian.

mindflayer commented Oct 23, 2024

This also prevents autoscalers from scaling down when the peak is over.

gi0baro (Member) commented Oct 23, 2024

This also prevents autoscalers from scaling down when the peak is over.

🤔 Can you elaborate on this? I can't see any correlation between an autoscaler's behaviour and how a process manages its own child processes.

mindflayer commented:

If workers don't release the memory they took, and the autoscaler spawned more pods because of memory consumption, the number of pods will never go back down.
