http/web.py performance issues #305
Comments
I decided to investigate on #304. I know this may be controversial, but a) the major culprits seem to be in things that didn't change much, and b) since that revision is split into more functions, the flamegraph contains more interesting information. It's also easier to choose what to measure manually since the code is refactored into more functions.

Workstation: 16 GB DDR4 (~50% free), NVMe, i5-8250U (most cores were idle). I used two methods:

cProfile and flamegraphs

The Elixir modification that uses cProfile for profiling can be seen here. Note that the /usr/local/elixir/profiles/ directory needs to be writable by www-data for this to work. Flamegraphs can be created from the .prof files using flameprof and FlameGraph. I created three flamegraphs: one for the biggest identifier, one for a relatively big source file, and one for a big directory tree. In all three, a lot happens before web.py - I think these are imports. In the source flamegraph, guess_lexer_for_filename takes the majority of the time spent in web.py.

My own profiler

I wrote my own simple profiler that logs the execution time of selected blocks and functions. The statistical methodology is probably dubious (results are averages of percentages of total request time), but I mainly needed it as a sanity check for the flamegraphs (some information in there looked suspicious). The Elixir version that uses this profiler can be seen here. Results from a short, pseudo-random ("clicking around") request sample can be seen here. Results are divided into categories based on route (and file type in the case of the source route).

Conclusions

We could of course dig deeper and make more accurate measurements, but I think the conclusions are clear and actionable.
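For illustration, the per-request instrumentation described above boils down to something like the sketch below. This is not the actual patch: the `profiled` decorator, the `timed` helper and the `handle_request` entry point are made-up names; only the /usr/local/elixir/profiles/ output directory is taken from the comment.

```python
import cProfile
import os
import time
from contextlib import contextmanager

PROFILE_DIR = '/usr/local/elixir/profiles/'  # must be writable by www-data

def profiled(func):
    """Run func under cProfile and dump one .prof file per request."""
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        try:
            return profiler.runcall(func, *args, **kwargs)
        finally:
            # flameprof/FlameGraph can turn these .prof files into flamegraphs.
            out = os.path.join(PROFILE_DIR,
                               'req-%d-%d.prof' % (os.getpid(), time.time_ns()))
            profiler.dump_stats(out)
    return wrapper

@contextmanager
def timed(label):
    """Crude block timer in the spirit of the 'my own profiler' approach."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed_ms = (time.monotonic() - start) * 1000
        print('%s took %.1f ms' % (label, elapsed_ms))

@profiled
def handle_request(path):
    # Hypothetical stand-in for web.py's request handling.
    with timed('lexer selection'):
        time.sleep(0.05)  # placeholder for the expensive work being measured
    return 'OK'
```

Dumping one file per request keeps the profiles independent, so a single slow request can be inspected on its own.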
Ah, so even the …
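As background on the guess_lexer_for_filename observation above: Pygments can pick a lexer from the filename alone, or guess one by also analysing the file content, and the latter is considerably more expensive. A minimal comparison follows (the sample filename and code string are arbitrary; whether Elixir can rely on the filename alone is a separate question):

```python
from pygments.lexers import get_lexer_for_filename, guess_lexer_for_filename

code = 'int main(void) { return 0; }'

# Cheap: chooses a lexer from the filename pattern only.
lexer_by_name = get_lexer_for_filename('fork.c')

# Expensive: also runs the content analysis of every candidate lexer
# whose filename patterns match, on top of the name matching.
lexer_by_guess = guess_lexer_for_filename('fork.c', code)

print(lexer_by_name.name, '/', lexer_by_guess.name)
```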
Thanks for all the investigation. What is holding us back from switching over to WSGI?
Elixir still uses global state and we need to get rid of that first. Most, if not all, of it should be gone with the recent web.py refactorings.
Amazing! Then I'd say keep up the good work in this direction, so that we can move to WSGI in the near future.
I'd say WSGI and the follow-up perf improvements (00a160a, d946818, 7df3543) solved this issue. The prod server hasn't crashed since then, and response times are much lower than before (to the point that it can be hard to tell whether pages come from the cache or are generated live when using the website).
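For readers unfamiliar with the switch discussed here: under CGI every request pays for interpreter start-up and imports (visible as the "a lot happens before web.py" portion of the flamegraphs), while a WSGI application is loaded once and then called per request. That is only safe once per-request data no longer lives in module globals, which is why the refactorings above had to come first. A minimal sketch of that shape, with illustrative names rather than the actual Elixir code:

```python
from urllib.parse import parse_qs

def render(path, query):
    # Placeholder for the real page generation (project lookup, query, formatting).
    return '<html><body>%s</body></html>' % path

def application(environ, start_response):
    # WSGI entry point: all per-request state is derived from `environ`
    # and kept in locals, never in module-level globals shared across requests.
    path = environ.get('PATH_INFO', '/')
    query = parse_qs(environ.get('QUERY_STRING', ''))

    body = render(path, query).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Served through uWSGI, Gunicorn or mod_wsgi, the same process handles many requests, so imports and other one-time costs are paid once rather than on every page load.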
Issue for tracking http/web.py performance work. On the official instance, a directory load (without caching) takes about 500 ms, and a small file load (without caching) takes about 750 ms. That is a lot of time for what is essentially just reading data; I expect low-hanging fruit. Profile http/web.py to see where time is spent.
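To reproduce the kind of numbers quoted above, one can simply time uncached page loads. A minimal sketch follows; the URLs are examples following the official instance's URL scheme, not the exact pages that were measured, and a second hit to the same page will usually be served from the cache.

```python
import time
import urllib.request

URLS = [
    # Example pages following the official instance's URL scheme.
    'https://elixir.bootlin.com/linux/v6.6/source/kernel',
    'https://elixir.bootlin.com/linux/v6.6/source/kernel/fork.c',
]

for url in URLS:
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()
    elapsed_ms = (time.monotonic() - start) * 1000
    # Includes network latency, so this is only a rough upper bound
    # on the server-side generation time.
    print('%8.1f ms  %s' % (elapsed_ms, url))
```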