
mod_http2: TLS throughput/record sizes #98

Open
icing opened this issue Mar 31, 2016 · 2 comments

Comments

@icing
Owner

icing commented Mar 31, 2016

The current implementation of mod_http2 buffers output in order to produce the desired TLS record sizes. This achieves much better throughput than passing frame metadata and resource data down the connection filters unchanged.

To illustrate the problem: serving response bodies via Apache's bucket brigades produces a small 9- or 10-byte frame-header bucket, followed by a larger bucket with the frame data (often 16 KB) and an optional bucket with padding bytes. Rinse and repeat.

The existing mod_ssl connection filters try to coalesce small buckets into larger ones for efficiency, but they cannot handle a chain of 9+16K+9+16K+... Small TLS record sizes produce too much overhead, so copying all data into larger buffers and sending those currently gives better performance. The copying should instead be optimized to inspect the bucket pattern and copy data only where necessary to achieve good TLS record sizes.

This should be implemented as part of a new connection output filter in mod_ssl, replacing the implementation in mod_http2. HTTP/1.1 connections would then benefit from it as well.

@notroj
Contributor

notroj commented Apr 29, 2020

> The existing mod_ssl connection filters try to coalesce small buckets into larger ones for efficiency, but they cannot handle a chain of 9+16K+9+16K+... Small TLS record sizes produce too much overhead, so copying all data into larger buffers and sending those currently gives better performance. The copying should instead be optimized to inspect the bucket pattern and copy data only where necessary to achieve good TLS record sizes.

I wasn't aware this was a problem for mod_http2 as well; does the buffering done in h2_conn_io.c exist to mitigate this problem?

The mod_ssl coalesce filter was made smarter on trunk but still won't cope with cases like 9+16K+9+16K. Will ponder this one.

@icing
Owner Author

icing commented Apr 29, 2020

Yes, I measured it, and the difference was significant. However, I was not certain about changing the mod_ssl filters, so I introduced the buffering instead.

This is a very h2-specific pattern, so if you do not find a good way to handle it generically, that is fine.
