I have observed bandwidth and latency improvements from libvma in benchmark applications such as sockperf and iperf with small TCP message sizes. I am therefore trying to evaluate the performance improvement libvma brings to haproxy, to assess its use in layer 7 load balancing. My setup is two machines running nginx servers, with haproxy configured in http mode doing round-robin load balancing, and wrk running as a load generator on another machine on the network to benchmark the haproxy setup. Without libvma, wrk reports the following results for the given test:

Running with libvma:

Also running with VMA_SPEC=latency:

Both average latency and total bandwidth are worse with libvma. I have tried following the tuning guide, binding the process to cores on the same NUMA node as the NIC, but the results are still worse. This behaviour is strange, since vma_stats shows all packets as offloaded.

Are there any tips for tuning libvma parameters to improve performance for this particular workload?
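For reference, the load was generated with wrk along the lines of the command below; the thread count, connection count, duration, and host are illustrative assumptions, not values from the tests above:

# Hypothetical wrk invocation against the haproxy frontend on port 8025
# (thread/connection/duration values are assumed, not from the original test)
wrk -t8 -c256 -d30s http://<haproxy-host>:8025/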
The haproxy config file, for reference:
global
    user root
    group root
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

    nosplice

    # TUNING
    #tune.h2.initial-window-size 1048576

defaults
    timeout connect 50000
    timeout client 500000
    timeout server 500000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
    http-reuse safe

# My Configuration
frontend fe
    mode http
    bind *:8025
    default_backend be

backend be
    mode http
    balance roundrobin
    #option http-keep-alive
    server s0 10.48.34.122:80
    server s2 10.48.34.125:80
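For reference, a representative way to launch haproxy under libvma with the tuning described above might look like the following; the library name, NUMA node, and core IDs are assumptions, not details from the original setup:

# Preload libvma with its latency profile and pin haproxy to cores
# on the NIC's NUMA node (node 0 and cores 0-3 are assumed here)
VMA_SPEC=latency LD_PRELOAD=libvma.so \
numactl --physcpubind=0-3 --membind=0 \
haproxy -f /etc/haproxy/haproxy.cfg

# While a test runs, vma_stats (shipped with libvma) can be used to
# confirm that haproxy's traffic is actually being offloaded
vma_stats -p $(pidof haproxy)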
Config:
VMA_VERSION: 8.9.5-0
OFED Version: MLNX_OFED_LINUX-4.7-3.2.9.0
System: 4.9.0-9-amd64
Architecture: x86_64
NIC: ConnectX-5 EN network interface card

Hello @LeaflessMelospiza,
Networking benchmarks cannot always reproduce real-world application behaviour. haproxy has its own specifics, and it has not been studied well enough for there to be a recommended optimal VMA configuration.
You can try compiling VMA with the --enable-tso configure option.
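If you are building from the libvma source tree, that option is passed at configure time. A minimal sketch of the rebuild, assuming the usual autotools flow of the libvma repository:

# Rebuild libvma with TSO (TCP segmentation offload) support enabled
./autogen.sh
./configure --enable-tso
make
sudo make install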