Pretty much the same discussion as OTel JS #3940 but for Python.
The original issue statement follows:
While every application is different and there are many factors to consider when measuring performance, it would be useful to give users some idea of the SDK's performance characteristics. This issue is intended as a discussion of what kinds of benchmark tests would be useful, how often to run them, and so on.
This spec describes performance benchmark testing to measure the overhead of OTel SDKs. Specifically, it describes measuring
throughput - how many spans can be created and exported per second
instrumentation cost - CPU overhead of generating and exporting a given number of spans per second
The second one is especially important because it translates directly into the scaling and compute costs of running a service in the cloud.
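To make the two measurements concrete, here is a minimal stdlib-only sketch of how they could be captured. It uses a trivial dictionary-building function as a stand-in for span creation and export (the real benchmark would call the SDK's tracer instead); `create_span` and `measure` are hypothetical names, not part of any SDK.

```python
import time

def create_span():
    # Stand-in for SDK span creation + export. A real benchmark would
    # replace this with tracer.start_as_current_span(...) and an exporter.
    return {"name": "op", "attributes": {"key": "value"}}

def measure(n=100_000):
    # Wall time gives throughput; process time gives CPU cost.
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    for _ in range(n):
        create_span()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return {
        "spans_per_second": n / wall,      # throughput
        "cpu_seconds_per_span": cpu / n,   # instrumentation cost
    }

results = measure()
print(results)
```

A single run like this is noisy; in practice you would repeat it several times and report a median or distribution.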
I am planning to do some testing based on the spec and will provide the numbers here. Other outcomes that I think might be useful:
add a tool that allows anyone to run these tests
add a GitHub Action that runs perf tests automatically (e.g. per main commit or release)
provide guidelines to instrumentation authors for quantifying the overhead of their instrumentation
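For the last point, a guideline for instrumentation authors could boil down to a baseline-vs-instrumented comparison. A minimal sketch, assuming a trivial stand-in for both the application work and the span lifecycle (all names here are hypothetical, not SDK APIs):

```python
import time

def bench(fn, n=50_000):
    # Average wall time per call of fn.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

def handler():
    sum(range(50))  # stand-in for real application work

def instrumented_handler():
    span = {"name": "handler"}  # stand-in for span start
    try:
        handler()
    finally:
        span["ended"] = True    # stand-in for span end

baseline = bench(handler)
instrumented = bench(instrumented_handler)
overhead_pct = 100 * (instrumented - baseline) / baseline
print(f"overhead: {overhead_pct:.1f}% per call")
```

Reporting overhead as a percentage of the uninstrumented baseline makes numbers comparable across instrumentations with very different absolute costs.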
Looking back at the history of the JS SDK, I see that there used to be a basic benchmarking tool. I am curious why it was removed.