Just to mention that we should probably be a bit selective about which benchmarks to run. E.g., I don't think we need to run the benchmarks for other libraries; probably just the jsoniter JSON benchmark will do.
Regarding the Nested benchmarks, I think running just the 100 par / batched variants should suffice.
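Something like JMH's include/exclude regexes should cover both points. A minimal sketch using the programmatic runner; the benchmark class names in the regexes are placeholders, not the project's actual names:

```scala
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder

object SelectiveBench {
  def main(args: Array[String]): Unit = {
    val opts = new OptionsBuilder()
      // keep only the jsoniter JSON benchmark and the 100 par / batched
      // Nested variants (regexes are illustrative placeholders)
      .include(".*JsonBench.*")
      .include(".*NestedBench.*(100|batched).*")
      // skip benchmarks that only exercise other libraries
      .exclude(".*OtherLibsBench.*")
      .build()
    new Runner(opts).run()
  }
}
```

The same filtering works from the command line by passing the include regex to the JMH runner (or to sbt-jmh's `jmh:run`).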
We should probably also reduce warmup and measurement time to 1 second (3 seconds seems like overkill) to cut the total execution time of the benchmarks. At the same time, we might want to use @Fork(2) or @Fork(4), as I've noticed that in some cases benchmarks run significantly slower or faster between runs. I think this has to do with the infamous JDK-8180450 issue, and the reported 12% occurrence rate seems consistent with my observations.
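For reference, a sketch of what those settings look like as JMH annotations; the class and workload are hypothetical, and the iteration counts are illustrative since only the per-iteration time was discussed above:

```scala
import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)      // down from 3s
@Measurement(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS) // down from 3s
@Fork(2) // multiple forks average out the run-to-run variance from JDK-8180450
class JsonBench { // hypothetical benchmark class
  @Benchmark
  def parse(): Int = {
    "[1,2,3]".length // placeholder workload; returning a value avoids dead-code elimination
  }
}
```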
Finally, we probably want to check how we're doing comparatively when running multi-threaded vs single-threaded, so we might want to run both configurations.
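JMH supports this directly through the thread count (the `-t` CLI flag or `threads(...)` in the programmatic API), so no benchmark changes should be needed. A sketch, again with a placeholder include regex:

```scala
import org.openjdk.jmh.annotations.Threads
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder

object ThreadsBench {
  def main(args: Array[String]): Unit = {
    // run the suite twice: single-threaded, then one benchmark thread per core
    for (threads <- Seq(1, Threads.MAX)) {
      val opts = new OptionsBuilder()
        .include(".*JsonBench.*") // placeholder regex
        .threads(threads)
        .build()
      new Runner(opts).run()
    }
  }
}
```

On the CLI the equivalent is `-t 1` vs `-t max`.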
It would be nice to have a CI job that runs the benchmarks.
Then, the job should gather results and upload them somewhere (gist?) so that we could visualize them like this: https://jmh.morethan.io/?source=https://gist.githubusercontent.com/fwbrasil/27c8abec86e947e9719d41a859deb5d2/raw/814ca4ebb3a1294f8dd7bbec9b54ea0957b92434/jmh-result.json
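JMH can emit that JSON itself via `-rf json -rff jmh-result.json` (or the equivalent builder calls), so the job only has to upload the resulting file. A sketch, with the include regex again a placeholder:

```scala
import org.openjdk.jmh.results.format.ResultFormatType
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder

object BenchReport {
  def main(args: Array[String]): Unit = {
    val opts = new OptionsBuilder()
      .include(".*JsonBench.*")            // placeholder regex
      .resultFormat(ResultFormatType.JSON) // CLI: -rf json
      .result("jmh-result.json")           // CLI: -rff jmh-result.json
      .build()
    new Runner(opts).run()
    // the CI job then uploads jmh-result.json to a gist and links it via
    // https://jmh.morethan.io/?source=<raw gist URL>
  }
}
```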
We can probably copy Kyo's CI: https://github.com/getkyo/kyo/blob/main/.github/workflows/bench.yml