Add Script to Calculate Summary Information for Benchmark Results #271
Conversation
…r user-specified dtype and metrics, and abstract input generator.
…AULT_SHAPES_2D_ONLY shapes
I suggest adding a shell script to ensure the order of operator benchmarks.
LGTM. But can we also show speedups over input sizes?
Since the command for generating benchmarks is quite flexible (it might specify op_name, a specific file, or simply run all operations without any specification), it would be challenging to pre-sort the operations to be benchmarked. However, sorting the operations in the benchmark results after the run is a great suggestion.
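The post-run sorting discussed here can be a small post-processing step over parsed results. A minimal sketch, assuming a hypothetical record format of `op_name`, `dtype`, and `speedup` fields (the project's actual log schema may differ):

```python
# Hypothetical sketch: sort parsed benchmark records by (op_name, dtype)
# after the run, so the report order is stable regardless of the order
# in which operations were benchmarked. The field names are assumptions
# for illustration, not the project's actual log schema.

def sort_results(records):
    """Return records ordered by (op_name, dtype) for a stable report."""
    return sorted(records, key=lambda r: (r["op_name"], r["dtype"]))

records = [
    {"op_name": "softmax", "dtype": "float16", "speedup": 1.8},
    {"op_name": "add", "dtype": "float32", "speedup": 1.2},
    {"op_name": "add", "dtype": "float16", "speedup": 1.5},
]
for r in sort_results(records):
    print(r["op_name"], r["dtype"], r["speedup"])
```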
lgtm
Purpose
This PR introduces a new script to calculate summary information for benchmark results, aimed at providing insights into the average speedup for each operation categorized by data type.
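The aggregation described above (mean speedup per operation, grouped by data type) can be sketched as follows. This is a minimal illustration, assuming a hypothetical row format of `(op_name, dtype, speedup)` tuples; the real parsing logic lives in the new script:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch of the summary computation: average the measured
# speedups for each (op_name, dtype) pair. The (op_name, dtype, speedup)
# tuple format is an assumption for illustration only.
def average_speedups(rows):
    groups = defaultdict(list)
    for op_name, dtype, speedup in rows:
        groups[(op_name, dtype)].append(speedup)
    return {key: mean(vals) for key, vals in groups.items()}

rows = [
    ("add", "float16", 1.4),
    ("add", "float16", 1.6),
    ("add", "float32", 1.1),
]
print(average_speedups(rows))
# {('add', 'float16'): 1.5, ('add', 'float32'): 1.1}
```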
Changes Made
Added a new script, summary_for_plot.py, that processes benchmark log files.
Usage
To generate summary information from benchmark logs, run the following command: