Given that Opesci-FD can provide PAPI measurements, we should now be able to automate the (re-)generation of roofline models. I think the roadmap for this roughly looks like this:
1. Move the benchmark/plotting scripts in the current `benchmarks` directory to `tests` and use the built-in compilation/execution tools to run the `eigenwave3d` test
2. Provide a utility script to run the Stream benchmark to establish the upper bandwidth limit
3. Use and verify the built-in utilities to compute arithmetic intensity (AI)
4. Return an object with performance measurements from `grid.execute()`, pass this back to the benchmark script, and store it using the pybench package
5. Use matplotlib and pybench to plot rooflines from the stored performance data
This should allow us to automate the process for models of different order and hopefully show how we move the AI line across the roofline plot. I'm of course open to alternative approaches, so please feel free to discuss.
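To make the plotting step (item 5) concrete, here is a minimal sketch of how the roofline could be drawn with matplotlib. The helper name `plot_roofline` is hypothetical (not an existing pybench or Opesci-FD function), and all numbers are placeholders for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_roofline(peak_gflops, bandwidth_gbs, kernels, filename='roofline.pdf'):
    """Plot a roofline: attainable GFlops/s = min(peak, AI * bandwidth).

    `kernels` maps a label to an (AI, measured GFlops/s) pair.
    """
    ai = np.logspace(-2, 2, 200)  # arithmetic intensity (Flops/Byte), log axis
    roof = np.minimum(peak_gflops, ai * bandwidth_gbs)

    fig, axis = plt.subplots()
    axis.loglog(ai, roof, 'k-', label='Roofline')
    for label, (ai_k, gflops_k) in kernels.items():
        axis.loglog(ai_k, gflops_k, 'o', label=label)
        axis.axvline(ai_k, color='grey', linestyle=':')  # the "AI line"
    axis.set_xlabel('Arithmetic intensity (Flops/Byte)')
    axis.set_ylabel('Performance (GFlops/s)')
    axis.legend(loc='lower right')
    fig.savefig(filename)

# Placeholder numbers for illustration only:
plot_roofline(peak_gflops=230.4, bandwidth_gbs=42.0,
              kernels={'eigenwave3d, order 4': (1.8, 55.0)})
```

Sweeping the stencil order would then just add one (AI, GFlops/s) point per order to `kernels`, which is how the AI line would visibly move across the plot.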
Where will this be called from (a separate folder)? Is it just for us to produce our rooflines, or will it be a user feature integrated with TJ's codegen script, e.g. a parameter to generate the roofline together with the code?
While this will mainly be a tool for our own roofline plots, it should be fully visible and accessible so that users can quickly and easily evaluate their own models. We won't be able to auto-generate the plot directly from the demo script itself, though, since the benchmarking platform is usually not the one you generate the plot on. But an automated two-step process should be possible.
Ideally we want to simply extend the functionality of the initial scripts in the `benchmarks` directory, although we might want to move them to `tests` first. The workflow will then be something like:
The first script will execute the model with the given parameters, measure runtime, achieved MFlops/s and AI values, and store them. The second script will read these values, alongside a (for now) user-given maximum bandwidth value for the architecture, and generate the roofline plot. Later on, we might also want to provide a utility to run and store Stream results.
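A minimal sketch of that two-step flow, using plain JSON as a stand-in for the pybench storage format (whose actual API isn't pinned down here); the measurement values and field names are hypothetical placeholders for what `grid.execute()` would eventually return:

```python
import json

# --- Script 1 (benchmark platform): run the model, store measurements. ---
# Hypothetical measurements dict, standing in for the results object that
# grid.execute() would return once the roadmap item above is implemented.
measurements = {'runtime': 12.3, 'mflops': 5.5e4, 'ai': 1.8}
with open('eigenwave3d_order4.json', 'w') as f:
    json.dump(measurements, f)

# --- Script 2 (plotting platform): read measurements, draw the roofline. ---
with open('eigenwave3d_order4.json') as f:
    data = json.load(f)
max_bandwidth_gbs = 42.0  # user-given max. bandwidth (GB/s) for the architecture
# Reuses the hypothetical plot_roofline helper sketched earlier in the thread.
plot_roofline(peak_gflops=230.4, bandwidth_gbs=max_bandwidth_gbs,
              kernels={'eigenwave3d': (data['ai'], data['mflops'] / 1000.0)})
```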