Code coverage intel analysis #298

Merged
merged 1 commit into from
Jul 15, 2024
Conversation

@abhaasgoyal commented Jul 5, 2024

Resolves #91

Description

  • Add a CLI option, `benchcab gen_codecov`, to run code coverage analysis for all runs, grouped by each realisation's Intel build. It runs only if `codecov: true` is set in config.yaml, and is added as a workflow step in fluxsite runs.
  • Add a new config.yaml keyword (`codecov`) which provides the compiler build flags needed to run `gen_codecov` later on.

Note: For now, code coverage analysis is only supported for fluxsite tasks.

Testing

On the following config.yaml file, run benchcab:

realisations:
  - repo:
      git:
        branch: main
    patch:
      cable:
        check:
          ranges: 1

fluxsite:
  experiment: AU-Tum

codecov: true

modules: [
  intel-compiler/2021.1.1,
  netcdf/4.7.4,
  openmpi/4.1.0
]
$ benchcab run -v

After the job has completed, run the code coverage utility:

$ benchcab codecov

On inspection, it generates the files as expected at runs/codecov/R0/.
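The gating described above can be sketched roughly as follows. This is an illustrative sketch only, not benchcab's actual implementation — the helper name `should_run_codecov` is hypothetical, and a plain dict stands in for the parsed config.yaml:

```python
# Illustrative sketch: gate the coverage workflow step on the new
# `codecov` key from config.yaml. `should_run_codecov` is a
# hypothetical helper name, not benchcab's real API.

def should_run_codecov(config: dict) -> bool:
    # Coverage analysis is opt-in; a missing or false key skips the step.
    return bool(config.get("codecov", False))

# A dict standing in for the parsed config.yaml from the example above.
parsed = {
    "codecov": True,
    "fluxsite": {"experiment": "AU-Tum"},
}
print(should_run_codecov(parsed))  # True
```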

codecov bot commented Jul 5, 2024

Codecov Report

Attention: Patch coverage is 58.00000% with 42 lines in your changes missing coverage. Please review.

Project coverage is 70.21%. Comparing base (bb0ab3d) to head (1dcd270).
Report is 5 commits behind head on main.

Files Patch % Lines
src/benchcab/benchcab.py 10.00% 18 Missing ⚠️
src/benchcab/model.py 55.55% 16 Missing ⚠️
src/benchcab/coverage.py 78.94% 8 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #298      +/-   ##
==========================================
+ Coverage   69.97%   70.21%   +0.23%     
==========================================
  Files          18       20       +2     
  Lines         986     1128     +142     
==========================================
+ Hits          690      792     +102     
- Misses        296      336      +40     


@abhaasgoyal abhaasgoyal force-pushed the 91-code-coverage-analysis branch 3 times, most recently from ef27b5b to 1b543fd Compare July 9, 2024 05:11
@abhaasgoyal abhaasgoyal marked this pull request as ready for review July 9, 2024 05:15
@abhaasgoyal abhaasgoyal requested review from a team and SeanBryan51 and removed request for a team July 9, 2024 05:19
@abhaasgoyal abhaasgoyal self-assigned this Jul 9, 2024
@ccarouge (Collaborator) left a comment


I only had a quick look but there is something in the design I don't like (if I got it right).

I don't think we need both the "build_option" entry in the config.yaml and the codecov command.

I can't think of a test case that would run benchcab tests with code compiled with the Debug option (the runs take too long with Debug). So the build_option entry is only there to ensure we can compile with code coverage on when running the coverage command. But setting build_option and running the coverage command are independent of each other.

So what happens if I forget to set build_option before running the coverage command? Shouldn't the coverage command force building with the code coverage options instead of being dependent on an independent entry in the config file?

Actually, I'm not sure how the codecov command is to be used. Are we supposed to have run benchcab first (benchcab run) with code coverage options on for the compilation and then run benchcab codecov?
Are you assuming we aren't going to check the code coverage manually?

@abhaasgoyal (Author) commented Jul 9, 2024

I can't think of a test case to run benchcab tests with a code compiled with Debug option

Oh I see — I had thought we wouldn't have to rebuild benchcab again, but if the tests take too long, we can leave debug as an option (we still need debug-codecov, right?). I was also thinking we could add more options in the future, like release-profile, which would benchmark the code in Release builds and needs different flags.

But setting build_option and the coverage command are independent of each other.

benchcab coverage is run after the fluxsite jobs are completed (similar to benchcab fluxsite-bitwise-cmp). However, build_option is set in config.yaml to pass in the flags before the build (which adds binary instrumentation), before the tests are run. So benchcab coverage depends on build_option having coverage enabled.

Are we supposed to have run benchcab first (benchcab run) with code coverage options on for the compilation and then run benchcab codecov?

Yes, after the jobs are completed

Are you assuming we aren't going to check the code coverage manually?

I thought benchcab codecov could make things simpler by providing a central utility function to run commands across all realisations. Now that I think about it, we can have designs like:

  1. Manual running of benchcab codecov: raise an error if config.yaml doesn't have build_option set to debug-codecov.
  2. Automated running of benchcab codecov: run it after the PBS job if config.yaml has debug-codecov, unless a skip flag disables it.

Both options above make the relationship explicit.

Happy to hear alternative designs/requirements.
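Design option 1 above could be sketched as a fail-fast precondition check. This is a sketch of the proposal only, not merged code — `check_codecov_preconditions` is a hypothetical name:

```python
# Sketch of design option 1 (hypothetical names): fail fast if the user
# runs the coverage command without the matching build option set.

def check_codecov_preconditions(config: dict) -> None:
    build_option = config.get("build_option", "")
    if build_option != "debug-codecov":
        raise RuntimeError(
            "benchcab codecov requires `build_option: debug-codecov` "
            f"in config.yaml, but found {build_option!r}"
        )

# A mismatched config raises before any coverage work is attempted.
try:
    check_codecov_preconditions({"build_option": "release"})
except RuntimeError as exc:
    print(f"error: {exc}")
```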

@SeanBryan51 (Collaborator) left a comment


Thanks for implementing the core functionality Abhaas.

I think adding a benchcab codecov subcommand is how I would approach the code coverage generation step as well.

I think I agree with Claire that specifying build_option isn't required for code coverage and is a bit outside the scope of the PR. I also recommend trying to keep changes minimal in the Model.build method as that functionality will eventually be replaced with spack.

Instead of enabling code coverage via build_option, have you considered introducing a global codecov option? This could either be a configuration file option (e.g. codecov: true) or a command line option for the relevant subcommands (e.g. benchcab run --codecov where --codecov is propagated to other commands like build, fluxsite-submit-job, etc).

src/benchcab/internal.py Show resolved Hide resolved
Comment on lines 98 to 105
if compiler_id != "ifort":
    msg = f"""For code coverage, the only supported compiler is `ifort`
    User has {compiler_id} in their environment"""
    raise RuntimeError(msg)
Collaborator:

Is this necessary? The compiler will throw an error if it cannot recognise the given compiler flags. This would also exit with an error when trying to run code coverage using Intel's ifx compiler, which isn't desired.

Author reply:

Yes, that is true, although it was decided that it would be clearer if benchcab gave the error message before handing off to the compiler (I can also add a check for ifx). @ccarouge thoughts?
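Extending the check to also accept ifx, as discussed above, could look like this. This is a sketch of the suggestion, not the merged code:

```python
# Sketch of the compiler check above, extended to also accept Intel's
# newer `ifx` compiler as suggested in the review (not merged code).
SUPPORTED_CODECOV_COMPILERS = ("ifort", "ifx")

def check_codecov_compiler(compiler_id: str) -> None:
    if compiler_id not in SUPPORTED_CODECOV_COMPILERS:
        msg = (
            "For code coverage, the supported compilers are "
            f"{', '.join(SUPPORTED_CODECOV_COMPILERS)}; "
            f"user has {compiler_id} in their environment"
        )
        raise RuntimeError(msg)

check_codecov_compiler("ifx")  # accepted without error
try:
    check_codecov_compiler("gfortran")
except RuntimeError as exc:
    print(exc)
```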

src/benchcab/model.py Outdated Show resolved Hide resolved
src/benchcab/model.py Outdated Show resolved Hide resolved
Comment on lines 155 to 170
with self.modules_handler.load([internal.DEFAULT_MODULES["cmake"], *modules]):

    # $FC is loaded after compiler module is loaded,
    # but we need runs/ dir relative to project rootdir
    env_fc = os.environ.get("FC", "")
    self.logger.debug(
        f"Getting environment variable for compiler $FC = {env_fc}"
    )
    build_flags = self.get_build_flags(mpi, build_option, env_fc)
    env_fc = None

    with chdir(path_to_repo):
Collaborator:

It would be nice if we didn't have to split the with statement in two. If you remove the inspection of the FC environment variable you should be able to query the CMake arguments outside the with statement entirely.

Author reply:

Yes, I'll merge it into a single with statement if compiler_id isn't required.
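The reviewer's suggestion — query the build flags before entering the module environment, then use one combined with statement — can be sketched with stand-in context managers (all names here are illustrative, not benchcab's real API):

```python
# Sketch of the reviewer's suggestion using stand-in context managers:
# compute the build flags outside the environment, then use a single
# combined `with` instead of two nested ones. Names are illustrative.
from contextlib import contextmanager

@contextmanager
def load_modules(names):
    print(f"modules loaded: {names}")
    yield

@contextmanager
def chdir(path):
    print(f"cd {path}")
    yield

# Flags queried up front, with no dependence on $FC being loaded.
build_flags = ["-prof-gen=srcpos"]

with load_modules(["cmake", "intel-compiler"]), chdir("runs/repo"):
    print(f"cmake {' '.join(build_flags)}")
```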

src/benchcab/coverage.py Outdated Show resolved Hide resolved
src/benchcab/benchcab.py Outdated Show resolved Hide resolved
@abhaasgoyal (Author):

Instead of enabling code coverage via build_option, have you considered introducing a global codecov option? This could either be a configuration file option (e.g. codecov: true) or a command line option for the relevant subcommands (e.g. benchcab run --codecov where --codecov is propagated to other commands like build, fluxsite-submit-job, etc).

I think it would be better to have codecov: true at the top level, since if we run benchcab gen_codecov at the end, we can check if config.yaml had already set it.

@SeanBryan51 (Collaborator) left a comment


I made a few minor suggestions, but overall I'm happy with it. Approving.

src/benchcab/cli.py Outdated Show resolved Hide resolved
src/benchcab/model.py Show resolved Hide resolved
tests/test_cli.py Outdated Show resolved Hide resolved
tests/test_model.py Outdated Show resolved Hide resolved
src/benchcab/utils/pbs.py Outdated Show resolved Hide resolved
src/benchcab/model.py Outdated Show resolved Hide resolved
src/benchcab/coverage.py Show resolved Hide resolved
@abhaasgoyal (Author):

I see — I've implemented the respective changes and tested them. I'll squash the commits once ready to be merged.

@abhaasgoyal abhaasgoyal added the priority:high High priority issues that should be included in the next release. label Jul 11, 2024
@SeanBryan51 (Collaborator) left a comment


LGTM

@abhaasgoyal abhaasgoyal removed the request for review from ccarouge July 11, 2024 04:37
@ccarouge (Collaborator) left a comment


Good to go. Just a minor change if you want to add it before merging.

src/benchcab/data/test/pbs_jobscript_skip_bitwise.sh Outdated Show resolved Hide resolved
@abhaasgoyal abhaasgoyal merged commit ab1d22d into main Jul 15, 2024
2 of 3 checks passed
@abhaasgoyal abhaasgoyal deleted the 91-code-coverage-analysis branch July 15, 2024 01:38

Successfully merging this pull request may close these issues.

Add Fortran code coverage analysis