# Autotest

Autotest provides the test suite for pywatershed. The tests include both

- standalone tests, and
- domain tests.

Standalone tests require no input or output files beyond what the testing framework supplies. These tests generally run quickly (e.g., testing the basic logic of a class, type requirements, or results).
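For orientation, here is a minimal sketch of what a standalone test might look like (the test name is hypothetical, and the check that pywatershed exposes a `Control` attribute is an assumption for illustration):

```python
# A hypothetical standalone test: no domain input/output files required.
import pywatershed as pws


def test_control_exists():
    # Asserts on a basic type/API requirement entirely in memory, so it
    # runs quickly and needs nothing beyond the testing framework itself.
    assert hasattr(pws, "Control")
```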

Domain tests require input or output files for an NHM domain. These tests scale with the size (number of HRUs) of a domain. The test suite takes arguments related to domain tests, which are described in the Usage section below.

## Test data

As of this writing, the majority of tests are domain tests. The vast majority of the domain test data must be generated, by running binaries included in the repository, **before** running the autotests.

To generate the test data, run the following command from autotest/:

```
python generate_test_data.py -n=auto
```

This command is generally sufficient, but more options are available, including passing options through to pytest. Run `python generate_test_data.py --help` for more details. For example, if you are in a hurry to run and test a single domain, you may restrict (or expand) the domains processed with the `--domains` option, as shown below. To get verbose output from pytest, pass `-vv`.
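For example, a sketch restricting generation to a single domain with verbose pytest output (the exact flag spelling should be confirmed against the `--help` output):

```
python generate_test_data.py -n=auto --domains drb_2yr -vv
```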

Using the above command allows the autotests to check that the test data in `test_data/` have been generated, and generated by the current version of pywatershed. These checks are meant to catch gross errors with autotesting but will not cover every possible testing situation. When in doubt about tests, it is best practice to start over by re-generating the test data.

There are temporary situations where the errors raised by these test-data checks are unwarranted; in those cases, the stopgap solution is to disable the errors by editing `autotest/conftest.py`.

Please see `test_data/README.md` for additional details on how to generate the test data.

## Usage

```
cd autotest
pytest -n=auto -vv
```

Pytest options can be explored via `pytest --help`. The custom options for pywatershed are buried deep in the output of that help call; they are:

```
Custom options:
  --domain_yaml=DOMAIN_YAML
                        YAML file(s) for indiv domain tests. You can pass multiples of this
                        argument. Default value (not shown here) is
                        --domain_yaml=../test_data/drb_2yr/drb_2yr.yaml
  --print_ans           Print results and assert False for all domain tests
  --all_domains         Run all test domains
```

The default domain tested is `drb_2yr`. All domains present in `test_data/` can be tested using `--all_domains`. Requesting a specific domain or multiple domains is done by passing one or more `--domain_yaml` arguments, as shown below.
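For example, a sketch of testing the default domain plus a second one (the second YAML path here is hypothetical):

```
pytest -n=auto -vv \
  --domain_yaml=../test_data/drb_2yr/drb_2yr.yaml \
  --domain_yaml=../test_data/some_other_domain/some_other_domain.yaml
```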

## domain_yaml details

This section is about creating or working with the domain_yaml file when writing tests.

The domain_yaml file provides information to the autotests. The domain_yaml file for a domain directory somewhere/ will be somewhere/somewhere.yaml. This YAML file includes paths to static domain inputs (e.g. CBH forcing files, parameter files), paths to static or reference model output (from PRMS/NHM), and the answers to domain tests. Examples of domain_yaml files can be found in each domain directory.
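As a rough sketch, a domain_yaml may be shaped like the following. The field names come from the path-resolution list and the `test_ans` key described below; the values are made up for illustration:

```yaml
# Illustrative sketch of somewhere/somewhere.yaml; values are invented.
param_file: myparam.param      # static domain input: parameter file
control_file: control.test     # static domain input: control file
cbh_nc: prcp.nc                # static domain input: CBH forcing file
prms_run_dir: .                # reference PRMS/NHM run directory
prms_output_dir: output        # reference PRMS/NHM model output
test_ans: {}                   # answer key; see "Answers for domain tests" below
```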

Some details on the contents of the YAML file are given below.

### Path resolution

The test configuration for autotest (`autotest/conftest.py`) provides special path resolution, relative to the domain_yaml file, for the fields of that file named in the following list:

```python
["param_file", "control_file", "cbh_nc", "prms_run_dir", "prms_output_dir"]
```

Additional fields can be added to this list to provide path resolution for new fields in the YAML file.
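The idea is that relative paths in those fields are interpreted relative to the directory containing the domain_yaml file. Here is a minimal sketch of that resolution (an illustration of the idea, not the actual conftest.py implementation):

```python
# Sketch: resolve the listed fields relative to the domain_yaml's directory.
from pathlib import Path

import yaml

path_fields = ["param_file", "control_file", "cbh_nc", "prms_run_dir", "prms_output_dir"]


def load_domain(domain_yaml: str) -> dict:
    yaml_path = Path(domain_yaml)
    domain = yaml.safe_load(yaml_path.read_text())
    for field in path_fields:
        if field in domain:
            # Relative values become absolute paths anchored at the YAML file.
            domain[field] = (yaml_path.parent / domain[field]).resolve()
    return domain
```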

### Answers for domain tests

Certain tests have answers stored in the domain YAML. This "answer key" is stored under the top-level key `test_ans`. This can be seen in the example files mentioned above. Generally, for an autotest `test_x.py`, the key `x` will be provided below `test_ans`, and below it will be whatever data is used in that test.
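For example, for a hypothetical autotest `test_x.py`, the answer key might be shaped like this (variable names and values are illustrative only):

```yaml
test_ans:
  x:                   # matches autotest test_x.py
    some_variable:
      mean: 1.234      # summary statistic checked by the test
```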

The test answers for a given test are typically a summary statistic computed on model data in memory. The answers vary with domain and are most easily collected by running the tests themselves while verifying the accuracy of the test through expert judgement. The answers enshrined in the domain YAML indicate when test results change. Because test results change from time to time, a convenience utility function, `assert_or_print`, is provided in `utils.py`. Using this function allows the options `--print_ans -s` to be passed at run time (note that `-s` prints output to the terminal) to print all the new values that should be updated in the domain YAML, as shown below.
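For example, a sketch of printing the new answer values (`test_x.py` is a placeholder for the test module being updated):

```
pytest test_x.py --print_ans -s
```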