Merge pull request #392 from MetOffice/docs_update
Docs update
jfrost-mo authored Feb 23, 2024
2 parents 099a562 + b9de2b6 commit 1722be2
Showing 10 changed files with 88 additions and 82 deletions.
12 changes: 6 additions & 6 deletions docs/source/background/why-cset.rst
@@ -2,12 +2,12 @@ Why use CSET?
=============

When evaluating weather and climate models we are trying to understand the
characteristics of our model configurations, the physical processes that lead to
biases, and how they compare to other models (physical and machine learned),
model configurations and observations. This is an iterative process, and each
step of evaluation unveils more questions that need investigation. Evaluation
often follows an individual approach, with researchers spending significant
resource on scientific and technical development.

CSET aids in this by providing a flexible way to interrogate model data, using
diagnostics that can be quickly created by the combination of operators in
36 changes: 20 additions & 16 deletions docs/source/contributing/releases.rst
@@ -1,10 +1,9 @@
Release Management
==================

This will be some way off for now, but it is useful to have a policy/documented
process for making a release. Making stable releases is important as it gives
everyone something to rally around, whether developers wanting to get in a
certain feature, or users wanting to find out what has changed.
Making stable releases is important as it gives everyone something to rally
around, whether developers wanting to get in a certain feature, or users wanting
to find out what has changed.

Scientists like having stable versions to be able to finish their paper with, or
otherwise do their work without things changing.
@@ -20,8 +19,13 @@ effectively frozen. The relevant commit is also tagged with the release number.
Ideally releases should be mostly automated, as that helps prevent accidents
(like publishing a broken build) happening.

Part of this will be considering our versioning strategy. I'm leaning towards
`CalVer <https://calver.org/>`_.
Version numbers are based on `CalVer`_. Specifically, they follow the
``YY.MM.patch`` format, so the first release in February 2024 would be
``v24.2.0``. Patch releases should only contain bugfixes, and may be released
for older versions (e.g. ``v24.2.5`` could be released after February). We
should target one feature release a month, so things are not stuck on the trunk
for too long, though quiet periods (e.g. summer, Christmas) may see a release
skipped.

Backwards Compatibility Policy
------------------------------
@@ -35,19 +39,17 @@ policy that sets expectations about the way backwards incompatible (AKA
Some things to consider:

* How quickly backwards incompatible changes can be made.
* How long depreciation periods should be for different sizes of change.
* How long deprecation periods should be for different sizes of change.
* How the changes will be communicated with users.
* Guidance on avoiding making backwards incompatible changes where possible.

Making a Release
----------------

Making a release is mostly automated. The only thing that needs to be done in
the code is to ensure that the version number in ``pyproject.toml`` has been
incremented since the last release.

To create a release you should use the GitHub web UI. Go to the `Releases`_
page, and press `Draft a new release`_.
Making a release is mostly automated. With the use of `setuptools_scm`_ you
don't even need to increment a version number. To create a release you should
use the GitHub web UI. Go to the `Releases`_ page, and press `Draft a new
release`_.

.. image:: release_page.png
:alt: The GitHub release making page.
@@ -58,14 +60,16 @@ On this page you will need to add several things.
* The target branch to create the release from. (This might be ``main`` most of
the time.)
* A tag, which should be the version number prefixed with the letter ``v``. For
example version 1.2.3 should have the tag ``v1.2.3``.
example version 24.2.3 should have the tag ``v24.2.3``.
* A description of the changes in the release. Pressing the "Generate release
notes" button will include the titles of all merged pull requests, which is a
good starting point. It is especially important to highlight any changes that
might break backwards compatibility.
good starting point, though automated PRs should be removed. It is especially
important to highlight any changes that might break backwards compatibility.

Once that is all written you simply need to press "Publish release". A release
will be automatically made, and the package will be pushed to PyPI and beyond.

.. _CalVer: https://calver.org/
.. _Releases: https://github.com/MetOffice/CSET/releases
.. _Draft a new release: https://github.com/MetOffice/CSET/releases/new
.. _setuptools_scm: https://setuptools-scm.readthedocs.io/en/latest/
28 changes: 13 additions & 15 deletions docs/source/getting-started/create-first-recipe.rst
@@ -76,7 +76,8 @@ Recipe Steps

Just as in baking you would follow a recipe step-by-step, so does CSET. The
steps of the recipe are all under the ``steps`` key. Each block prefixed with a
``-`` (which makes a list in YAML) is a step, and they are run in order from top
to bottom.

Each step has an ``operator`` key, which specifies which operator to use. A
`complete list of operators is in the documentation`_, but for this tutorial we
@@ -93,10 +94,9 @@ to the input data as its implicit input.
   steps:
     - operator: read.read_cubes

Once we have read the data, we need to filter them down to the data we require
for our computations. ``filter.filter_cubes`` is the operator for that. It also
ensures that the CubeList returned by ``read.read_cubes`` is turned into a Cube.

.. code-block:: yaml
@@ -113,18 +113,16 @@ Cube.
Unlike the ``read.read_cubes`` operator, we have many key-value pairs in this
step. The other keys in the step are the named arguments that operator takes.
Each operator implicitly takes its first argument from the previous step, but
this can be overridden by explicitly providing it.

Note that arguments of operators can themselves be operators. This allows
nesting operators to use their output as arguments to other operators.
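
As a sketch of what that can look like, the step below provides one of its
arguments by nesting another operator as its value. The argument names
``constraint`` and ``stash``, and the STASH code, are illustrative assumptions
rather than the real signatures of these operators.

.. code-block:: yaml

   # Illustrative only: argument names and values are assumed, not taken from
   # the tutorial. Check the operator reference for the real ones.
   - operator: filter.filter_cubes
     constraint:
       operator: constraints.generate_stash_constraint
       stash: m01s03i236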


Next we reduce the dimensionality of the data ahead of plotting. In this case we
choose the mean of the time coordinate. The ``collapse.collapse`` operator
allows us to do this, and takes as parameters the coordinate to collapse and the
method by which it is done.

.. code-block:: yaml
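
   # Sketch of the collapse step described above. The "coordinate" and "method"
   # argument names are assumptions for illustration, not confirmed by the
   # tutorial text.
   - operator: collapse.collapse
     coordinate: time
     method: MEAN
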
8 changes: 4 additions & 4 deletions docs/source/getting-started/installation.rst
@@ -3,10 +3,10 @@ Installation

.. Tutorial saying how to install CSET. For edge cases should link elsewhere.
For a user of CSET the recommended way to install CSET is via conda_. It is
packaged on `conda-forge`_ in the ``cset`` package. The following command will
install CSET into its own conda environment, which is recommended to avoid
possible package conflicts.

.. code-block:: bash
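
   # Illustrative command, assumed rather than taken from the original page:
   # create a dedicated environment and install the cset package from conda-forge.
   conda create --name cset --channel conda-forge cset
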
18 changes: 9 additions & 9 deletions docs/source/getting-started/visualise-recipe.rst
@@ -34,12 +34,12 @@ To see more detail about each individual operator running we can use the
.. image:: recipe-graph-details.svg
:alt: Graph visualisation of a CSET recipe generated by cset graph, showing the operator details.

Now we can see the structure of the recipe graphically, we can delve into what
each operator is doing. The ellipses represent the operators, and the arrows
between them show where they pass their output to the next operators.

The first operator in the recipe is ``read.read_cubes``; however, it takes a
constraint on a STASH code, which is itself created by another operator,
``constraints.generate_stash_constraint``.

This operators-running-operators behaviour is further used in the next step,
@@ -48,10 +48,10 @@ are two constraints used here, the STASH code, and the cell methods. These are
combined into a single constraint by the ``constraints.combine_constraints``
operator before being used by the ``filters.filter_cubes`` operator.

Afterwards the cube has its time dimension removed by the mean method applied by
the ``collapse.collapse`` operator, so it becomes two-dimensional. Then it
passes to the ``plot.spatial_contour_plot`` and ``write.write_cube_to_nc``
operators to be plotted and saved.

You now know how to visualise a recipe, and a little about the operators it is
made up of. In the next tutorial you will learn to make your own.
31 changes: 31 additions & 0 deletions docs/source/reference/recipe-format.rst
@@ -57,3 +57,34 @@ the :doc:`/reference/operators` page.
      cell_methods: []

.. _YAML 1.2: https://yaml.org/

Using Recipe Variables
----------------------

A CSET recipe may contain variables. These are values filled in at runtime. They
allow making generic recipes that can handle multiple cases. This prevents the
need to have hundreds of recipes for very similar tasks where only minor changes
are required, such as switching from mean to median or iterating over a number
of variable names.

A variable can be added to a recipe by setting a parameter's value to the
variable name, prefixed with a dollar sign. This name may only contain upper
case letters and underscores. For example:

.. code-block:: yaml

   parameter: $MY_VARIABLE

When the recipe is run with ``cset bake`` the variable is replaced with a value
given on the command line. This is done using the variable name as an option,
for example:

.. code-block:: bash

   cset bake -i input -o output -r recipe.yaml --MY_VARIABLE='value'

The given value will be templated into the parameter so what runs is actually:

.. code-block:: yaml

   parameter: value
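
As a further illustration, a generic recipe step could template the statistic
itself, so that a single recipe covers the mean/median case mentioned above. The
argument names below are assumptions for illustration:

.. code-block:: yaml

   - operator: collapse.collapse
     coordinate: time
     # Supplied at runtime, e.g. --METHOD='MEAN' or --METHOD='MEDIAN'.
     method: $METHOD
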
6 changes: 3 additions & 3 deletions docs/source/usage/add-diagnostic.rst
@@ -4,9 +4,9 @@ Add a new diagnostic
In CSET, diagnostics are defined as a collection of steps in a `recipe file`_.

files into the includes/ directory of the workflow. They should then be added to
meta/rose-meta.conf so they appear in rose edit, and flow.cylc, so they are
included in the workflow.

Custom recipes should be directly included in include files. They should be
saved to the environment variable ``CSET_RECIPE``. Similarly they should then be
1 change: 0 additions & 1 deletion docs/source/usage/index.rst
@@ -9,4 +9,3 @@ This section contains guides on how to do specific things with CSET.
operator-recipes
workflow-installation
add-diagnostic
recipe-variables
2 changes: 2 additions & 0 deletions docs/source/usage/operator-recipes.rst
@@ -20,5 +20,7 @@ from a python script.
       Path("/path/to/output_file.nc")
   )

The format of recipe files is described in :doc:`/reference/recipe-format`.

There are a number of included recipe files you can use before having to create
your own. These can be retrieved with the :ref:`cset-cookbook-command` command.
28 changes: 0 additions & 28 deletions docs/source/usage/recipe-variables.rst

This file was deleted.
