Letting users define parameters and additional dimensions in YAML files #642

Open · irm-codebase opened this issue Jul 19, 2024 · 12 comments
Labels: enhancement, v0.7 (upcoming) · Milestone: 0.7.0
irm-codebase (Contributor) commented Jul 19, 2024

What can be improved?

Opening this issue after discussions with @sjpfenninger and @brynpickering.

Currently, the math data and schemas are somewhat mixed together. In particular, model_def_schema.yaml contains parameter defaults, which makes them hard to find.

Similarly, the current parsing of YAML files makes it unclear where and how new dimensions and parameters should be declared.

For parameters:

  • Declare current parameters in base.yaml instead.
  • Make users explicitly declare new parameters in YAML files, with at least a default value. We can just add a new params section for this (see the sketch below).
  • Parameters must at least have "default" declared (with some additional logic to convert 0 to null, or to warn users about it).
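
As a rough sketch of such a params section (the section name, keys and parameter name are illustrative only, not settled syntax):

```yaml
# Hypothetical `params` section in a model definition YAML file.
params:
  my_custom_param:
    default: 10  # mandatory; a default of 0 would be converted to null, or trigger a warning
    description: "A user-defined parameter available to custom math."
```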

For dimensions:

  • Make users declare new dimensions in YAML files. Possible important settings are "ordered" / "numeric" for dimensions like years and timesteps, where the distance between entries might matter (see the sketch below).
  • Ditto for groups (see Easily define group constraints with custom math #604 for discussion).
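
Again purely illustrative (the dimension names are placeholders; "ordered" / "numeric" are the settings mentioned above):

```yaml
# Hypothetical `dims` section in a model definition YAML file.
dims:
  vintage:
    ordered: true   # distance between entries matters (as for years or timesteps)
    numeric: true
  emissions_class:
    ordered: false
```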

The idea is to make model definition files less ambiguous. We should also think about how this affects schema evaluation and some of the parsing.

Version

v0.7

irm-codebase added the enhancement and v0.7 (upcoming) labels on Jul 19, 2024
irm-codebase changed the title from "Defining parameters and additional dimensions in model.yaml" to "Letting users define parameters and additional dimensions in YAML files" on Jul 20, 2024
irm-codebase added this to the 0.7.0 milestone on Aug 9, 2024
brynpickering (Member) commented

I've had a look at this and think it could work pretty well most of the time. However, there is the issue that parameter config would then only be analysed on calling calliope.Model.build. This is problematic for time resampling, since one of the options is how to resample a given parameter (taking the average or the sum).

One thing we could do is resample the data on-the-fly when calling calliope.Model.build (something I considered in the past), so the timeseries isn't resampled at instantiation. This would probably work fine, but would come with the disadvantage that either the results have to be in their own xarray dataset (with a different timesteps dimension) or the results are returned with a sparse timesteps dimension (e.g., every other timestep has a value at a 2h resolution).

If we continue with resampling at instantiation, there needs to be a way of defining parameter config that is separate from the math - does this get messy?
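
For illustration only (the resample_method key and parameter names are hypothetical, not existing options), per-parameter resampling config in YAML could look something like:

```yaml
# Purely hypothetical: a per-parameter resampling method, if param config lived in YAML.
params:
  my_demand_timeseries:
    default: 0
    resample_method: sum   # sum values when resampling timesteps
  my_availability_timeseries:
    default: .nan
    resample_method: mean  # average values when resampling timesteps
```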

irm-codebase (Contributor, Author) commented Aug 14, 2024

I think it's better to go with the option that is (long term) easier to manage, which would be resampling during build.

Functionality-wise, it makes sense: you are telling the model to build/prepare itself. If need be, we can split backend steps and just wrap around them:

  1. Model init
  2. Model build
    a. build.math
    b. build.timeseries
    c. build.whatever
  3. Model solve...

For now, let us assume that we control the build process (i.e., model.build will run everything).
If we want users to be able to run specific build steps, we'll need an attribute that tracks the state machine's status.

brynpickering (Member) commented

I agree that it is easier to manage on our side, but what about the data structures that come out the other end? If you have hourly input data and then resample to monthly data, you'll get a timeseries of 8760 elements on the inputs and only 12 elements on the outputs... Should resampled inputs be available somewhere? If yes, then we risk bloating the model if we resample to close to the input resolution (e.g. 2h). If no, visualising input and output timeseries data will be a pain.

irm-codebase (Contributor, Author) commented Aug 14, 2024

That's where the separation of post-processing (#638) and non-conflicting configurations (#626) comes in!

If done right, the selected "mode" might activate a post-processing step that makes the data cleaner (in the case of resampling, you could activate/deactivate a "de-sampler"?).
Although, to be honest, I do not see this as an issue for resampling... you request monthly data, you get monthly data. Otherwise we'd bloat the model output unnecessarily...

Also, post-processing should only support "official" modes. If users define their own math, it's up to them to post-process it (like any other piece of software, really).

brynpickering (Member) commented

I think this is separate from either of those. It's instead about the storage of data indexed over the time dimension. We would either need to split the inputs and results into two separate xarray datasets with different-length timesteps, or have them in one dataset with lots of empty data. It's perhaps more linked to #634.

If we keep two separate datasets, then on saving to file we'd merge them and still end up with sparse data in the time dimension.

irm-codebase (Contributor, Author) commented Aug 14, 2024

@brynpickering just to confirm:
"sparse" in this case would mean that xarray will not fill in those gaps with NaN?

Because otherwise we have an issue on our hands, since NaN is not sparse.

brynpickering (Member) commented

It will fill in those data with NaN. I mean sparse in the sense of a sparse matrix (more empty than non-empty entries), with NaN being used to describe the empty entries.

irm-codebase (Contributor, Author) commented Aug 16, 2024

In that case, I would avoid doing this unless we guarantee data sparsity through sparse (or something similar), because each of those NaN values takes 64 bits (the same as a float). Given the size of our models, this would result in very large datasets that are mostly empty: in the example above, 8748 of the 8760 hourly timesteps would be NaN in every result array.

Keeping inputs and results separate is better, I think, since it saves us from significant data bloat...

brynpickering (Member) commented

The size of our multi-dimensional arrays is a fraction of the memory required to solve a model. So I really wouldn't worry about very large, mostly empty datasets. For a recent test I ran, we're talking ~240MB of input data in memory for 25-60GB of peak memory use (depending on the backend) to optimise.

irm-codebase (Contributor, Author) commented

Hmm... you are right. We should go with the solution that is most understandable to users...
I'd say that is loading and resampling data in the same step, to avoid this situation.

To summarize:

  • The current implementation loads and resamples all files (including timeseries) during init, but loads math during build. This gets messy if params are part of what we have to parse.
  • The new approach could be either of these:
    • Load parameters and data_tables at the same time during build (users can no longer check loaded data before build, rendering init kind of pointless...)
    • Split the reading of files into two stages: handle params, data_tables and resampling during init, and the rest (variables, expressions, etc.) during build.

The second might be better from a user perspective, but it requires care not to tangle things up too much (hopefully the math rework helps keep it relatively clean?).

brynpickering (Member) commented Sep 30, 2024

The second option is more like:

  • Split the reading of files into two stages: load the math dict, data_tables and resampling during init, and handle the rest during build (converting variables, expressions, etc. into backend objects).

As in, the user needs all their own math content to be ready to go at init. This would then need to be stored as an intermediate CalliopeMath object. The final math passed to the backend is collated at build, based on any ad-hoc math dict provided as a kwarg and on whether to include the mode-specific "base" math.

The advantage of this is that you can define your own math under a math key in the model definition, rather than defining math files as a list of file references at the build stage. It's an advantage because it is more in line with how everything else is defined in YAML.
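
As a sketch of this (the constraint name and my_custom_param are placeholders; the constraint body follows the existing math YAML format, while the top-level math key inside the model definition is the new part):

```yaml
# Sketch: user math embedded in the model definition, rather than a list of math files.
math:
  constraints:
    my_flow_limit:
      foreach: [nodes, techs]
      where: my_custom_param
      equations:
        - expression: flow_cap <= my_custom_param
```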

A third option is to store the dims and parameters definitions in a separate sub-heading of the model definition (i.e., not linked to math), so that they more clearly define metadata for dims and params that are loaded at init. Then we revert to the user defining all the other math components as inputs at the build stage, as in #639.

irm-codebase (Contributor, Author) commented

I like the second option the most. It keeps everything neatly in one place, which is generally better for maintainability...
The third one could confuse users in my opinion.

Regarding software, if we have everything under math: and don't want to change things too much, would this be what you have in mind?

  • CalliopeMath.data now holds additional parameters and dims dictionaries (perhaps turning into a typed dict or dataclass). These are read from math: (see the sketch below).
  • Model.inputs / Model._model_data is pre-filled using dims and parameters.
  • The backend is generated using Model._model_data.inputs and the remaining CalliopeMath.data (variables, global_expressions, constraints, piecewise_constraints, objectives).
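
Something like this, purely as an illustration of the layout (none of it is settled syntax):

```yaml
# Hypothetical layout: dims/params metadata read at init, the rest converted at build.
math:
  dims:
    vintage:
      ordered: true
  parameters:
    my_custom_param:
      default: 0
  variables: {}    # converted to backend objects at build
  constraints: {}  # converted to backend objects at build
```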
