
(still under development) add NSE function #217

Open
wants to merge 15 commits into base: develop

Conversation

engrmahadi (Collaborator) commented:

A new score or metric should be developed on a separate feature branch, rebased against the main branch. Each merge request should include:

- The implementation of the new metric or score in xarray, ideally with support for pandas and dask
- 100% unit test coverage
- A tutorial notebook showcasing the use of that metric or score, ideally based on the standard sample data
- API documentation (docstrings) using Napoleon (Google) style, making sure to clearly explain the use of the metrics (a minimal example follows this list)
- A reference to the paper which describes the metrics, added to the API documentation
- For metrics which do not have a paper reference, an online source or reference should be provided
- Metrics which are still under development, or which have not yet had an academic publication, will be placed in a holding area within the API (i.e. scores.emerging) until the method has been properly published and peer reviewed. The 'emerging' area of the API is subject to rapid change but contains metrics of sufficient community interest to include, similar to a 'preprint' of a score or metric.
- Add your score to summary_table_of_scores.md in the documentation
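
As an illustration of the docstring requirement above, a minimal Napoleon (Google) style skeleton might look like the following (the function name and arguments here are placeholders, not part of this pull request):

def example_score(fcst, obs, *, reduce_dims=None, preserve_dims=None):
    """One-line summary of what the score measures.

    A longer description, including the defining formula and how to interpret
    the result.

    Args:
        fcst: Forecast or predicted values.
        obs: Observed values.
        reduce_dims: Dimensions to reduce along. Defaults to None.
        preserve_dims: Dimensions to preserve. Defaults to None.

    Returns:
        An xarray object containing the computed score.

    References:
        The paper (or online source) that describes the score.
    """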

All merge requests should comply with the coding standards outlined in this document. Merge requests will undergo both a code review and a science review. The code review will focus on coding style, performance and test coverage. The science review will focus on the mathematical correctness of the implementation and the suitability of the method for inclusion within 'scores'.

A GitHub ticket should be created explaining the metric being implemented and why it is useful.

added NSE function

@nicholasloveday (Collaborator) left a comment:

Thanks for making a start on this @engrmahadi

A few general comments:

  1. Can you please update it so that it works with n-dimensional arrays and appropriately preserves dimensions? (A rough sketch of one possible approach follows this comment.)
  2. Can you please add some unit tests? This will make it easier for you to check that it is working.
  3. Can you delete the .DS_Store file? Consider adding it to the .gitignore.
  4. Support for lists as inputs hasn't been done anywhere else in scores. Do you have a need for it? If so, we should discuss whether scores should support lists or whether we should just let users convert their lists to xarray or pandas objects (which is what I would prefer) (@tennlee).

Please reach out if you want any help, particularly with setting up the unit tests.
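
As a starting point for point 1 above, a rough sketch of a dimension-aware NSE using plain xarray is shown below. This is only an illustration of the idea, not the implementation in this pull request; a real version would also need to hook into the existing scores dimension handling (e.g. scores.utils.gather_dimensions) and the weights and NaN handling.

import numpy as np
import xarray as xr


def nse_sketch(fcst: xr.DataArray, obs: xr.DataArray, *, reduce_dims=None) -> xr.DataArray:
    # Reduce over the requested dimensions (all shared dimensions by default)
    # so that any remaining dimensions are preserved in the result.
    if reduce_dims is None:
        reduce_dims = [d for d in fcst.dims if d in obs.dims]
    squared_error = (fcst - obs) ** 2
    obs_anomaly = obs - obs.mean(dim=reduce_dims)
    # NSE = 1 - sum of squared errors / sum of squared obs anomalies
    return 1 - squared_error.sum(dim=reduce_dims) / (obs_anomaly**2).sum(dim=reduce_dims)


# Example: reduce over "time" only, so the "station" dimension is preserved.
fcst = xr.DataArray(np.random.rand(3, 5), dims=("station", "time"))
obs = xr.DataArray(np.random.rand(3, 5), dims=("station", "time"))
print(nse_sketch(fcst, obs, reduce_dims=["time"]))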

src/scores/continuous/standard_impl.py
return fcst


def nse(fcst, obs, reduce_dims=None, preserve_dims=None, weights=None, angular=False):

Can you please add type hints?


Also add a "*" after obs, to force the keyword arguments to be keyword-only.
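
Taken together, the two requests above suggest a signature roughly like the one below. This is only a sketch; the type aliases are assumed to come from scores.typing, as used by the other continuous metrics.

from typing import Optional

import xarray as xr

# Assumed import path for the flexible type aliases used elsewhere in scores.
from scores.typing import FlexibleArrayType, FlexibleDimensionTypes


def nse(
    fcst: FlexibleArrayType,
    obs: FlexibleArrayType,
    *,  # everything after this point must be passed as a keyword argument
    reduce_dims: Optional[FlexibleDimensionTypes] = None,
    preserve_dims: Optional[FlexibleDimensionTypes] = None,
    weights: Optional[xr.DataArray] = None,
    angular: bool = False,
) -> FlexibleArrayType:
    ...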

Comment on lines 410 to 411
# if not isinstance(obs, xr.DataArray):
# obs = xr.DataArray(obs)

Delete commented-out code.


def lst_to_array(fcst):
    # Convert lists to xarray DataArrays
    if not isinstance(fcst, xr.DataArray):

This is problematic if the input is an xr.Dataset

return fcst


def nse(fcst, obs, reduce_dims=None, preserve_dims=None, weights=None, angular=False):

Can you make all args (apart from fcst and obs) keyword-only args?

return fcst


def nse(fcst, obs, reduce_dims=None, preserve_dims=None, weights=None, angular=False):

We are now using is_angular instead of angular in the other metrics. Can you update it to be is_angular?

# add NSE code


def lst_to_array(fcst):

If this is run on obs data too, perhaps the arg name should be updated to something more general.

Can you also add a docstring and type hint? (A rough sketch follows this comment.)

We need to consider whether we want to support lists as inputs to scores functions. Do you have a use case for this? If not, then I would consider dropping support for lists and allowing the user to convert their data to xarray or pandas. @tennlee do you have an opinion on scores supporting lists as inputs?
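
If list support is kept, the renaming, docstring and type hint requests could look roughly like the sketch below (the name to_xr_array and the accepted types are assumptions for illustration):

from typing import Union

import xarray as xr


def to_xr_array(data: Union[list, xr.DataArray, xr.Dataset]) -> Union[xr.DataArray, xr.Dataset]:
    """Convert a plain list to an xr.DataArray, passing xarray objects through unchanged.

    Args:
        data: Forecast or observation data as a list or an xarray object.

    Returns:
        The input data as an xarray object.
    """
    if isinstance(data, (xr.DataArray, xr.Dataset)):
        return data
    return xr.DataArray(data)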

reduce_dims (FlexibleDimensionTypes, optional): Dimensions to reduce along. Defaults to None.
preserve_dims (FlexibleDimensionTypes, optional): Dimensions to preserve. Defaults to None.
weights (xr.DataArray, optional): Weights to apply to error calculation. Defaults to None.
angular (bool, optional): Whether to treat data as angular (circular). Defaults to False.

Suggested change
angular (bool, optional): Whether to treat data as angular (circular). Defaults to False.
is_angular (bool, optional): Whether to treat data as angular (circular). Defaults to False.

mean_obs = np.mean(obs)
diff_mean = obs - mean_obs
# calculate nse
nse = 1 - np.sum((error) ** 2) / np.sum((diff_mean) ** 2)

You are redefining a name from the outer scope (this variable has the same name as the function).


Also, this is not handling dimensions correctly. If you run this code trying to preserve a dimension, it will produce an error later on when scores.utils.gather_dimensions is called.

# Apply weights
if weights is not None:
    error = error * weights
    diff_mean = diff_mean * weights

This weighting isn't doing anything as diff_mean isn't used anywhere
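
For reference, one possible way to make the weights actually influence the result (an assumption about the intent, not the current code in this pull request) is to weight both the squared error and the squared obs anomaly before summing:

import numpy as np
import xarray as xr

fcst = xr.DataArray(np.array([3.0, 4.0, 5.0, 6.0, 7.0]))
obs = xr.DataArray(np.array([2.0, 3.0, 4.0, 5.0, 6.0]))
weights = xr.DataArray(np.array([1.0, 2.0, 3.0, 2.0, 1.0]))

# Weight both the numerator and the denominator terms.
numerator = (weights * (fcst - obs) ** 2).sum()
denominator = (weights * (obs - obs.mean()) ** 2).sum()
nse_value = 1 - numerator / denominator
print(float(nse_value))  # 0.25 here, whereas ignoring the weights gives 0.5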

@tennlee (Collaborator) commented Apr 24, 2024:

I'm sure you're already aware of this, but obviously unit test coverage will be needed.

@tennlee changed the title from "add NSE function" to "(still under development) add NSE function" on May 14, 2024
@tennlee added this to the Wishlist milestone on May 14, 2024
@engrmahadi (Collaborator, Author) commented:

All checks have passed. Please check, @nicholasloveday @tennlee.
Thank you.

@tennlee (Collaborator) commented May 25, 2024:

Thanks for the update, Mahadi. I will try to review the changes this week - I have not been able to get to it before now. I just wanted to acknowledge your update and let you know that I will come and look through this soon.

@nicholasloveday (Collaborator) left a comment:

I've had a look through this again, Hasan. Thanks for some of those updates. Once you've addressed all this feedback, I'll review the tutorial notebook.

Please reach out to me if you have any questions so that we can get this pull request over the line.

@@ -108,3 +108,5 @@ dmypy.json
# Cython debug symbols
cython_debug/

#ignore directory
.DS_Store

Thanks for adding this. It looks like this pull request is still trying to add that file. Can you please remove the file?

@@ -9,7 +9,7 @@

project = "scores"
copyright = "Licensed under Apache 2.0 - https://www.apache.org/licenses/LICENSE-2.0"
release = "0.9"
release = "0.8.1"

I don't think this change is needed.

@@ -13,7 +13,7 @@
import scores.sample_data
import scores.stats.statistical_tests # noqa: F401

__version__ = "0.9"
__version__ = "0.8.1"

Remove this change.

Comment on lines +394 to +399
fcst (FlexibleArrayType or list): Forecast or predicted variables.
obs (FlexibleArrayType or list): Observed variables.
reduce_dims (FlexibleDimensionTypes, optional): Dimensions to reduce along. Defaults to None.
preserve_dims (FlexibleDimensionTypes, optional): Dimensions to preserve. Defaults to None.
weights (xr.DataArray, optional): Weights to apply to error calculation. Defaults to None.
angular (bool, optional): Whether to treat data as angular (circular). Defaults to False.

If you add in type hints, you can remove the types from the docstring.

Comment on lines +421 to +423
if isinstance(fcst, xr.Dataset):
    data_variable_name = list(fcst.data_vars.keys())[0]  # Get the name of the first data variable
    fcst = fcst.to_array(dim=data_variable_name)

I don't think that we want to convert xr.Datasets to xr.DataArrays. I think this will only take out the first data_var anyway.

The code needs to be updated to work on Datasets with all data_vars


I ran across a similar thing in another area of scores, whereby the calculation was really only valid on a DataArray. In that case, it would be possible to split out each variable from a Dataset and then do the calculation on each variable, then reconstruct the results. Is the same kind of thing going on here?
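
A sketch of the per-variable approach described above (the wrapper name is hypothetical, and it assumes fcst and obs share the same data variable names):

import xarray as xr


def apply_per_data_var(metric, fcst: xr.Dataset, obs: xr.Dataset) -> xr.Dataset:
    # Run a DataArray-only metric on each data variable separately and
    # reassemble the results into a Dataset, rather than collapsing the
    # Dataset into a single DataArray.
    return xr.Dataset({name: metric(fcst[name], obs[name]) for name in fcst.data_vars})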

Comment on lines +20 to +21
fcst_data = np.array([[3, 4, 5, 6, 7], [3, 4, 5, 6, 7], [3, 4, 5, 6, 7]])
obs_data = np.array([[2, 3, 4, 5, 6], [2, 3, 4, 5, 6], [2, 3, 4, 5, 6]])

Please add some NaNs in to ensure the function correctly handles missing data
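
A possible shape for such a test is sketched below. The import path and the expected NaN behaviour (skip versus propagate) are assumptions to be settled in this pull request.

import numpy as np
import xarray as xr

from scores.continuous import nse  # assumed import path once the function is exposed


def test_nse_handles_nans():
    fcst = xr.DataArray(np.array([[3.0, np.nan, 5.0, 6.0, 7.0], [3.0, 4.0, 5.0, 6.0, 7.0]]))
    obs = xr.DataArray(np.array([[2.0, 3.0, np.nan, 5.0, 6.0], [2.0, 3.0, 4.0, 5.0, 6.0]]))
    result = nse(fcst, obs)
    # Placeholder expectation: missing pairs are skipped rather than making
    # the whole result NaN. Adjust once the desired behaviour is agreed.
    assert np.isfinite(result).all()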

@@ -0,0 +1,71 @@
import numpy as np

Can you also add tests that check that the behaviour is correct, if the denominator is zero?
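
And a sketch for the zero-denominator case: constant obs makes sum((obs - mean(obs))**2) zero, and the expected result (NaN, -inf, or an error) is a design decision for this pull request, so no particular value is asserted here.

import numpy as np
import xarray as xr

from scores.continuous import nse  # assumed import path once the function is exposed


def test_nse_zero_denominator():
    fcst = xr.DataArray(np.array([1.0, 2.0, 3.0]))
    obs = xr.DataArray(np.array([2.0, 2.0, 2.0]))  # constant obs -> zero denominator
    result = nse(fcst, obs)
    # Replace this with an assertion on the agreed behaviour (e.g. NaN or -inf).
    assert result is not None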


Can you add tests that check that the preserve_dims and reduce_dims work as expected?

obs = np.array([2, 3, 4, 5, 6])
weights = np.array([1, 2, 3, 2, 1])
nse_value = nse(fcst, obs, weights=weights)
assert nse_value == 0.5

Just making a note to check this output since the weights aren't doing anything in the current implementation of this function. I wouldn't expect the value to be 0.5, but I'd need to work it out with pen and paper.

ValueError: If the input arrays are of different lengths or incompatible types.

References:
- references

Suggested change
- references

References:
- references
- https://en.wikipedia.org/wiki/Nash–Sutcliffe_model_efficiency_coefficient
- https://hess.copernicus.org/articles/26/4801/2022/

This paper doesn't really focus on NSE - it uses it in an applied sense. I suggest that you remove this reference and use the original paper:
Nash, J.E. and Sutcliffe, J.V., 1970. River flow forecasting through conceptual models part I—A discussion of principles. Journal of Hydrology, 10(3), pp.282-290.
