WIP: Add support for NaN and mixed types in infer_labels() #13
base: main
Conversation
Codecov Report
I see the use-case for supporting `NaN`.
Btw: how do we treat `NaN` at the moment?
We have not really thought much about it. For some metrics it just works:

```python
>>> audmetric.unweighted_average_recall(['a', 'b'], ['a', 'a'])
0.5
>>> audmetric.unweighted_average_recall(['a', 'b'], ['a', np.NaN])
0.5
```

Others did strange stuff:

```python
>>> audmetric.precision_per_class([0, 0, 2, 1], [0, 1, np.NaN, np.NaN])
{0: 1.0, 1: 0.0, 2: 0.0, nan: 0.0}
```

With this pull request it would look like this:

```python
>>> audmetric.precision_per_class([0, 0, 2, 1], [0, 1, np.NaN, np.NaN])
{0: 1.0, 1: 0.0, 2: 0.0}
```
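The behavior shown above could be sketched roughly as follows. This is a hypothetical `infer_labels` helper written for illustration, not the actual audmetric implementation:

```python
import math


def infer_labels(truth, prediction):
    """Collect the sorted set of labels, skipping NaN entries.

    Hypothetical sketch; the real audmetric code may differ.
    """
    labels = set(truth) | set(prediction)
    # NaN != NaN, so membership tests don't catch it reliably;
    # filter explicitly with math.isnan() instead.
    labels = {
        label for label in labels
        if not (isinstance(label, float) and math.isnan(label))
    }
    # Sort by string representation so mixed int/str labels don't
    # raise a TypeError during sorting.
    return sorted(labels, key=str)


print(infer_labels([0, 0, 2, 1], [0, 1, float("nan"), float("nan")]))
# [0, 1, 2]
```

With NaN dropped from the inferred labels, `precision_per_class` no longer reports a spurious `nan` class.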
I'm not so sure about this. If you have […]
But how do you map between strings and integers? Or do you assume they represent different classes? Why would a user do that? I still see the risk that integers and strings are accidentally mixed and the user will not notice it if we allow that.
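The accidental-mixing risk can be shown with a small sketch in plain Python (hypothetical data, not audmetric code):

```python
# A string '0' sneaks into otherwise integer labels, e.g. after
# reading predictions back from a CSV file.
truth = [0, 1, 0]
prediction = ['0', 1, 0]  # note the string '0'

# Inferring labels from the union silently treats 0 and '0'
# as two distinct classes.
labels = set(truth) | set(prediction)
print(len(labels))  # 3 classes instead of the intended 2
print(0 in labels, '0' in labels)  # True True
```

Nothing raises or warns here, which is exactly the scenario where a user would not notice the mix-up.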
I guess we either have to come up with a proper solution, or otherwise I think it would be safer to raise an error if we encounter mixed types.
OK, I created #14 and set this pull request to WIP.
Closes #12

This adds support for having `NaN` in truth or prediction, and for having mixed types like `['a', 0]` in truth and prediction.
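For context, a short illustration of why mixed types like `['a', 0]` need special handling when labels are inferred. Sorting by string representation is one possible workaround, not necessarily what this pull request does:

```python
# Python 3 cannot order int and str, so a naive sorted() over a
# mixed label set fails.
try:
    sorted(['a', 0])
except TypeError as err:
    print(err)  # '<' not supported between instances of 'int' and 'str'

# Sorting by string representation gives a deterministic order instead.
print(sorted(['a', 0], key=str))  # [0, 'a']
```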