tf.keras.metrics.F1Score produces ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32 #33
As an update, adjusting the update_state call to the following is a workaround:
However, there is additionally an issue when there is only a single output class as in a Binary Classification problem using Binary CrossEntropy:
@meekus-fischer,
You are facing the above error because the input shapes provided are not compatible with the expected shape (batch_size, output_dim). Could you please provide correctly shaped inputs and try again?
@tilakrayal
This type of workaround shouldn't be required for a binary classification problem. I should be able to use a model like this regardless of the type of classification problem. If I used this for a multiclass or multilabel problem, it would now break without additional logic to skip the expand_dims in those cases.
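To illustrate the objection, here is a hypothetical helper (`prepare_labels` is not part of any API) showing the rank-dependent branching that the expand_dims workaround would force on every call site:

```python
import tensorflow as tf

def prepare_labels(y_true, y_pred):
    # Hypothetical helper: the expand_dims workaround only applies when
    # labels arrive as a flat (batch,) vector, so every call site now
    # needs rank-dependent branching.
    y_true = tf.cast(y_true, y_pred.dtype)
    if y_true.shape.rank == 1:               # binary case: (batch,) -> (batch, 1)
        y_true = tf.expand_dims(y_true, axis=-1)
    return y_true                            # multiclass/multilabel pass through

binary_labels = tf.constant([1, 0, 1], dtype=tf.int32)
binary_preds = tf.constant([[0.8], [0.1], [0.6]], dtype=tf.float32)
print(prepare_labels(binary_labels, binary_preds).shape)   # (3, 1)

multilabel = tf.constant([[1, 0], [0, 1]], dtype=tf.int32)
ml_preds = tf.constant([[0.9, 0.2], [0.1, 0.8]], dtype=tf.float32)
print(prepare_labels(multilabel, ml_preds).shape)          # (2, 2)
```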
As an update to the above: this solution does not allow TensorBoard to be utilized, and results in the following error:
I am dealing with the same problem.
@tilakrayal I reproduced a very similar error on TF 2.14.0 (see below). Please have a look at the full notebook gist.
I have the same problem with the training phase in the case of binary classification.
This is reproducible with a small change to the tutorial example at https://www.tensorflow.org/tutorials/keras/text_classification with TF 2.15.0. Simply adding an F1Score as a metric:

```python
model.compile(optimizer=optimizer,
              loss=tf.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy'),
                       tf.keras.metrics.F1Score()])
```

crashes with the error described in this issue. A workaround to add F1Score to this particular tutorial is to change

```python
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
                                  batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
```

to

```python
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
                                  batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
train_labels = train_labels.astype(np.float64)
test_labels = test_labels.astype(np.float64)
```
System information.
Describe the problem.
During training, the model monitors tf.keras.metrics.F1Score; however, when F1Score.update_state is called, a ValueError is thrown:
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: <tf.Tensor 'cond/Identity_4:0' shape=(None,) dtype=int32>
which is the result of the following line of code in the FBetaScore Class:
Describe the current behavior.
The F1Score metric is unable to update_state; an error is thrown and the model cannot be trained.
Describe the expected behavior.
I would expect F1Score to update_state based on a y_true tensor with an int32 datatype and a y_pred tensor of float32 datatype without throwing an error.
In the tfa.metrics.FBetaScore code, the corresponding line is:
Is it possible that the new tf.keras.metrics code should be using tf.cast(...) instead of tf.convert_to_tensor(...)?
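The difference is easy to demonstrate in a minimal sketch: tf.convert_to_tensor refuses to re-type an existing tensor, raising exactly the ValueError quoted above, while tf.cast performs the dtype conversion explicitly:

```python
import tensorflow as tf

y_true = tf.constant([1, 0, 1], dtype=tf.int32)

# tf.convert_to_tensor will not change the dtype of an existing tensor:
# requesting float32 for an int32 tensor raises a ValueError.
try:
    tf.convert_to_tensor(y_true, dtype=tf.float32)
except ValueError as err:
    print(type(err).__name__)                  # ValueError

# tf.cast converts explicitly, as the tfa.metrics.FBetaScore code did.
print(tf.cast(y_true, tf.float32).dtype.name)  # float32
```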
Standalone code to reproduce the issue.
Cannot share full code. Can share custom model init / train_step which causes the error.
Source code / logs.