Current Behavior:
We are currently using the RetinaNet pretrained model and transfer learning it onto a custom dataset. For this dataset we have an adapted jittered resize layer that checks, after performing the crop, whether at least X% of the area of each original bounding box remains in the image, and otherwise excludes that bounding box (we are using #2484 to achieve this). This works great in isolation when testing with the jittered resize demo in KerasCV. However, during training (specifically at the end of the first epoch, when it runs validation) it fails with an out-of-bounds exception, e.g. indices[1,53275] = 0 is not in [0, 0) [[{{node retina_net_label_encoder_1/GatherV2_1}}]] (I've attached the full stack trace: stacktrace.txt). From what I can understand of the stack trace, it fails in keras_cv/src/models/object_detection/retinanet/retinanet_label_encoder.py at line 138 (commit ba2556c).
So I think this gather function might not work when the input has zero elements, as would be the case here. If I set the minimum_box_area_ratio to 0% (so it doesn't exclude anything) it trains normally as before. Setting it to anything non-zero will prune some boxes, and if any training example is left with zero boxes it causes this exception.
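For what it's worth, the suspected failure mode can be reproduced in isolation: gathering from a tensor with zero rows fails for any index, since the valid index range is [0, 0). This is only a minimal sketch of the hypothesis, not the actual label-encoder code:

```python
import tensorflow as tf

# Minimal sketch of the suspected failure: gathering from a tensor with
# zero rows raises an out-of-range error for any index.
empty_boxes = tf.zeros([0, 4])   # an example whose boxes were all pruned
indices = tf.constant([0])       # the label encoder still gathers index 0
try:
    tf.gather(empty_boxes, indices)
except tf.errors.InvalidArgumentError as e:
    print(e.message)  # e.g. "indices[0] = 0 is not in [0, 0)"
```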
Expected Behavior:
Should be able to pass training examples with zero boxes to the RetinaNet model, and training should continue regardless. Alternatively, there could be a mechanism to skip training examples that contain no labelled boxes.
Steps To Reproduce:
1. Apply the changes linked above for the adapted jittered resize (it's a very minor change that adds to the bounding_box.clip_to_image function)
2. Create a JitteredResize layer and set the minimum_box_area_ratio to, say, 0.5
3. Attempt to train the RetinaNet model with this JitteredResize acting on the training dataset
4. Observe the same exception
Version:
Latest off of master
Anything else:
Just to add to this, I've tried the following alternative approach too:
We could drop/filter any inputs that have no bounding boxes left at the end of the augmentations; however, you could then end up with a completely empty batch, or an odd-sized batch for some training steps.
Additionally, there doesn't seem to be any clean way in TensorFlow to filter these out. Maybe some combination of tf.reduce_any and tf.boolean_mask could be used to remove the images and bounding boxes (boxes + labels) from the input dict.
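As a sketch of that idea, assuming the KerasCV-style input dict {"images": ..., "bounding_boxes": {"boxes": ..., "classes": ...}} and that pruned boxes are marked with class -1 (an assumption on my part, adjust to the real sentinel), filtering before batching would also sidestep the odd-sized-batch problem:

```python
import tensorflow as tf

# Hedged sketch: drop unbatched examples whose boxes were all pruned.
# Assumes the KerasCV input dict format and that pruned/padded boxes
# carry class -1 (an assumption).
def has_boxes(sample):
    classes = sample["bounding_boxes"]["classes"]
    return tf.reduce_any(classes >= 0)

# Filtering before .batch() keeps every batch full-sized:
# train_ds = train_ds.filter(has_boxes).batch(BATCH_SIZE)
```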