
Training loss increases while fine-tuning! #203

Open
Debolena7 opened this issue Feb 7, 2023 · 0 comments


Debolena7 commented Feb 7, 2023

I have tried to fine-tune the authors' model for object detection on the COCO dataset, keeping everything the same as the authors. I used /configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py and configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_1x_coco.py with their corresponding pre-trained checkpoints to initialize the model, and fine-tuned for 10 and 5 epochs respectively. In the first case the training loss shows an increasing trend until epoch 6 and is then roughly stagnant; in the second case it increases until epoch 3 and then decreases slightly. Why is this happening? Why should the training loss increase?

However, if I randomly initialize the model instead, the training loss decreases as expected.

With checkpoint mask_rcnn_swin_tiny_patch4_window7.pth: [screenshot of training-loss log]

With checkpoint mask_rcnn_swin_tiny_patch4_window7_1x.pth: [screenshot of training-loss log]
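
For reference, a minimal sketch of how a fine-tuning setup along these lines is typically written in MMDetection-style repos such as this one. The derived config name, the `checkpoints/` path, and the 10-epoch schedule are placeholders based on the description above, since the exact config overrides and commands used are not shown in the issue:

```python
# finetune_mask_rcnn_swin_tiny_coco.py -- hypothetical derived config, not the one used in the issue.
# Inherit the released 3x config and override only the initialization and the schedule.
_base_ = './mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py'

# Initialize the whole detector (backbone + neck + heads) from the released checkpoint;
# `load_from` is the standard MMDetection mechanism for fine-tuning from a .pth file.
load_from = 'checkpoints/mask_rcnn_swin_tiny_patch4_window7.pth'

# Shorten the schedule to the 10 epochs mentioned above.
# (Older MMDetection versions use `total_epochs = 10` instead of the runner dict.)
runner = dict(type='EpochBasedRunner', max_epochs=10)
```

Training would then be launched with something like `python tools/train.py <path-to-this-config>`; dropping the `load_from` line corresponds to the randomly initialized comparison described above.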
