
Question about automatic hyper-parameter tuning toolkit #3

Open
ZhangYuanhan-AI opened this issue Sep 28, 2022 · 4 comments

@ZhangYuanhan-AI

Hi, thanks for this great benchmark.

I have a question about the hyper-parameter tuning.
See the screenshot below ([screenshot: hyperparameter sweep results]): the training accuracy and validation accuracy are both good at the hyperparameter-sweeping stage, and the toolkit chooses "Learning rate 0.01, L2 lambda 0.0001" as the best combination for the final 50 epochs.

However, the performance of the model trained with the selected hyperparameters is extremely bad ([screenshot: final-run results]).

Have you ever faced this problem? It mainly shows up on the dtd, fer2013, and resisc45 datasets, and usually occurs when a relatively large learning rate (such as 0.01) is selected in the sweeping stage.

I don't think this comes from a gap between the validation set and the test set, because the training accuracy is also bad during the final 50 epochs of training.

@haotian-liu
Collaborator

Hi @ZhangYuanhan-AI, thank you for your interest in our work (and sorry for the late response).

We encountered a similar issue in our early development stage. One cause we found is a mismatch between the order of the sample sequence in the hyperparameter-search runs and in the final run: some of the searched hyperparameters may not be stable, and the gradient explodes when it hits a specific combination of samples in a batch.

We fixed the sample sequence order in the latest released toolkit, and the issue has been largely alleviated, although the behavior can still differ for each individual checkpoint and model architecture.
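
For readers still on an older toolkit version, here is a minimal sketch (not the toolkit's actual code) of how a fixed sample order can be enforced in plain PyTorch with a seeded generator; the dataset and seed value below are placeholders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for a benchmark dataset.
dataset = TensorDataset(torch.randn(1000, 16), torch.randint(0, 10, (1000,)))

# Seeding a dedicated generator makes the shuffled sample order reproducible,
# so the search runs and the final run see the batches in the same order.
g = torch.Generator()
g.manual_seed(42)  # assumed seed value, purely illustrative

loader = DataLoader(dataset, batch_size=64, shuffle=True, generator=g)
```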

You can set a gradient clipping value at TRAIN.CLIP_GRAD_NORM that is large enough that it only kicks in to handle gradient explosions on a few extremely bad batches, while the other batches train normally. This also proved useful while we were developing the toolkit.
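
For anyone reading along, a minimal sketch of what that amounts to in a plain PyTorch training loop; the model, data, and the max_norm value of 5.0 are placeholders, and in the toolkit the threshold is whatever you set at TRAIN.CLIP_GRAD_NORM:

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                      # stand-in for the actual checkpoint
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
max_norm = 5.0                                  # e.g. the value set at TRAIN.CLIP_GRAD_NORM (placeholder)

for _ in range(10):                             # stand-in for iterating over the real data loader
    x, y = torch.randn(64, 128), torch.randint(0, 10, (64,))
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    # Rescales gradients only when their total norm exceeds max_norm, so
    # ordinary batches are left untouched and only exploding ones get clipped.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
```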

Thank you! And it would be great if anyone who has a different solution to a similar issue could share it; we'd be happy to incorporate it into our docs or toolkit :)

@ZhangYuanhan-AI
Author


Hi Haotian,

Thanks for your suggestion. I'll try gradient clipping first and see whether it helps.

@Luodian

Luodian commented Oct 17, 2022

Hi, we are trying to run our models on Elevater. Do you have a recommended default value for TRAIN.CLIP_GRAD_NORM to avoid explosion? Something like 1.0?

Thanks!

@haotian-liu
Collaborator

Hi @Luodian, different models have different gradient-norm statistics, depending on the model architecture, pretraining approach, etc.

You can select a gradient clipping value at TRAIN.CLIP_GRAD_NORM that is large enough that it only handles gradient explosions on a few extremely bad batches, while the other batches train normally.

To do this, we'd recommend first looking at the general gradient-norm statistics of your model over the first one or two epochs on several datasets, and then choosing a gradient clipping value similar to or slightly larger than those norms (so that most parameter updates are unaffected by the clipping).
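
One hedged sketch of that measurement step in plain PyTorch (the model, data, and number of steps are placeholders): it records the per-batch total gradient norm without actually clipping anything, so you can inspect the typical range before choosing a value.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                      # stand-in for the real checkpoint
criterion = nn.CrossEntropyLoss()

grad_norms = []
for _ in range(100):                            # stand-in for the first one or two epochs
    x, y = torch.randn(64, 128), torch.randint(0, 10, (64,))
    model.zero_grad()
    criterion(model(x), y).backward()
    # With max_norm=inf nothing is actually clipped; the call just returns the
    # total gradient norm, so this is a pure measurement pass.
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=float("inf"))
    grad_norms.append(total_norm.item())

norms = torch.tensor(grad_norms)
print(f"mean={norms.mean().item():.3f}  "
      f"p95={norms.quantile(0.95).item():.3f}  "
      f"max={norms.max().item():.3f}")
# Then set TRAIN.CLIP_GRAD_NORM similar to or slightly above these typical norms.
```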

Thanks.
