
Maybe a bug in lr_scheduler when resuming #68

Open
Outliers1106 opened this issue Mar 13, 2020 · 0 comments
@Outliers1106
In trainer.py, at line 71, self.lr_scheduler.step() should be replaced by self.lr_scheduler.step(epoch).
The source code of the step function is:

def step(self, epoch=None):
    if epoch is None:
        # default: assume exactly one epoch has elapsed since the last call
        epoch = self.last_epoch + 1
    self.last_epoch = epoch
    # recompute the learning rate for every parameter group
    for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
        param_group['lr'] = lr

Because when resuming from a checkpoint, the lr_scheduler is initialized from scratch, the parameter last_epoch keeps its default value of -1 rather than the last epoch stored in the checkpoint. Calling step() without an argument then restarts the learning-rate schedule from epoch 0 instead of continuing from where training left off.
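As a minimal standalone sketch of the problem and two possible fixes, using StepLR (the checkpoint keys and file name below are illustrative assumptions, not the template's actual checkpoint format):

import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

# ... train for a few epochs, then save a checkpoint (hypothetical layout) ...
torch.save({
    'epoch': 5,
    'model': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),  # includes last_epoch
}, 'checkpoint.pth')

# --- resuming ---
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])

# Without the line below, a freshly constructed scheduler has
# last_epoch == -1, so step() restarts the schedule from epoch 0.
scheduler.load_state_dict(checkpoint['scheduler'])

# Fix 1 (proposed in this issue): pass the epoch explicitly each call,
#     self.lr_scheduler.step(epoch)
# Fix 2 (shown here): restore the scheduler's state_dict, after which
# a plain step() continues counting from the saved epoch.
for epoch in range(checkpoint['epoch'] + 1, 20):
    # ... train one epoch ...
    scheduler.step()

Note that in PyTorch 1.4 and later, passing an epoch argument to step() is deprecated, so restoring the scheduler's state_dict is the more future-proof fix.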

SunQpark added a commit to SunQpark/pytorch-template that referenced this issue Jun 12, 2020
fix issue victoresque#68 of victoresque/pytorch-template