Reporting a minor bug #71

Open · hubert0527 opened this issue Jan 22, 2022 · 1 comment
@hubert0527

Hi, thanks for making the code public!
I found a minor bug here. The variable self.pos_embed keeps the CPU copy of the positional embedding; this is the root cause of the .to() call needed during the forward pass. To fix it, you can instead write x = x + self.pos_embed_1, where self.pos_embed_1 is the on-GPU copy automatically registered by PyTorch.

This bug adds CPU-to-GPU transfer time during training, though I am not sure how much it costs in practice.
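
To make the report concrete, here is a minimal sketch of the pattern being described, assuming the module keeps both a registered parameter and a plain-list alias to it (the class name and tensor shapes below are illustrative; only pos_embed and pos_embed_1 come from the code):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, seq_len=64, embed_dim=384):
        super().__init__()
        # Registered nn.Parameter: .cuda()/.to(device) and DataParallel
        # replication keep this copy on the right device.
        self.pos_embed_1 = nn.Parameter(torch.zeros(1, seq_len, embed_dim))
        # A plain Python list is invisible to nn.Module, so this entry
        # can be left pointing at a stale CPU copy after the module moves.
        self.pos_embed = [self.pos_embed_1]

    def forward(self, x):
        # Buggy version: forces a device transfer on every forward pass.
        #   x = x + self.pos_embed[0].to(x.device)
        # Fix: use the registered, on-device parameter directly.
        return x + self.pos_embed_1
```

(If a list is really needed, nn.ParameterList would be the idiomatic way to keep the entries registered with the module.)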

@yifanjiang19 (Contributor)

Thanks so much! I'll fix it soon
