
Pytorch: Speed up get_log_probs function #5

Open
binhvq opened this issue May 3, 2019 · 2 comments

binhvq commented May 3, 2019

Hi lopuhin,
Thanks for sharing this.
I used your model to predict the next word, but I found the prediction speed relatively slow, probably because the lm.inference.get_log_probs function computes probabilities for every word in the sentence. Meanwhile, predicting the next word only requires the probability distribution at the last position.
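For context, a minimal sketch of the idea, assuming a model whose forward pass returns logits of shape (seq_len, vocab_size) for a 1-D tensor of token ids (the model call and shapes are assumptions, not necessarily this repo's actual API):

```python
import torch
import torch.nn.functional as F

def next_word_log_probs(model, token_ids: torch.Tensor) -> torch.Tensor:
    """Log-probabilities for the next token only.

    Assumes ``model(token_ids)`` returns logits of shape
    (seq_len, vocab_size) for a 1-D tensor of token ids.
    """
    with torch.no_grad():
        logits = model(token_ids)  # (seq_len, vocab_size)
    # Only the distribution at the last position is needed for
    # next-word prediction; earlier positions can be skipped.
    return F.log_softmax(logits[-1], dim=-1)  # (vocab_size,)
```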


lopuhin commented May 3, 2019

> I found the prediction speed relatively slow

Thanks for the feedback. Did you find it slow compared to other similar models or implementations?

> Meanwhile, predicting the next word only requires the probability distribution at the last position.

Right. But we still need to process all the previous words. I see that we could avoid computing the softmax for all but the last word; I'm not sure how much difference it will bring.
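A minimal sketch of that change, assuming get_log_probs currently applies a log-softmax over logits of shape (seq_len, vocab_size) at every position:

```python
import torch
import torch.nn.functional as F

def last_word_log_probs(logits: torch.Tensor) -> torch.Tensor:
    """Log-softmax at the final position only.

    Assumes ``logits`` has shape (seq_len, vocab_size). The full
    ``F.log_softmax(logits, dim=-1)`` over every position is replaced
    by a softmax over the last row alone, which saves the normalization
    work for all earlier positions.
    """
    return F.log_softmax(logits[-1], dim=-1)  # (vocab_size,)
```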


lopuhin commented Jul 20, 2020

FWIW, there is a big speedup in text generation in 4c18649 - it speeds up generation of multiple tokens, while single-token generation has the same speed.
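The commit itself is not reproduced here, but a common way to get this kind of multi-token speedup is to cache the attention state ("past") between steps, so that each step only processes the newest token instead of re-running the whole prefix. A minimal sketch under that assumption, with a hypothetical model that accepts a ``past`` keyword and returns ``(logits, past)`` (not necessarily what 4c18649 does):

```python
import torch

def generate(model, token_ids: torch.Tensor, n_tokens: int) -> torch.Tensor:
    """Greedy generation with a cached past.

    ``model`` is hypothetical: it is assumed to accept a ``past`` keyword
    and return ``(logits, past)``. After the first step, only the newest
    token is fed in, so the per-step cost no longer grows with the prefix.
    """
    past = None
    inputs = token_ids
    for _ in range(n_tokens):
        with torch.no_grad():
            logits, past = model(inputs, past=past)
        next_id = logits[-1].argmax()
        token_ids = torch.cat([token_ids, next_id.view(1)])
        inputs = next_id.view(1)  # subsequent steps see only the new token
    return token_ids
```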
