
fix lattice length of rnnt_decode #1089

Closed

Conversation

@glynpu (Contributor) commented Nov 3, 2022

The logprobs of the final frame do not contribute to lattice generation.
This PR fixes that issue.

With the original code:

[image: lattice with num_frames = 1]
[image: lattice with num_frames = 10]

With this PR:

[image: lattice with num_frames = 1]
[image: lattice with num_frames = 10]

The fix appends one extra `Advance()` call with all-zero logprobs after the last real frame (excerpt from the diff):

```c++
    states_.TotSize(1),
    config_.vocab_size,
    0);
Advance(dummy_logprobs);
```
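To make the symptom concrete, here is a toy sketch (not k2's implementation or API) of a streaming decoder that commits arcs for frame t only when frame t+1 arrives. After T real frames, only T-1 frames appear in the lattice; one extra "dummy" advance with zero logprobs flushes the final real frame, which is the idea behind this PR. `ToyStreamingDecoder` and `decode` are hypothetical names for illustration.

```python
import numpy as np

class ToyStreamingDecoder:
    """Toy model of the deferred-commit behavior described in this PR."""

    def __init__(self, vocab_size):
        self.vocab_size = vocab_size
        self.pending = None          # logprobs of the not-yet-committed frame
        self.lattice_frames = []     # frames whose arcs made it into the lattice

    def advance(self, logprobs):
        # Committing lags by one frame, mirroring the bug: the logprobs
        # of the final frame never reach the lattice on their own.
        if self.pending is not None:
            self.lattice_frames.append(self.pending)
        self.pending = logprobs

def decode(num_frames, vocab_size=5, dummy_advance=False):
    dec = ToyStreamingDecoder(vocab_size)
    for _ in range(num_frames):
        dec.advance(np.random.randn(vocab_size))
    if dummy_advance:
        # The fix: one extra advance with zero logprobs, so the last
        # real frame gets committed.
        dec.advance(np.zeros(vocab_size))
    return len(dec.lattice_frames)

print(decode(10))                      # 9 frames reach the lattice
print(decode(10, dummy_advance=True))  # all 10 frames reach the lattice
```

This also matches the screenshots above: with num_frames = 1 the original code produces an empty lattice, while the PR's dummy advance recovers the single frame.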
A collaborator commented:

The current implementation will cause an error when decoding chunk by chunk; I think you need to delete the states generated by the dummy advance before flushing back to the streams.

Another collaborator commented:

I would say a dummy advance is a good way to fix this issue.

@pkufool (Collaborator) commented Nov 3, 2022

BTW, we'd better make this dummy advance run only once per sequence (i.e. on the last chunk). If we do it for every chunk, it will add significant overhead.
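The gating suggested above can be sketched as follows. This is a self-contained toy (none of these names are k2's real API): each chunk's frames are buffered with a one-frame commit delay, and the flushing dummy advance runs only after the final chunk, so the overhead is one extra step per utterance rather than per chunk.

```python
import numpy as np

def decode_chunked(chunks, vocab_size=5):
    """chunks: list of 2-D arrays, each (frames_in_chunk, vocab_size).

    Returns (frames committed to the lattice, dummy advances performed).
    """
    pending = None
    committed = []           # frames whose logprobs contributed to the lattice
    num_dummy_advances = 0
    for i, chunk in enumerate(chunks):
        for frame in chunk:
            if pending is not None:
                committed.append(pending)
            pending = frame
        if i == len(chunks) - 1 and pending is not None:
            # Flush only after the final chunk. Running this per chunk
            # would both add overhead and insert bogus states mid-stream,
            # which is the concern raised in the review above.
            num_dummy_advances += 1
            committed.append(pending)
            pending = None
    return len(committed), num_dummy_advances

chunks = [np.random.randn(4, 5) for _ in range(3)]  # 12 frames in 3 chunks
print(decode_chunked(chunks))  # (12, 1): all frames committed, one dummy advance
```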

@glynpu mentioned this pull request Jan 4, 2023

@glynpu (Contributor, Author) commented Jan 4, 2023

Closing this since we figured out a faster version: #1134

@glynpu closed this Jan 4, 2023