[Not for merge] Add full-chunk mode CTC decoding for models from WeNet #872

Open · wants to merge 1 commit into base: v2.0-pre
Conversation

@csukuangfj (Collaborator) commented Nov 11, 2021

This PR shows that we can also use k2 for decoding with models from other frameworks.

For n-gram LM rescoring and attention decoder rescoring, it shares a lot of code with the following files, so it is straightforward to implement.


It's for demonstration only and not ready for merge.

If others have a need for full-chunk batch decoding on GPU with models from WeNet or other frameworks, we can support that.
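For reference, below is a minimal sketch of what full-chunk (offline) CTC one-best decoding with k2 could look like, assuming the WeNet encoder and CTC head have already produced per-frame log-probabilities. The function name, beam settings, and tensor shapes are illustrative and not taken from this PR:

```python
import k2
import torch


def ctc_one_best_full_chunk(log_probs: torch.Tensor,
                            num_frames: torch.Tensor,
                            max_token_id: int):
    """Offline (full-chunk) CTC one-best decoding with k2.

    Args:
      log_probs: (N, T, C) log-probabilities, e.g. the output of a WeNet
        encoder followed by the CTC head and log_softmax.
      num_frames: (N,) number of valid frames for each utterance.
      max_token_id: the largest token id in the vocabulary (blank is 0).
    Returns:
      A list of N token-id lists, one per utterance.
    """
    device = log_probs.device
    N = log_probs.size(0)

    # Use the CTC topology as the decoding graph; an HLG graph could be
    # used instead for lexicon/LM-constrained decoding.
    decoding_graph = k2.ctc_topo(max_token_id, device=device)

    # Each row is (utterance index, start frame, number of frames);
    # sort by decreasing length, as is done in similar k2 recipes.
    supervision_segments = torch.stack(
        (torch.arange(N, dtype=torch.int32),
         torch.zeros(N, dtype=torch.int32),
         num_frames.to(torch.int32).cpu()),
        dim=1)
    order = torch.argsort(supervision_segments[:, 2], descending=True)
    supervision_segments = supervision_segments[order]

    dense_fsa_vec = k2.DenseFsaVec(log_probs, supervision_segments)

    lattice = k2.intersect_dense_pruned(
        decoding_graph,
        dense_fsa_vec,
        search_beam=20.0,
        output_beam=8.0,
        min_active_states=30,
        max_active_states=10000,
    )

    best_path = k2.shortest_path(lattice, use_double_scores=True)

    # The aux_labels of ctc_topo map repeats and blanks to epsilon (0),
    # so dropping 0 and the final -1 yields the output token sequence.
    hyps = [None] * N
    for i in range(N):
        aux_labels = best_path[i].aux_labels.tolist()
        hyps[order[i].item()] = [t for t in aux_labels if t > 0]
    return hyps
```

Instead of taking the one-best path, the lattice returned by k2.intersect_dense_pruned could also be kept for n-gram LM rescoring or attention decoder rescoring, as mentioned above.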

@pzelasko (Contributor)

Just for clarification: does "full-chunk decoding" mean "offline decoding"?

@csukuangfj (Collaborator, Author) commented Nov 11, 2021

> Just for clarification: does "full-chunk decoding" mean "offline decoding"?

Yes, you are right. The name "full-chunk decoding" comes from WeNet and can be misleading at first sight.
I think it is equivalent to full-utterance (i.e., full-context) decoding.

@csukuangfj mentioned this pull request on Nov 1, 2022