
Can we just use FastSpeech for inference as a baseline result? #9

Open
Maoshuiyang opened this issue Jun 15, 2022 · 1 comment

Comments

@Maoshuiyang

Hi Keon, thanks so much for sharing this wonderful project. I am wondering whether we can use just the FastSpeech part for inference? Looking forward to your reply.

@keonlee9420 (Owner)

Hi @Maoshuiyang, thanks for your interest. You can certainly use it that way by modifying some parts of the code, but a better option is to check out this repo: https://github.com/keonlee9420/Comprehensive-Transformer-TTS. It contains the same FastSpeech architecture as this repo, and there the model is designed to generate speech on its own, rather than feeding its output as an auxiliary input to another model. Hope it helps!
