Hi,
Thanks for this great project.
I trained a multi-speaker Tacotron with this repo on a new dataset and the results were good. I then tried HiFi-GAN as the vocoder from its official implementation repo, but the output was very noisy, so I suspect some parameters differ between that HiFi-GAN and my synthesizer.
When I used the HiFi-GAN from your repo instead, the result was very good!
What is the difference between these two HiFi-GAN implementations?
Another question: there are three versions of HiFi-GAN with different speeds and parameter counts (v1, v2, and v3). Which one is implemented in this repo?
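As an illustration of what "some parameters are different" usually means in this situation, here is a minimal sketch that compares the audio/mel settings the synthesizer was trained with against the vocoder's settings. The parameter names follow the official HiFi-GAN config format (jik876/hifi-gan) and the values are only placeholders; substitute the actual numbers from the two repos. Even when these all match, a difference in mel normalization (e.g. natural log vs. log10 compression, or min/max scaling) can still produce noisy output.

```python
# Hypothetical sanity check: compare the mel-spectrogram settings of the
# synthesizer and the vocoder. A mismatch in any of these is a common cause
# of noisy vocoder output. Values below are placeholders in the official
# HiFi-GAN config naming; fill in the real settings from each repo.

synthesizer_audio = {
    "sampling_rate": 22050,
    "n_fft": 1024,
    "hop_size": 256,
    "win_size": 1024,
    "num_mels": 80,
    "fmin": 0,
    "fmax": 8000,
}

vocoder_audio = {
    "sampling_rate": 22050,
    "n_fft": 1024,
    "hop_size": 256,
    "win_size": 1024,
    "num_mels": 80,
    "fmin": 0,
    "fmax": 8000,
}

for key in synthesizer_audio:
    if synthesizer_audio[key] != vocoder_audio.get(key):
        print(f"Mismatch in {key}: "
              f"synthesizer={synthesizer_audio[key]}, "
              f"vocoder={vocoder_audio.get(key)}")
```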
Another question: there are three versions of HiFi-GAN with different speeds and parameter counts (v1, v2, and v3). Which one is implemented in this repo?
The existing config and pretrained models are based on v1, but v2 and v3 can be reproduced by editing the config file.
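For reference, below is a sketch of the generator hyper-parameters that differ between the three variants, as they appear in config_v1.json, config_v2.json, and config_v3.json of the official jik876/hifi-gan repo; it is worth double-checking these against those files (and this repo's own config format) before editing anything.

```python
import json

# Generator settings that distinguish the three official HiFi-GAN variants
# (copied from the official repo's config files; verify before use).
hifigan_variants = {
    "v1": {  # largest model, best quality
        "resblock": "1",
        "upsample_rates": [8, 8, 2, 2],
        "upsample_kernel_sizes": [16, 16, 4, 4],
        "upsample_initial_channel": 512,
        "resblock_kernel_sizes": [3, 7, 11],
        "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    },
    "v2": {  # same topology as v1 but far fewer channels, much faster
        "resblock": "1",
        "upsample_rates": [8, 8, 2, 2],
        "upsample_kernel_sizes": [16, 16, 4, 4],
        "upsample_initial_channel": 128,
        "resblock_kernel_sizes": [3, 7, 11],
        "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    },
    "v3": {  # uses the shallower ResBlock2, fastest variant
        "resblock": "2",
        "upsample_rates": [8, 8, 4],
        "upsample_kernel_sizes": [16, 16, 8],
        "upsample_initial_channel": 256,
        "resblock_kernel_sizes": [3, 5, 7],
        "resblock_dilation_sizes": [[1, 2], [2, 6], [3, 12]],
    },
}

# Example: print the v2 settings to paste into this repo's config file.
print(json.dumps(hifigan_variants["v2"], indent=2))
```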