
What is the approach for generating new target voice samples (Voice conversion) using the pretrained models? #6

Open
nischal-sanil opened this issue Jan 15, 2021 · 0 comments

nischal-sanil commented Jan 15, 2021

Given a new input sample and a target speaker sample, is it possible to use the pretrained models to do voice conversion?

Since the speaker embedding in the vocoder has to be learned, I was considering training just the vocoder to learn an embedding for the new target speaker, and then using convert.py to get the voice-converted output. Can it be done this way? If not, please suggest how to do it.
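To make the proposed workflow concrete, here is a toy sketch of the idea: keep every pretrained weight frozen and optimize only a fresh embedding vector for the new target speaker. Everything here is hypothetical — the parameter names, embedding size, and squared-error loss are stand-ins, not this repo's actual vocoder code.

```python
import random

EMB_DIM = 4  # hypothetical embedding size, not taken from this repo

def fine_tune_speaker_embedding(target, steps=200, lr=0.1):
    """Learn a new speaker embedding by gradient descent on a toy loss.

    Only the embedding is updated; the pretrained weights are never touched.
    """
    emb = [random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)]
    for _ in range(steps):
        # Gradient of the stand-in loss sum((e - t)^2) w.r.t. e is 2*(e - t).
        grad = [2 * (e - t) for e, t in zip(emb, target)]
        emb = [e - lr * g for e, g in zip(emb, grad)]
    return emb

# Hypothetical frozen vocoder weights (a plain dict standing in for the model).
pretrained = {"vocoder.conv1": [0.5, -0.2], "vocoder.conv2": [1.0]}
frozen_copy = {k: list(v) for k, v in pretrained.items()}

# Stand-in for the embedding that best reconstructs the new speaker's audio.
target = [0.3, -0.7, 0.1, 0.9]

new_emb = fine_tune_speaker_embedding(target)

# The pretrained weights stayed frozen; only the new embedding was learned.
assert pretrained == frozen_copy
```

In a real run, the "loss" would be the vocoder's reconstruction loss on the new speaker's recordings, and the learned embedding would then be passed to convert.py along with the unchanged pretrained checkpoints.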

Thanks,
