First, it's important to note that the current fine-tuning code is not yet in a usable state.
Regarding your question about the dataset format, your understanding is correct. The configuration you described is appropriate.
As for the dataset size, there's no precise limitation or recommended size. Modern TTS models are complex with multiple trainable modules, each potentially requiring different amounts of data and configurations. For example, simple embedding fine-tuning might only need 10 voice samples, but for fine-tuning the GPT module, the amount of data needed depends on your training objective. If you're just adding a new voice, 100 samples should be sufficient. However, if you need to train instructional capabilities or enhance prompt following, you might need more.
A simple suggestion would be: if the dataset quality is poor, it's better to have more data. If the quality is high, then even a small amount of data (less than 30 samples) could be enough.
By the way, almost all of the training code in this repository comes from this PR: 2noise/ChatTTS#680. I've only made simple modifications to adapt it and pre-test the entire forge inference system (because we've made some changes to ChatTTS and have an internal .spkv1.json speaker file format).
Confirmation checklist
Your issue
Hi,
I am planning to fine-tune ChatTTS using my own dataset, and I would like to confirm a few details regarding the data format and requirements.
1. Data Structure and .list File Format
Based on the documentation and examples, I have organized my data as follows:
File Structure
.list File Format
Each line in the `.list` file is formatted as `filepath|speaker|lang|text`, where:
- `filepath`: Relative path to the audio file (relative to the directory containing the `.list` file).
- `speaker`: Name of the speaker.
- `lang`: Language code (e.g., `ZH` for Chinese, `EN` for English).
- `text`: Transcription of the audio content.
Example:
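To make the field layout concrete, here is a minimal sketch of parsing one manifest line into its four fields. The file path, speaker name, and transcript below are made-up placeholders, not values from any real dataset.

```python
def parse_list_line(line: str) -> dict:
    """Split a `filepath|speaker|lang|text` manifest line into a dict.

    The text field may itself contain `|`, so split at most 3 times.
    """
    filepath, speaker, lang, text = line.rstrip("\n").split("|", 3)
    return {"filepath": filepath, "speaker": speaker, "lang": lang, "text": text}

# Hypothetical example line (path and speaker are illustrative only):
sample = "wavs/0001.wav|alice|ZH|你好，欢迎使用。"
record = parse_list_line(sample)
print(record["lang"])  # ZH
```

Note the `split("|", 3)`: limiting the split count keeps any `|` characters inside the transcription from breaking the parse.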
Could you please confirm if this structure and format are correct?
2. Audio Data Specifications
I am planning to use 100 audio files, each approximately 10 seconds long, with a sampling rate of 24000 Hz for training.
Is this a suitable setup for fine-tuning the model? Are there any specific recommendations or requirements?
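A quick way to sanity-check clips against the setup described above (24000 Hz, roughly 10 s each) is the standard-library `wave` module. The sketch below synthesizes a placeholder 10-second mono tone so it is self-contained; in practice you would open your own files instead. The filename is made up.

```python
import math
import struct
import wave

SAMPLE_RATE = 24000  # target rate from the question above
DURATION_S = 10      # approximate clip length

# Write a placeholder 10 s, 16-bit mono 440 Hz tone at 24 kHz.
with wave.open("clip.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    frames = b"".join(
        struct.pack("<h", int(10000 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)))
        for t in range(SAMPLE_RATE * DURATION_S)
    )
    w.writeframes(frames)

# Read it back and verify rate and duration.
with wave.open("clip.wav", "rb") as w:
    rate = w.getframerate()
    seconds = w.getnframes() / rate
print(rate, round(seconds, 1))  # 24000 10.0
```

Running this check over every path listed in the `.list` file before training catches mismatched sample rates early.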
Thank you for your assistance!