Reproducible implementation of Talking Head Anime from a Single Image.

Currently supports:
- Processing 3D models into a Talking Head Anime trainable dataset.
- Training Talking Head Anime.
- Talking Head Anime inference.

Both Docker and Conda environments are supported; see Environment.md.
You can train Talking Head Anime with two different types of datasets:
- Images Dataset (recommended)
- 3D-models Dataset

Check dataset.ipynb for details; you can generate your own dataset by following it.
```shell
python train_morpher.py --train
python train_rotator.py --train
```
TODO
Trainer configs are in train_morpher.yaml and train_rotator.yaml.

General training options live in the logging section of the train_*.yaml files.
```yaml
logging:
  log_dir: "./logs/"
  seed: "16" # use your own seed for each training run
  nepochs: 10000 # maximum epochs
  device: cuda
  save_optimizer_state: False
  freq: 500 # logging frequency (steps)
  save_files: [
    '*.py',
    '*.sh',
    'configs/*.*',
    'configs/dataset/*.*',
    'datasets/*.*',
    'models/*.*',
    'utils/*.*',
  ]
```
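As a rough sketch of how the logging section is consumed (the parsed config is shown here as a plain dict; the repo presumably loads it with a YAML parser, and the exact loading code is an assumption):

```python
import random

# Hypothetical: the `logging` section of train_*.yaml after parsing into a dict
logging_cfg = {
    "log_dir": "./logs/",
    "seed": "16",          # note: quoted in the YAML, so it arrives as a string
    "nepochs": 10000,
    "device": "cuda",
    "save_optimizer_state": False,
    "freq": 500,
}

seed = int(logging_cfg["seed"])  # cast before seeding
random.seed(seed)

# Logs for a run land under ./logs/<seed>, which is the directory
# you later point TensorBoard's --logdir at.
log_root = logging_cfg["log_dir"] + str(seed)
print(log_root)  # ./logs/16
```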
Model configs are in the models section of the train_*.yaml files.
- Change the model class if needed.
- Change the optimizer class, lr, and betas if needed.
```yaml
models:
  FaceMorpher:
    class: models.tha1.FaceMorpher
    optim:
      class: torch.optim.Adam
      kwargs:
        lr: 1e-4
        betas: [ 0.5, 0.999 ]
```
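The `class:` entries are dotted import paths, which are typically resolved into real classes at runtime. A minimal sketch of such a resolver (the function name is hypothetical; a stdlib class is used so the example runs without the repo or torch installed):

```python
import importlib

def resolve_class(path: str):
    """Resolve a dotted 'module.ClassName' config string into a class object."""
    module_name, _, cls_name = path.rpartition(".")
    return getattr(importlib.import_module(module_name), cls_name)

# Strings like 'models.tha1.FaceMorpher' or 'torch.optim.Adam' would be
# resolved the same way; here a stdlib class stands in for them.
cls = resolve_class("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```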
- Each dataset should have a corresponding .yaml config file.
- Each of those .yaml files should be listed in the datasets.*.datasets list of the general train config file.
```yaml
datasets:
  train: # configs for the training dataset
    class: datasets.base.MultiDataset
    datasets: [
      'configs/datasets/custom.yaml', # add the path to your dataset config file here
    ]
    mode: train
    batch_size: 25
    shuffle: True
    num_workers: 8
  eval: # configs for the eval dataset
    class: datasets.base.MultiDataset
    datasets: [
      'configs/datasets/custom.yaml', # add the path to your dataset config file here
    ]
    mode: eval
    batch_size: 25
    shuffle: False
    num_workers: 2
```
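A class like `datasets.base.MultiDataset` presumably merges every dataset in the list behind a single index space. A pure-Python sketch of that index mapping (the repo's actual implementation likely wraps torch datasets and may differ):

```python
import bisect
import itertools

class MultiDataset:
    """Concatenate several indexable datasets behind one index space (sketch)."""

    def __init__(self, datasets):
        self.datasets = datasets
        # Cumulative sizes: dataset i covers global indices [cum[i-1], cum[i])
        self.cum = list(itertools.accumulate(len(d) for d in datasets))

    def __len__(self):
        return self.cum[-1] if self.cum else 0

    def __getitem__(self, idx):
        ds_idx = bisect.bisect_right(self.cum, idx)   # which dataset owns idx
        prev = self.cum[ds_idx - 1] if ds_idx > 0 else 0
        return self.datasets[ds_idx][idx - prev]      # local index within it

merged = MultiDataset([["a", "b"], ["c", "d", "e"]])
print(len(merged), merged[3])  # 5 d
```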
Training logs are written with TensorBoard. Run `tensorboard --logdir ./logs/<YOUR SEED> --bind_all` to view them.
Check Inference.ipynb.
NEEDS WORK
Special thanks to:
- MINDsLab Inc. for GPU support

TODO:
- Combiner not trained yet
- Code cleanup