GMM training result after 200 epoch looks worse than the pretrain model #15

Open

thotd21 opened this issue Jun 9, 2020 · 4 comments

thotd21 commented Jun 9, 2020

I added some new clothes and ran training step 1 (using your provided data pairs, dataset/data_pair.txt) with the following command: python train.py --train_mode gmm
After 200 epochs, I passed the generated checkpoint at net_model/gmm_checkpoint/generator/checkpoint_G_epoch_199.tar to the --resume_gmm option when running demo.py.
I got the following result:

(screenshot: try-on results after 200 epochs; the bottom two rows are the new clothes I added)

The red-circled area looks different from the pretrained model's result.
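For reference, the checkpoint path passed to --resume_gmm above follows the per-epoch naming scheme visible in this thread. A minimal sketch of a helper that builds that path (the function name and the default root directory are assumptions based only on the path quoted here, not code from the repo):

```python
def gmm_checkpoint_path(epoch, root="net_model/gmm_checkpoint/generator"):
    """Build the path train.py appears to use for each GMM epoch checkpoint."""
    return f"{root}/checkpoint_G_epoch_{epoch}.tar"

# Epoch 199 is the last checkpoint written by a 200-epoch run (epochs 0..199).
print(gmm_checkpoint_path(199))
```

This also explains why a 200-epoch run ends at checkpoint_G_epoch_199.tar rather than ..._200.tar.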

So I have some questions:

  1. Is this the right way to use the newly trained GMM model?
  2. Which training options should I use to reproduce the results reported in the paper?
  3. How many epochs did you run for each training step (GMM, Parsing, Appearance)?
  4. Did I make any mistake in the GMM training?
@AIprogrammer
Collaborator

  1. I didn't find anything wrong in your description.
  2. You can retrain the model directly by following the guidance in train.sh, step by step.
  3. The default training length is 200 epochs, but empirically the GMM model converges more quickly. The difference in the red circle is mainly caused by the GMM and parsing-transformation parts. You can train these parts for more iterations to see whether the result improves.
  4. Our GMM training procedure is similar to VITON and CP-VTON, which you can refer to for more details on this part.

@iamrishab

iamrishab commented Aug 16, 2020

Hi @AIprogrammer
Thank you for sharing this great work!
Can you please confirm one thing: after we train the individual components, before the end-to-end step mentioned in the paper, do we need to move the earlier checkpoints into the pretrained directory, or does running train.sh handle everything required? Thanks in advance.
Also, which checkpoints should we use after running train.sh?

@iamrishab

Never mind, I figured it out. Thanks!

@jlakhan1010

@iamrishab, I am facing the same issue. I am actually trying lower-body clothes. I trained the model on a sample dataset (40 images) and changed the --resume checkpoints as follows:
resume_gmm = "pretrained_checkpoint/step_009000.pth"
resume_G_parse = '/content/drive/MyDrive/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving/net_model/parsing_checkpoint/generator/checkpoint_G_epoch_12_loss_3.80623_pth.tar'
resume_G_app_cpvton = '/content/drive/MyDrive/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving/net_model/joint_checkpoint/generator_appearance/checkpoint_G_epoch_12_loss_4.16898_pth.tar'
resume_G_face = '/content/drive/MyDrive/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving/net_model/face_checkpoint/generator/checkpoint_G_epoch_12_loss_1.54948_pth.tar'

But my results are completely blank.

(screenshot: blank output image)

Please help me.
Thanks in advance.
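One quick sanity check when outputs come back blank is to confirm that every --resume_* path actually resolves to a file before launching demo.py, since a silently missing checkpoint can leave a generator uninitialized. A minimal pre-flight sketch (the helper name and the dictionary of options are illustrative, not part of the repo; the paths shown are the ones quoted in this thread):

```python
import os

# Checkpoint options as quoted above; extend with resume_G_app_cpvton
# and resume_G_face as needed.
ckpts = {
    "resume_gmm": "pretrained_checkpoint/step_009000.pth",
    "resume_G_parse": "/content/drive/MyDrive/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving/net_model/parsing_checkpoint/generator/checkpoint_G_epoch_12_loss_3.80623_pth.tar",
}

def missing_checkpoints(paths):
    """Return the option names whose checkpoint file does not exist on disk."""
    return [name for name, path in paths.items() if not os.path.isfile(path)]

missing = missing_checkpoints(ckpts)
if missing:
    print("Missing checkpoint files for:", ", ".join(missing))
```

Google Drive paths under /content/drive are a common failure point in Colab: if the drive is not mounted, every path is missing and the model runs from random weights.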
