
Support Distributed Training for Fine-tuning Stable Diffusion Example #1802

Open
clintg6 opened this issue Mar 17, 2024 · 5 comments

Comments

@clintg6

clintg6 commented Mar 17, 2024

Issue Type

Documentation Feature Request

Source

source

Keras Version

Keras 2.13.1

Custom Code

Yes

OS Platform and Distribution

Linux Ubuntu 22.04

Python version

3.9

GPU model and memory

No response

Current Behavior?

The current example documentation for Fine-tuning Stable Diffusion only demonstrates how to fine-tune on a single GPU. At the end of the documentation, Sayak concludes that, to improve the quality of the model's generations, the next step would be: "To enable that, having support for gradient accumulation and distributed training is crucial. This can be thought of as the next step in this tutorial."

Reading the current TensorFlow docs, it is not trivial to work out how to update a custom Trainer class for distributed training, since the documentation mostly covers compiled models. It would be nice to have a section going into greater detail on integrating a Trainer class with distributed training. This example could then also serve as an additional example of how to perform distributed training in Keras with custom Trainer classes.

I would like to update the documentation and associated files for this example with a new section that demonstrates how to fine-tune a Stable Diffusion model in Keras/TensorFlow with distributed training across multiple GPUs using a custom Trainer class. This would involve:

  • an introduction to how distributed training works with Keras/TensorFlow
  • modifications to the Trainer class/loss function to handle multiple GPUs (a rough sketch follows this list)
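
A minimal sketch of the kind of Trainer-class change this would cover, assuming the tutorial's Trainer is a `keras.Model` subclass with a custom `train_step` (the `DistributedTrainer` name, its constructor arguments, and the elided `diffusion_model` below are illustrative placeholders, not the tutorial's actual code):

```python
import tensorflow as tf
from tensorflow import keras


class DistributedTrainer(keras.Model):
    """Illustrative Trainer variant whose loss is safe to use under tf.distribute."""

    def __init__(self, diffusion_model, **kwargs):
        super().__init__(**kwargs)
        self.diffusion_model = diffusion_model
        # Reduction.NONE keeps a per-example loss so it can be averaged
        # over the *global* batch size across all replicas.
        self.loss_fn = keras.losses.MeanSquaredError(
            reduction=keras.losses.Reduction.NONE
        )

    def train_step(self, inputs):
        images, targets = inputs
        with tf.GradientTape() as tape:
            predictions = self.diffusion_model(images, training=True)
            per_example_loss = self.loss_fn(targets, predictions)
            # Average over the global batch so gradients are reduced
            # correctly when summed across replicas.
            loss = tf.nn.compute_average_loss(per_example_loss)
        grads = tape.gradient(loss, self.diffusion_model.trainable_variables)
        self.optimizer.apply_gradients(
            zip(grads, self.diffusion_model.trainable_variables)
        )
        return {"loss": loss}


# Model/variable creation must happen inside the strategy scope.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    trainer = DistributedTrainer(diffusion_model=...)  # model construction elided
    trainer.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5))

# trainer.fit(train_dataset, epochs=...)  # fit() shards the tf.data pipeline per replica
```

The key points are that all variables are created under `strategy.scope()` and that the per-example loss is averaged with `tf.nn.compute_average_loss`, so per-replica gradients combine into the same update a single-GPU run would produce; `fit()` then takes care of distributing the input pipeline.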

Standalone code to reproduce the issue or tutorial link

https://keras.io/examples/generative/finetune_stable_diffusion/

Relevant log output

No response

@sachinprasadhs
Collaborator

Adding author of the notebook @sayakpaul

@sayakpaul
Contributor

Won't have the bandwidth to work on this.

@clintg6
Author

clintg6 commented May 3, 2024

I have already expanded the code in the Trainer class for multi-GPU training, as well as the text introducing the reader to distributed training, so it shouldn't take much time; it just needs review. @sayakpaul

@sachinprasadhs
Collaborator

@clintg6, please create a PR; the Keras team would be happy to review it.

@clintg6
Author

clintg6 commented May 3, 2024

Thank you, will do.
