Commit: Merge branch 'huggingface:main' into andreyan/exporters_model_configs

Showing 31 changed files with 2,935 additions and 352 deletions.

134 changes: 134 additions & 0 deletions
examples/onnxruntime/training/stable-diffusion/text-to-image/README.md

# Stable Diffusion Text-to-Image Fine-Tuning

This example shows how to leverage ONNX Runtime Training to fine-tune a Stable Diffusion model on your own dataset.

Our team has tested fine-tuning the `CompVis/stable-diffusion-v1-4` model on the `lambdalabs/pokemon-blip-captions` dataset and achieved the following speedup:
![image](https://github.com/microsoft/onnxruntime-training-examples/assets/31260940/00f199b1-3a84-4369-924d-fd6c613bd3b4)

___Note___:

___This script is experimental. It fine-tunes the whole model, and the model often overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.___

## Running locally with PyTorch
### Installing the dependencies

___Note___: This example requires PyTorch nightly and [ONNX Runtime](https://github.com/Microsoft/onnxruntime) nightly.
```bash
pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu118
pip install onnxruntime-training --pre -f https://download.onnxruntime.ai/onnxruntime_nightly_cu118.html
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
```
Or get your environment ready via Docker: [examples/onnxruntime/training/docker/Dockerfile-ort-nightly-cu118](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/docker/Dockerfile-ort-nightly-cu118)
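
To quickly check that the ONNX Runtime training bits installed correctly, you can wrap a toy module with `ORTModule`. This is a minimal sanity check of our own, not part of the original example:

```python
# Sanity check: wrap a tiny module with ORTModule and run a forward pass.
import torch
from onnxruntime.training.ortmodule import ORTModule

toy = ORTModule(torch.nn.Linear(4, 2))  # routes forward/backward through ONNX Runtime
out = toy(torch.randn(1, 4))            # first call exports the module to ONNX, then runs it
print(out.shape)                        # torch.Size([1, 2])
```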

Then, `cd` into the example folder and run:
```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```
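
If you'd rather skip the interactive questionnaire, `accelerate` also ships a helper for writing a default config programmatically; a sketch, assuming the `accelerate>=0.16.0` pinned in `requirements.txt`:

```python
# Non-interactive alternative to `accelerate config`.
from accelerate.utils import write_basic_config

write_basic_config(mixed_precision="fp16")  # writes a default config file for this machine
```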

### Pokemon example

You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).

Run the following command to authenticate with your token:

```bash
huggingface-cli login
```
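
Alternatively, you can authenticate from Python via `huggingface_hub` (equivalent to the CLI login; the token below is a placeholder):

```python
# Programmatic alternative to `huggingface-cli login`.
from huggingface_hub import login

login(token="hf_...")  # placeholder: paste your own access token here
```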

If you have already cloned the repo, then you won't need to go through these steps.

<br>

#### Hardware
The performance metrics cited above were measured on an NVIDIA V100 8-GPU cluster.

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
<!-- accelerate_snippet_start -->
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"

accelerate launch --mixed_precision="fp16" train_text_to_image.py --ort \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```
<!-- accelerate_snippet_end -->
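
As a back-of-the-envelope check (our own illustration, not part of the script), the effective batch size of the command above is `train_batch_size * gradient_accumulation_steps * num_gpus`:

```python
# Effective batch size for the training command above (illustrative arithmetic).
train_batch_size = 1             # images per GPU per forward pass
gradient_accumulation_steps = 4  # gradients are accumulated before each optimizer step
num_gpus = 1                     # set to 8 for the multi-GPU command further below

print(train_batch_size * gradient_accumulation_steps * num_gpus)  # 4 (32 on the 8-GPU cluster)
```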

To run on your own training files, prepare the dataset according to the format required by `datasets`; you can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
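
As a sketch of that `imagefolder` layout (our illustration; the paths and captions are placeholders), the training folder holds your images plus a `metadata.jsonl` whose `text` field carries each caption, matching the caption column the script expects by default:

```python
# Build a minimal `datasets` imagefolder with captions (illustrative placeholders).
import json
from pathlib import Path

train_dir = Path("path_to_your_dataset/train")  # placeholder path
train_dir.mkdir(parents=True, exist_ok=True)

captions = [("0001.png", "a drawing of a green pokemon"),
            ("0002.png", "a cartoon creature with a sword")]
with open(train_dir / "metadata.jsonl", "w") as f:
    for file_name, text in captions:  # one caption line per image file
        f.write(json.dumps({"file_name": file_name, "text": text}) + "\n")
```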

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export TRAIN_DIR="path_to_your_dataset"

accelerate launch --mixed_precision="fp16" train_text_to_image.py --ort \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$TRAIN_DIR \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```

Once the training is finished, the model will be saved in the `output_dir` specified in the command; in this example it's `sd-pokemon-model`. To load the fine-tuned model for inference, just pass that path to `ORTStableDiffusionPipeline`:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_path = "path_to_saved_model"
pipe = ORTStableDiffusionPipeline.from_pretrained(model_path)  # loads the exported ONNX pipeline
pipe.to("cuda")

image = pipe(prompt="yoda").images[0]
image.save("yoda-pokemon.png")
```
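
The pipeline call accepts the usual Stable Diffusion generation knobs, mirroring the `diffusers` API; for example:

```python
# Optional generation controls (assumed to mirror the diffusers call signature).
image = pipe(
    prompt="yoda",
    num_inference_steps=50,  # more denoising steps: slower, usually higher quality
    guidance_scale=7.5,      # how strongly the image follows the prompt
).images[0]
image.save("yoda-pokemon-50steps.png")
```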

#### Training with multiple GPUs

`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
for running distributed training with `accelerate`. Here is an example command:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"

accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image.py --ort \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```

7 changes: 7 additions & 0 deletions
examples/onnxruntime/training/stable-diffusion/text-to-image/requirements.txt

accelerate>=0.16.0
transformers>=4.25.1
datasets
git+https://github.com/huggingface/diffusers
ftfy
tensorboard
Jinja2