fix flux fine tuning to work
kohya-ss committed Aug 17, 2024
1 parent 400955d commit 25f77f6
Showing 2 changed files with 6 additions and 4 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -9,6 +9,10 @@ __Please update PyTorch to 2.4.0. We have tested with `torch==2.4.0` and `torchv
 The command to install PyTorch is as follows:
 `pip3 install torch==2.4.0 torchvision==0.19.0 --index-url https://download.pytorch.org/whl/cu124`
 
+
+Aug 17, 2024:
+Added a script `flux_train.py` to train FLUX.1. The script is experimental and not optimized. It needs >28GB of VRAM for training.
+
 Aug 16, 2024:
 
 Added a script `networks/flux_merge_lora.py` to merge LoRA into FLUX.1 checkpoint. See [Merge LoRA to FLUX.1 checkpoint](#merge-lora-to-flux1-checkpoint) for details.
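A possible invocation of the new `flux_train.py` mentioned above is sketched here for orientation. The flag names are assumptions modeled on the repository's other training scripts and are not confirmed by this commit; run `python flux_train.py --help` for the authoritative options. FLUX.1 fine-tuning is assumed to take the FLUX checkpoint plus the CLIP-L, T5-XXL, and autoencoder weights:

`accelerate launch flux_train.py --pretrained_model_name_or_path flux1-dev.safetensors --clip_l clip_l.safetensors --t5xxl t5xxl.safetensors --ae ae.safetensors --dataset_config dataset.toml --output_dir outputs --output_name flux-ft --mixed_precision bf16 --save_precision bf16 --gradient_checkpointing --learning_rate 5e-6 --max_train_steps 1000`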
6 changes: 2 additions & 4 deletions flux_train.py
@@ -674,9 +674,7 @@ def optimizer_hook(parameter: torch.Tensor):
     # if is_main_process:
     flux = accelerator.unwrap_model(flux)
     clip_l = accelerator.unwrap_model(clip_l)
-    clip_g = accelerator.unwrap_model(clip_g)
-    if t5xxl is not None:
-        t5xxl = accelerator.unwrap_model(t5xxl)
+    t5xxl = accelerator.unwrap_model(t5xxl)
 
     accelerator.end_training()

@@ -686,7 +684,7 @@ def optimizer_hook(parameter: torch.Tensor):
     del accelerator  # delete this since memory is needed after this point
 
     if is_main_process:
-        flux_train_utils.save_flux_model_on_train_end(args, save_dtype, epoch, global_step, flux, ae)
+        flux_train_utils.save_flux_model_on_train_end(args, save_dtype, epoch, global_step, flux)
         logger.info("model saved.")


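Background on the pattern in these hunks: models prepared by Hugging Face Accelerate may be wrapped (for example in DistributedDataParallel), so the underlying module is recovered with `unwrap_model` before its weights are saved on the main process. A minimal, generic sketch of that pattern, using a toy model rather than the repository's FLUX classes:

import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 8)  # stand-in for the FLUX transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# prepare() may wrap the model for distributed training
model, optimizer = accelerator.prepare(model, optimizer)

# ... training loop runs here ...

accelerator.wait_for_everyone()
model = accelerator.unwrap_model(model)  # recover the plain nn.Module
accelerator.end_training()

if accelerator.is_main_process:
    torch.save(model.state_dict(), "toy_model.pt")  # save the unwrapped weights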
