
FLUX LoRA Training - Text Encoder active training indicated when it should be disabled with --network_train_unet_only #1634

Open
Enyakk opened this issue Sep 22, 2024 · 2 comments


Enyakk commented Sep 22, 2024

The log for my training shows:
INFO create LoRA network. base dim (rank): 128, alpha: 128 lora_flux.py:594
INFO neuron dropout: p=0.25, rank dropout: p=None, module dropout: p=None lora_flux.py:595
INFO split qkv for LoRA lora_flux.py:603
INFO train all blocks only lora_flux.py:605
INFO create LoRA for Text Encoder 1: lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules. lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules. lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules lora_flux.py:916

That is despite passing --network_train_unet_only both on the command line and in the config TOML (network_train_unet_only = true).
I use the following command line:
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --persistent_data_loader_workers --max_data_loader_n_workers 2 --highvram --network_train_unet_only --config_file %1
(attached: config_lora.zip)

This network_args setting might be out of the ordinary:
network_args = [ "train_double_block_indices=none", "train_single_block_indices=7,20", "split_qkv=True",]

Enyakk changed the title from "FLUX LoRA Training - Text Encoder trained with --network_train_unet_only" to "FLUX LoRA Training - Text Encoder active training indicated when it should be disabled with --network_train_unet_only" on Sep 22, 2024
Enyakk (Author) commented Sep 22, 2024

I have retested with a standard LoRA (training all blocks) and I get the same result: TE1 training is indicated in the log. I don't know whether that is actually the case, but it does seem that way to me. The size of the resulting .safetensors file does indicate that it includes the additional TE1 modules, though.

kohya-ss (Owner) commented

INFO create LoRA for Text Encoder 1: lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules. lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules. lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules lora_flux.py:916

This log indicates that several LoRA modules were created, but only six of them were trained as active modules.

Please run networks/check_lora_weights.py with: python networks/check_lora_weights.py path/to/trained_lora.safetensors. It shows the keys in the safetensors file, so you can see which modules are included. If it contains modules for the Text Encoders (keys starting with lora_te), please report it. Otherwise, only the U-Net (DiT) is trained, and it's fine.
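
For reference, a minimal sketch of the same kind of check using the safetensors library directly (this is not the check_lora_weights.py script itself, and the file path is a placeholder):

# Sketch: list the keys stored in a trained LoRA .safetensors file and flag any
# Text Encoder modules (keys starting with "lora_te").
# Assumes the `safetensors` package is installed; the path below is a placeholder.
from safetensors import safe_open

path = "path/to/trained_lora.safetensors"

with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

te_keys = [k for k in keys if k.startswith("lora_te")]
other_keys = [k for k in keys if not k.startswith("lora_te")]

print(f"total keys: {len(keys)}")
print(f"Text Encoder keys (lora_te*): {len(te_keys)}")
print(f"other (U-Net/DiT) keys: {len(other_keys)}")
for k in te_keys:
    print("  ", k)

If te_keys comes back empty, the file contains no Text Encoder LoRA modules and only the U-Net (DiT) was trained.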
