The log for my training shows:
INFO create LoRA network. base dim (rank): 128, alpha: 128 lora_flux.py:594
INFO neuron dropout: p=0.25, rank dropout: p=None, module dropout: p=None lora_flux.py:595
INFO split qkv for LoRA lora_flux.py:603
INFO train all blocks only lora_flux.py:605
INFO create LoRA for Text Encoder 1: lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules. lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules. lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules lora_flux.py:916
That is despite using the option --network_train_unet_only both on the command line and in the config TOML: network_train_unet_only = true.
I use the following command line:
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --persistent_data_loader_workers --max_data_loader_n_workers 2 --highvram --network_train_unet_only --config_file %1
Attachment: config_lora.zip
This option might be out of the ordinary:
network_args = [ "train_double_block_indices=none", "train_single_block_indices=7,20", "split_qkv=True",]
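To rule out a parsing problem on my side, here is a minimal sketch (assuming Python 3.11+ for the standard-library tomllib; config_lora.toml is a placeholder for my actual config file) that checks how the TOML is read:

import tomllib  # standard library as of Python 3.11

# Placeholder name; substitute the actual config file passed via --config_file.
with open("config_lora.toml", "rb") as f:
    cfg = tomllib.load(f)

print(cfg.get("network_train_unet_only"))  # expected: True
print(cfg.get("network_args"))
# expected: ['train_double_block_indices=none',
#            'train_single_block_indices=7,20', 'split_qkv=True']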
Enyakk changed the title from "FLUX LoRA Training - Text Encoder trained with --network_train_unet_only" to "FLUX LoRA Training - Text Encoder active training indicated when it should be disabled with --network_train_unet_only" on Sep 22, 2024.
I have retested with a standard LoRA (training all blocks) and I get the same result: TE1 training is indicated in the log. I don't know whether that is actually the case, but it does seem that way to me. The size of the resulting .safetensors file does indicate that it includes the additional TE1 modules, though.
INFO create LoRA for Text Encoder 1: lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules. lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules. lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules lora_flux.py:916
This log indicates that LoRA modules were created for both Text Encoder 1 (72 modules) and the FLUX blocks (6 modules), but only the six FLUX modules were enabled as active, trained modules.
Please run networks/check_lora_weights.py with python networks/check_lora_weights.py path/to/trained_lora.safetensors. It shows the keys in the safetensors file, so you can see which modules are included. If it contains modules for the Text Encoders (keys starting with lora_te), please report it. Otherwise only the U-Net (DiT) is trained, and it's fine.
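If you want to inspect the file directly instead, here is a minimal equivalent sketch (assuming the safetensors Python package and torch are installed; the path is a placeholder):

from safetensors import safe_open

path = "path/to/trained_lora.safetensors"  # placeholder
with safe_open(path, framework="pt") as f:
    keys = list(f.keys())

# Text Encoder modules carry a lora_te prefix; FLUX (U-Net/DiT) modules
# typically carry a lora_unet prefix.
te_keys = [k for k in keys if k.startswith("lora_te")]
print(f"{len(te_keys)} Text Encoder keys out of {len(keys)} total")
for k in te_keys[:10]:  # print a few offending keys, if any
    print(k)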