Feature request: Dynamic Lora Weights #2591
Take a look at this: currently, weight scheduling is not available, but it seems like you can make a feature request there.
You can also combine my prompt control utility with any dynamic prompt utility, like the MUWildCard node from my other repo, which implements prompt-fusion-style functions and variables: https://github.com/asagi4/comfyui-utility-nodes The syntax isn't quite as nice as sd-webui-loractl's, but you can get them to do similar things.
IIRC, discussion about this elsewhere indicated that the developers were waiting for resolution of #2666, the execution model inversion changes, before attempting work like this. Does anyone want to take a look and see whether this is more feasible now?
So if I wanted to do something like LoRA A starting at the first step and stopping at the 15th step, and LoRA B starting at the 15th step and staying active until the 60th step in a 60-step generation, how should I use prompt control and JinjaRender to do that?
@tncrdn You don't need JinjaRender for a simple case like that; the scheduling syntax handles it directly.
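A hedged sketch of what that prompt could look like, based on the scheduling and LoRA syntax described in asagi4's comfyui-prompt-control README (the LoRA names `A` and `B` are placeholders, and the exact syntax is an assumption, not quoted from this thread): with 60 total steps, step 15 is fraction 0.25, so the switch could be scheduled as

```
[<lora:A:1>:<lora:B:1>:0.25]
```

Here `[x:y:0.25]` uses `x` for the first 25% of the steps and `y` afterwards, and `<lora:name:weight>` loads a LoRA from within the prompt.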
@asagi4 Thank you very much. And do I use the PromptToSchedule and ScheduleToModel nodes for that?
@tncrdn Yes. PromptToSchedule does the parsing, and ScheduleToModel applies a model patch that does the LoRA scheduling.
@asagi4 Thank you. One last question: would this be better than using two KSampler Advanced nodes and setting start and stop steps?
@tncrdn If you use the PCSplitSampling node to enable split sampling, that's essentially what it will do. The effects differ, though: two KSampler passes aren't quite the same as a single pass with the same total number of steps, at least with some samplers.
@asagi4 So is this the correct way to use it for one pass with SDXL? (I also used a ScheduleToCond.) Could you please check the attached workflow? Also, does it work with Flux?
@tncrdn That works, though you don't necessarily need two separate schedules for ScheduleToModel and ScheduleToCond. In fact, you'll want to pass the LoRAs into ScheduleToCond too if you want them to apply to the text encoder; otherwise they'll only apply to the UNet.
@asagi4 So I wrote the prompt, the LoRA, and the SDXL parameters (896 1152, 896 1152, 0 0) into one PromptToSchedule node and sent it to both ScheduleToModel and ScheduleToCond. Thank you very much.
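The wiring described in this exchange can be sketched as a rough node graph (a sketch only, not the exact attached workflow; node names are from comfyui-prompt-control):

```
prompt text ──► PromptToSchedule ──┬──► ScheduleToCond  ──► KSampler (positive)
                                   └──► ScheduleToModel ──► KSampler (model)
```

A single schedule feeds both branches, so the LoRA scheduling stays in sync between the conditioning and the model patch.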
Native support for dynamic LoRA weights is being discussed and will likely happen Soon™
@mcmonkey4eva That's great news. Will it support Flux as well? It would also be great if there were native support for prompts like [cat:dog:10] to change the prompt from cat to dog after 10 steps (or as a fraction, if that's the only possible way), and if that also worked for Flux.
Hey there!
There's an incredibly powerful extension for A1111 called Dynamic Lora Weights, which lets you control a LoRA's weight at any given step during the whole generation. For instance, [email protected],[email protected] means the LoRA weight starts at 0.2 until 20% of the steps, then ramps up to 1 from 20% of the steps until the end of the generation.
Link to extension - https://github.com/cheald/sd-webui-loractl
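One way to read the ramp described above is as piecewise-linear interpolation over the step fraction. Below is a minimal sketch of that interpretation, not loractl's actual implementation: the function name `lora_weight` and the keypoint convention `(step_fraction, weight)` are assumptions, and the keypoints encode the prose description ("0.2 until 20%, then ramp to 1 by the end") rather than any particular loractl string.

```python
def lora_weight(frac_done: float, keypoints: list[tuple[float, float]]) -> float:
    """Interpolate a LoRA weight over the generation.

    `frac_done` is the fraction of sampling steps completed (0.0..1.0);
    `keypoints` is a list of (step_fraction, weight) pairs.
    Hypothetical helper sketching a loractl-style schedule.
    """
    pts = sorted(keypoints)
    if frac_done <= pts[0][0]:
        return pts[0][1]  # before the first keypoint: hold its weight
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if frac_done <= x1:
            # linear interpolation between adjacent keypoints
            t = (frac_done - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[-1][1]  # past the last keypoint: hold its weight

# "weight 0.2 until 20% of steps, then ramp to 1 by the end":
ramp = [(0.0, 0.2), (0.2, 0.2), (1.0, 1.0)]
```

With this ramp, `lora_weight(0.1, ramp)` stays at 0.2, while `lora_weight(0.6, ramp)` lands halfway up the ramp at 0.6.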