Feature request: Dynamic Lora Weights #2591

Open
OliviaOliveiira opened this issue Jan 19, 2024 · 14 comments
Labels
Feature: A new feature to add to ComfyUI.

Comments

@OliviaOliveiira

Hey there!
There's an incredibly powerful extension for A1111 called Dynamic Lora Weights that lets you control a LoRA's weight at any given step during the whole generation. For instance, [email protected],[email protected] means that the LoRA weight stays at 0.2 until 20% of the steps and then ramps up to 1 from 20% of the steps until the end of the generation.
Link to the extension: https://github.com/cheald/sd-webui-loractl
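
To illustrate the ramp described above, here is a toy sketch (this is not the extension's actual code; the function is made up purely to show the weight schedule):

def lora_weight(step: int, total_steps: int) -> float:
    """Hold the weight at 0.2 until 20% of the steps, then ramp linearly to 1.0."""
    frac = step / total_steps
    if frac <= 0.2:
        return 0.2
    # Linear ramp from 0.2 at 20% of the steps to 1.0 at 100%.
    return 0.2 + (1.0 - 0.2) * (frac - 0.2) / (1.0 - 0.2)

# Example: weights at a few points of a 50-step generation.
for s in (0, 10, 25, 50):
    print(s, round(lora_weight(s, 50), 2))  # 0.2, 0.2, 0.5, 1.0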

@ltdrdata
Collaborator

ltdrdata commented Jan 20, 2024

Take a look at this:
https://github.com/asagi4/comfyui-prompt-control

Currently, weight scheduling isn't available there, but it seems like you could make a feature request in that repo.

@asagi4
Contributor

asagi4 commented Jan 23, 2024

You can also combine my prompt control utility with any dynamic prompt utility, like the MUWildCard node from my other repo, which implements prompt-fusion-style functions and variables: https://github.com/asagi4/comfyui-utility-nodes
Or you can use the JinjaRender node for even more advanced dynamic prompts.

The syntax isn't quite as nice as sd-webui-loractl, but you can get them to do similar things.

@brendanhoar

brendanhoar commented Aug 20, 2024

IIRC, discussion about this elsewhere indicated that the developers were waiting for #2666, the execution model inversion changes, to be resolved before attempting work like this.

Does anyone want to take a look and see whether this is more feasible now?

@tncrdn

tncrdn commented Sep 6, 2024

You can also combine my prompt control utility with any dynamic prompt utility, like the MUWildCard node from my other repo that implements prompt fusion style functions and variables: https://github.com/asagi4/comfyui-utility-nodes Or you can use the JinjaRender node for even more advanced dynamic prompts.

The syntax isn't quite as nice as sd-webui-loractl, but you can get them to do similar things.

So if I wanted to do something like Lora A starting at the first step and stopping at the 15th step, and Lora B starting at the 15th step and staying active until the 60th step in a generation with 60 steps, how should I use prompt control and JinjaRender to do that?

@asagi4
Contributor

asagi4 commented Sep 6, 2024

@tncrdn You don't need JinjaRender for a simple case like that. You can just do [<lora:A:weight>:<lora:B:weight>:0.25]. Absolute step counts aren't supported (mostly because the step count isn't available at cond generation time), so you'll have to calculate the fraction, but it'll work. JinjaRender may be useful if you want more complicated schedules, since typing those by hand is going to be tedious.
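
For the concrete 60-step example above, the fraction works out like this (just a small sketch; A, B, and the 1.0 weights are placeholders for the actual LoRA names and strengths):

total_steps = 60
switch_step = 15
switch_fraction = switch_step / total_steps  # 15 / 60 = 0.25

# Build the prompt-control schedule string from that fraction.
schedule = f"[<lora:A:1.0>:<lora:B:1.0>:{switch_fraction}]"
print(schedule)  # [<lora:A:1.0>:<lora:B:1.0>:0.25]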

@tncrdn

tncrdn commented Sep 6, 2024

@asagi4 Thank you very much. And I use PromptToSchedule and ScheduleToModel nodes for that?

@asagi4
Contributor

asagi4 commented Sep 6, 2024

@tncrdn yes. PromptToSchedule does the parsing and ScheduleToModel applies a model patch that does the LoRA scheduling.

@ltdrdata ltdrdata closed this as completed Sep 6, 2024
@tncrdn

tncrdn commented Sep 6, 2024

@asagi4 Thank you. One last question. Would this be better than using 2 KSampler Advanced nodes and setting start and stop steps?

@asagi4
Contributor

asagi4 commented Sep 6, 2024

@tncrdn If you use the PCSplitSampling node to enable split sampling, that's essentially what it will do.

The effects are different though. Doing two ksampler passes isn't quite the same thing as one pass with the same number of steps, at least with some samplers.
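
For comparison, the two-sampler alternative from the question would split the same 60-step run roughly like this (the dicts below only describe KSampler Advanced settings as a sketch, they are not a real ComfyUI API call, and which LoRA-patched model feeds each sampler is up to your graph):

# Sketch only: settings for two KSampler Advanced nodes splitting one 60-step run.
sampler_with_lora_a = {
    "steps": 60,
    "start_at_step": 0,
    "end_at_step": 15,
    "add_noise": "enable",                   # first pass adds the initial noise
    "return_with_leftover_noise": "enable",  # hand the partially denoised latent on
}
sampler_with_lora_b = {
    "steps": 60,
    "start_at_step": 15,
    "end_at_step": 60,
    "add_noise": "disable",                  # the latent already carries noise
    "return_with_leftover_noise": "disable", # fully denoise by the last step
}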

@tncrdn

tncrdn commented Sep 6, 2024

@asagi4 So is this the correct way to use it for one pass with SDXL? (I also used a ScheduleToCond.) Could you check the attached workflow, please? Also, does it work with Flux?

@asagi4
Contributor

asagi4 commented Sep 6, 2024

@tncrdn That works, though you don't necessarily need two separate schedules for ScheduleToModel and ScheduleToCond. In fact, you'll want to pass the LoRAs into ScheduleToCond too if you want them to apply to the text encoder, because otherwise they'll only apply to the UNet.

@tncrdn

tncrdn commented Sep 6, 2024

@asagi4 So I wrote the prompt, the LoRA, and SDXL(896 1152, 896 1152, 0 0) into one PromptToSchedule node and sent its output to both ScheduleToModel and ScheduleToCond. Thank you very much.

@mcmonkey4eva mcmonkey4eva reopened this Sep 9, 2024
@mcmonkey4eva mcmonkey4eva added the Feature label Sep 9, 2024
@mcmonkey4eva
Collaborator

Native support for dynamic lora weights is being discussed and will likely happen soon(™️ )

@tncrdn

tncrdn commented Sep 10, 2024

@mcmonkey4eva That's great news. Will it support Flux as well? It would also be great if there were native support for prompts like [cat:dog:10] to change the prompt from cat to dog after 10 steps (or as a fraction, if that's the only possible way), and if that worked with Flux too.
