support for transformer LoRA adapters #931
Comments
Hey @jonberliner, thanks for the great suggestion! We'll definitely add this to the backlog. I haven't played around much with peft before -- do people typically use it just to load LoRA-style adapters for inference? I'm wondering if we can just extend the …
Thanks for the response! Yes, typically people are loading LoRA-style adapters with peft, along with setting up adapters for training, which is beyond the scope of guidance. Setting up transformers to take an adapter path would be great. Even better would be the ability to load a base model with …
@jonberliner thank you for the input here! From a user experience perspective, do you feel that having an exposed interface for adding adaptors to e.g.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig

model_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

lora_config = LoraConfig(
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    init_lora_weights=False,
)

model.add_adapter(lora_config, adapter_name="adapter_1")
```

```python
import guidance

guidance_model = guidance.models.Transformers(model=model, tokenizer=tokenizer)
```

@Harsha-Nori I'm a little bit concerned about the slippery slope of having to reimplement (and maintain) the entire API of an external model provider, and my knee-jerk reaction is to suggest we only mirror the core parts and recommend "wrapping" anything else, like in my above example. Thoughts?
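Following the wrapping approach above, multiple named adapters can also live on one base model via the transformers PEFT integration (`model.add_adapter` / `model.set_adapter`), with `set_adapter` choosing which one is active. A minimal sketch; the helper name and adapter names here are illustrative, not from the thread:

```python
def attach_lora_adapters(model, adapter_names):
    """Attach one randomly initialised LoRA adapter per name to `model`
    via the transformers PEFT integration, then activate the first one.
    The peft import is deferred so this sketch loads without peft installed."""
    from peft import LoraConfig

    config = LoraConfig(
        target_modules=["q_proj", "k_proj", "v_proj", "dense"],
        init_lora_weights=False,
    )
    for name in adapter_names:
        # Each call registers a separate named adapter on the base model.
        model.add_adapter(config, adapter_name=name)
    # Only one adapter is active at a time; switch later with set_adapter.
    model.set_adapter(adapter_names[0])
    return model
```

Switching afterwards is just `model.set_adapter("adapter_2")`; the base weights are shared across all attached adapters, which is the point of not merging.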
If …
@riedgar-ms can confirm it works for me and totally agreed. @jonberliner is there anything that the approach I outlined above misses? |
Hi! Thanks so much for this awesome package. I would like to use a transformers model with a LoRA adapter attached using the peft package. Currently, it seems that guidance can only use a regular transformers model.
I would like to pass the path to a LoRA adapter and load the base model with the attached adapter.
Currently, I can merge the adapter and save the model, but this is not ideal, as I have many adapters I would like to attach to the same base model.
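The merge-free workflow described here can be sketched with peft's `PeftModel.from_pretrained`, which attaches a saved adapter to a base model in memory. The helper name is hypothetical, and passing the wrapped model to `guidance.models.Transformers` is an assumption based on the maintainers' example in this thread:

```python
def load_guidance_model_with_adapter(base_model_id, adapter_path):
    """Load a base transformers model, attach a LoRA adapter saved at
    `adapter_path` (no merging, so many adapters can reuse one base),
    and wrap the result for guidance. Imports are deferred so this
    sketch loads without the optional dependencies installed."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel
    import guidance

    base = AutoModelForCausalLM.from_pretrained(base_model_id)
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    # PeftModel wraps the base weights with the adapter's low-rank deltas.
    model = PeftModel.from_pretrained(base, adapter_path)
    return guidance.models.Transformers(model=model, tokenizer=tokenizer)
```

Repeating the `PeftModel.from_pretrained` step with a different `adapter_path` reuses the same base weights, avoiding one merged checkpoint per adapter.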
Again, thanks so much for this awesome project!