
Feature/peft compatible models #346

Merged: 28 commits into main from feature/peft-compatible-models, Jun 27, 2023

Changes from 5 commits

Commits (28)
cfd9795 ignore venv in dir (danbider, Jun 12, 2023)
5848b4e adding a lean ComposerHFCausalLMFromPython that converts a loaded hf … (danbider, Jun 13, 2023)
dadd930 typechecking the composer convertor, import peft and transformers. (danbider, Jun 20, 2023)
d75fd09 support for input_embeds in forward calls for peft compatibility (danbider, Jun 14, 2023)
db837d3 fixing inputs_embeds typos (danbider, Jun 16, 2023)
081d00e precommit fixes docs (danbider, Jun 20, 2023)
879c9cc merged upstream (danbider, Jun 22, 2023)
635fb20 Merge branch 'main' into feature/peft-compatible-models (mvpatel2000, Jun 22, 2023)
d012638 refactored hf causal (danbider, Jun 23, 2023)
459415e merged upstream (danbider, Jun 23, 2023)
de5b5ad attempt to conclude merge (danbider, Jun 23, 2023)
aba2921 removed python convertor from inits (danbider, Jun 23, 2023)
cd3452a wip train.py (danbider, Jun 24, 2023)
0b4403d added lora deps (danbider, Jun 24, 2023)
7bec6c5 removed 8 bit defaults (danbider, Jun 24, 2023)
9d64a03 Update llmfoundry/models/mpt/modeling_mpt.py (codestar12, Jun 26, 2023)
63ef548 precommit edits models (danbider, Jun 26, 2023)
bdea8ec Update llmfoundry/models/mpt/modeling_mpt.py (danbider, Jun 27, 2023)
786ac5e delete deprecated hf class from init (danbider, Jun 27, 2023)
ef14b74 removed 8-bit and device map support for now (danbider, Jun 27, 2023)
c490802 formatting the peft builder for precommit (danbider, Jun 27, 2023)
23576d4 fixed comments on model ifs (danbider, Jun 27, 2023)
f30bdea added a util for printing trainable params (danbider, Jun 27, 2023)
f3cf98a deps pinned down and sent to gpu (danbider, Jun 27, 2023)
9b3ae8e scipy dep for bitsandbytes (danbider, Jun 27, 2023)
22faff4 sent lora deps to regular install_requires (danbider, Jun 27, 2023)
1382d4e pinned down scipy (danbider, Jun 27, 2023)
d9d0cad Merge branch 'main' into feature/peft-compatible-models (codestar12, Jun 27, 2023)
Files changed
3 changes: 3 additions & 0 deletions .gitignore
@@ -120,6 +120,9 @@ ENV/
 env.bak/
 venv.bak/
 
+# python venv installed in the dir, llmfoundry-venv
+*-venv
+
 # Spyder project settings
 .spyderproject
 .spyproject
3 changes: 2 additions & 1 deletion llmfoundry/__init__.py
@@ -11,7 +11,7 @@
                              build_finetuning_dataloader,
                              build_text_denoising_dataloader)
 from llmfoundry.models.hf import (ComposerHFCausalLM, ComposerHFPrefixLM,
-                                  ComposerHFT5)
+                                  ComposerHFT5, ComposerHFCausalLMFromPython)
 from llmfoundry.models.layers.attention import (
     MultiheadAttention, attn_bias_shape, build_alibi_bias, build_attn_bias,
     flash_attn_fn, scaled_multihead_dot_product_attention,
@@ -46,6 +46,7 @@
     'MPTForCausalLM',
     'ComposerMPTCausalLM',
     'ComposerHFCausalLM',
+    'ComposerHFCausalLMFromPython',
     'ComposerHFPrefixLM',
     'ComposerHFT5',
     'COMPOSER_MODEL_REGISTRY',
3 changes: 2 additions & 1 deletion llmfoundry/models/hf/__init__.py
@@ -1,7 +1,7 @@
 # Copyright 2022 MosaicML LLM Foundry authors
 # SPDX-License-Identifier: Apache-2.0
 
-from llmfoundry.models.hf.hf_causal_lm import ComposerHFCausalLM
+from llmfoundry.models.hf.hf_causal_lm import ComposerHFCausalLM, ComposerHFCausalLMFromPython
 from llmfoundry.models.hf.hf_fsdp import (prepare_hf_causal_lm_model_for_fsdp,
                                           prepare_hf_enc_dec_model_for_fsdp,
                                           prepare_hf_model_for_fsdp)
@@ -10,6 +10,7 @@
 
 __all__ = [
     'ComposerHFCausalLM',
+    'ComposerHFCausalLMFromPython',
     'ComposerHFPrefixLM',
     'ComposerHFT5',
     'prepare_hf_causal_lm_model_for_fsdp',
37 changes: 37 additions & 0 deletions llmfoundry/models/hf/hf_causal_lm.py
@@ -21,6 +21,10 @@
 from llmfoundry.models.hf.model_wrapper import HuggingFaceModelWithZLoss
 from llmfoundry.models.utils import init_empty_weights
 
+# required for loading a python model into composer
+import peft
+import transformers
+
 __all__ = ['ComposerHFCausalLM']
 
 Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast]
@@ -148,3 +152,36 @@ def __init__(self, om_model_config: DictConfig, tokenizer: Tokenizer):
                                         init_device=init_device)
 
         return composer_model
+
+
+class ComposerHFCausalLMFromPython(HuggingFaceModelWithZLoss):
+    """Configures a :class:`.HuggingFaceModel` around a Causal LM that is loaded in memory.
+    Args:
+        model (peft.peft_model.PeftModel or transformers.PreTrainedModel): The HF model loaded into python memory.
+        tokenizer (PreTrainedTokenizer): The tokenizer that the model will use.
+    """
+
+    def __init__(self, model: Union[peft.peft_model.PeftModel, transformers.PreTrainedModel], tokenizer: Tokenizer):
+
+        train_metrics = [
+            LanguageCrossEntropy(),
+            LanguagePerplexity(),
+        ]
+        eval_metrics = [
+            LanguageCrossEntropy(),
+            LanguagePerplexity(),
+            InContextLearningLMAccuracy(),
+            InContextLearningMultipleChoiceAccuracy(),
+            InContextLearningQAAccuracy(),
+            InContextLearningLMExpectedCalibrationError(),
+            InContextLearningMCExpectedCalibrationError()
+        ]
+
+        composer_model = super().__init__(model=model,
+                                          shift_labels=True,
+                                          tokenizer=tokenizer,
+                                          metrics=train_metrics,
+                                          eval_metrics=eval_metrics,
+                                          z_loss=0.0)
+
+        return composer_model
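For context, here is a minimal usage sketch of the wrapper this PR adds: a causal LM is loaded into Python memory, optionally wrapped with PEFT LoRA adapters, and then handed to ComposerHFCausalLMFromPython. This sketch is not part of the diff; the base model name and LoRA hyperparameters are illustrative assumptions, not values taken from this PR.

# Minimal usage sketch for ComposerHFCausalLMFromPython (illustrative only;
# the base model and LoRA hyperparameters below are assumptions, not from this PR).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmfoundry.models.hf import ComposerHFCausalLMFromPython

# Load any HF causal LM into Python memory.
model_name = 'facebook/opt-125m'  # hypothetical choice for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach LoRA adapters: PEFT freezes the base weights so that only the
# low-rank adapter matrices remain trainable.
lora_config = LoraConfig(
    task_type='CAUSAL_LM',
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=['q_proj', 'v_proj'],  # attention projections in OPT models
)
peft_model = get_peft_model(model, lora_config)

# Wrap the already-constructed, in-memory model for training with Composer.
composer_model = ComposerHFCausalLMFromPython(peft_model, tokenizer)

LoRA leaves the model's forward signature untouched, but prompt-tuning-style PEFT methods feed inputs_embeds instead of input_ids, which is why this PR also threads inputs_embeds through MPT's forward pass (commits d75fd09 and db837d3).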