Problem loading UltraChat65B for fine-tuning #28

Open · jinmin527 opened this issue Aug 19, 2023 · 0 comments
import bmtrain as bmt
from transformers import LlamaForCausalLM, LlamaTokenizer

def get_model_tokenizer(args):
    # Every process loads a full copy of the checkpoint into host memory.
    model = LlamaForCausalLM.from_pretrained(args.model_name_or_path)
    tokenizer = LlamaTokenizer.from_pretrained(args.model_name_or_path)
    # The pad token was likely "<pad>" in the original post; the angle-bracket
    # token appears to have been stripped by the page's HTML rendering.
    tokenizer.add_special_tokens({'pad_token': "<pad>"})
    model.resize_token_embeddings(len(tokenizer))
    # Wrap the model for distributed training with BMTrain.
    model = bmt.BMTrainModelWrapper(model)
    return model, tokenizer

Suppose we fine-tune the UltraChat65B model on a single server with 8 GPUs: is there a risk of OOM? Each GPU process executes model = LlamaForCausalLM.from_pretrained(args.model_name_or_path) and loads its own full copy of the model. Even if the weights are held in CPU memory, a 65B-parameter model needs roughly 130 GB in fp16 (65B parameters × 2 bytes), so 8 processes need close to 1 TB, which is about the server's total RAM.
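For reference, a minimal sketch of one common way to shrink the per-process footprint at load time, using the standard transformers arguments torch_dtype and low_cpu_mem_usage; this is an illustration of the general technique, not this repository's actual loading path, and whether it composes cleanly with bmt.BMTrainModelWrapper is an assumption here.

import torch
from transformers import LlamaForCausalLM

# Sketch: load the checkpoint without first materializing a full fp32 copy
# per process. torch_dtype and low_cpu_mem_usage are standard from_pretrained
# arguments; compatibility with BMTrainModelWrapper is an assumption.
model = LlamaForCausalLM.from_pretrained(
    args.model_name_or_path,       # same args object as in the snippet above
    torch_dtype=torch.float16,     # keep weights in fp16 (~130 GB for 65B)
    low_cpu_mem_usage=True,        # avoid the extra full-size init buffer
)

Even then, 8 full fp16 copies would still total roughly 1 TB of host memory, so loading on one rank and broadcasting, or a sharded loading utility if BMTrain provides one, would be needed to actually stay within the server's RAM budget.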
