Describe the Question

1. On the 100k-example multilingual ShareGPT GPT-4 multi-turn dialogue dataset shibing624/sharegpt_gpt4, a baichuan-13b-chat multi-turn Q&A model was SFT fine-tuned; both everyday and medical Q&A improved, and the fine-tuned LoRA weights were released.
2. On the 2.4M-example Chinese-English medical dataset shibing624/medical, a Ziya-LLaMA-13B model was SFT fine-tuned; medical Q&A improved, and the full fine-tuned model weights were released (single-turn dialogue).

For item 2, did the fine-tuning use all 2.4 million examples, or only 1,000? The sft command sets max_train_samples to 1000. (A sketch of what this flag typically controls follows below.)
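For context, here is a minimal sketch (not this project's actual code) of how Hugging Face-style training scripts typically apply max_train_samples: the flag caps how many rows of the loaded dataset are kept for training. The "finetune" config name for shibing624/medical is an assumption made only to keep the example self-contained.

```python
# Minimal sketch, assuming the SFT script follows the usual Hugging Face
# convention where --max_train_samples limits the number of training rows.
from datasets import load_dataset

max_train_samples = 1000  # value seen in the sft command from the question

# "finetune" is an assumed config name for shibing624/medical, used here
# only for illustration; the real script may load the data differently.
train_dataset = load_dataset("shibing624/medical", "finetune", split="train")

if max_train_samples is not None:
    # Keep only the first max_train_samples rows: with 1000 here, training
    # would see 1,000 examples rather than the full ~2.4M.
    n = min(len(train_dataset), max_train_samples)
    train_dataset = train_dataset.select(range(n))

print(f"Training on {len(train_dataset)} examples")
```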
Answer: 1. Supplementary medical data was also included; 2. All of the data was used.