
Error when running app.py #204

Closed
txiang163 opened this issue Apr 11, 2023 · 5 comments

Comments

@txiang163

The following error occurs when running app.py:
(my_lm) xt@ji-jupyter-6713700621420699648-master-0:~/txiang/LMFlow/service$ python app.py
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Failed to use RAM optimized load. Automatically use original load instead.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "/home/xt/txiang/LMFlow/service/../src/lmflow/models/hf_decoder_model.py", line 192, in init
self.backend_model = AutoModelForCausalLM.from_pretrained(
File "/home/xt/anaconda3/envs/my_lm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 474, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.configuration_chatglm.ChatGLMConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MvpConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

I downloaded chatglm from the Hugging Face models hub and placed it in the output_models directory. When I run it, it says the ChatGLMConfig configuration class cannot be recognized. How can this be solved?
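For reference, the error arises because ChatGLM ships a custom ChatGLMConfig that AutoModelForCausalLM is not registered for; the checkpoint's own instructions load it through AutoModel with trust_remote_code instead. A minimal sketch, assuming the weights sit in output_models/chatglm-6b (a hypothetical local path):

from transformers import AutoModel, AutoTokenizer

# Hypothetical local path to the downloaded checkpoint.
model_path = "output_models/chatglm-6b"

# ChatGLM's config/modeling code lives inside the checkpoint and is only
# mapped to AutoModel (not AutoModelForCausalLM), hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)

This loads the model on its own but does not make it work inside LMFlow's pipeline, which is what the reply below addresses.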

@research4pan
Contributor

Thanks for your interest in LMFlow! We haven't merged support for chatglm yet. The corresponding pull request is this one. We will let you know once the merge is completed. Thanks for your understanding 🙏

@txiang163
Author

So which models on Hugging Face do you currently support?

@research4pan
Contributor

We support all decoder-only models on Hugging Face, such as gpt2, gpt2-large, gpt2-xl, gpt-neo-2.7b, galactica, bloom, llama, etc. The aforementioned pull request will also introduce support for all encoder-decoder models on Hugging Face, such as T5. A quick compatibility check is sketched below. Hope that answers your question. Thanks 😄
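One way to check whether a particular checkpoint falls into this supported set is to try the same entry point the traceback shows LMFlow using, AutoModelForCausalLM. A minimal sketch, with gpt2 standing in for any decoder-only checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Decoder-only checkpoints load through the causal-LM auto class;
# anything else raises the ValueError quoted in the traceback above.
name = "gpt2"  # e.g. gpt2-large, gpt2-xl, bloom, llama, ...
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
print(type(model).__name__)  # GPT2LMHeadModel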

@research4pan
Contributor

As for the issue not displayed here (but received in my mailbox): you should run ./scripts/run_chatbot.sh from the project root directory. Running it from inside ./scripts may result in a model-not-found error, presumably because relative model paths no longer resolve. Hope that helps. Thanks~
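For example, reusing the paths from the log above:

cd ~/txiang/LMFlow          # project root, not ~/txiang/LMFlow/scripts
./scripts/run_chatbot.sh    # relative model paths resolve from here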

@shizhediao
Contributor

This issue has been marked as stale because it has not had recent activity. If you think this still needs to be addressed, please feel free to reopen this issue. Thanks
