Inference Output Format Issues in GUI Odyssey #7
Comments
Hi, there could be two possible reasons:
Thank you for the response. I didn't change anything in the original code; I only changed the model from OdysseyAgent to hflqf88888/OdysseyAgent-random. But when I try to evaluate the model, I'm encountering a configuration class mismatch error related to QWenConfig.

Environment

Error Message
I'm using the following eval.sh

Additional Context
Using distributed training with 4 GPUs

Question
Have you encountered this QWenConfig class inconsistency issue before? What would be the recommended way to resolve this configuration mismatch?
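For what it's worth, a minimal check of which config class transformers actually resolves for the checkpoint (a sketch assuming the standard AutoConfig API; the repo id is the one mentioned above):

```python
# Minimal diagnostic sketch: see which config class is actually loaded.
# Assumption: the checkpoint ships its own QWenConfig via remote code, so
# trust_remote_code=True is required; otherwise a mismatched built-in class
# or a stale cached copy may be picked up instead.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "hflqf88888/OdysseyAgent-random",
    trust_remote_code=True,
)
print(type(config))        # expected: the QWenConfig defined in the repo
print(config.model_type)   # should be consistent with the model class in use
```

If the printed class is not the QWenConfig shipped with the checkpoint, a stale copy of the remote code under ~/.cache/huggingface/modules/transformers_modules is one common culprit.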
Hi,
I followed your suggestion to clone the model locally and updated the checkpoint parameter to point to my local path. However, I'm still encountering the same AttributeError:

AttributeError: 'QWenTokenizer' object has no attribute 'IMAGE_ST'

The full traceback shows that the error occurs in tokenization_qwen.py at line 227:

if surface_form not in SPECIAL_TOKENS + self.IMAGE_ST:

This error appears on all ranks (0-3) when attempting to initialize the tokenizer. Could you please provide additional guidance on resolving this IMAGE_ST attribute error? Is there a specific version of the dependencies I should be using, or are there any additional setup steps I might be missing?
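For reference, one plausible (unconfirmed) cause of this particular AttributeError is a transformers version newer than the one the Qwen-style remote code was written for: recent releases register special tokens from the base class __init__, which runs before the Qwen tokenizer assigns self.IMAGE_ST, so the check at line 227 fails. A sketch of a local guard around that line, purely as a workaround and not the repo's official fix:

```python
# Hypothetical local edit inside the checkpoint's tokenization_qwen.py
# (around the line quoted above). Using getattr with an empty-tuple default
# lets the base-class __init__ finish even before self.IMAGE_ST is assigned.
if surface_form not in SPECIAL_TOKENS + getattr(self, "IMAGE_ST", ()):
    ...  # the original error-raising branch stays unchanged
```

Pinning transformers to the older release the Qwen-VL family generally targets (around 4.32) would be the other direction to try, if the maintainers can confirm the intended version.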
Maybe you could check out this link: https://huggingface.co/MMInstruction/Silkie/discussions/1.
I tried this and it works. But during inference I get an error because tensors end up on different CUDA devices.
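For reference, a minimal single-device inference sketch that usually avoids this kind of mismatch (the local path and the generic Auto* loading below are assumptions, not the repo's actual eval code):

```python
# Single-device inference sketch: keep the whole model on one GPU per process
# and move the inputs to that same device before generate(). The checkpoint
# path is a placeholder for a local clone.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/path/to/OdysseyAgent-random"   # placeholder local path
device = torch.device("cuda:0")         # one device per process

tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    ckpt,
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to(device).eval()                     # avoid device_map="auto" here

inputs = tokenizer("example query", return_tensors="pt").to(device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Sharding the model across GPUs (e.g. with device_map="auto") while also running one process per GPU is a common source of cross-device tensor errors, so keeping each rank on a single device is the first thing worth checking.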
I've been following the quickstart documentation closely, but I'm encountering issues with the inference output format. Specifically, instead of the expected structured outputs, the results appear in unexpected formats, such as:
Natural language text, rather than the structured format.
Output with random or nonsensical numbers.
Results in Chinese, which was not anticipated.
I would greatly appreciate any guidance on resolving these output inconsistencies to achieve the expected output format. If there are specific configurations or parameters I need to adjust, please let me know.
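For concreteness, these are the kinds of generation parameters I mean (a generic transformers-style sketch; none of these values are taken from this repo, and `model` and `inputs` stand in for however they are loaded in the eval script):

```python
# Generic sketch of generation settings that commonly influence whether the
# output stays in a constrained/structured format rather than free-form text.
gen_kwargs = dict(
    do_sample=False,        # greedy decoding removes sampling randomness
    num_beams=1,
    max_new_tokens=128,
    repetition_penalty=1.0,
)
output_ids = model.generate(**inputs, **gen_kwargs)
```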
Thank you for your assistance!