Chapter 4 - Inference Error. Pinpointed the error, but don't know how to solve it #141
Comments
Very funny. Actually, I managed to get it working now: I simply deleted "mask_token" from the special tokens file (special_tokens_map.json) and it worked. I'm still not sure why it worked before and why it doesn't now. If possible, could someone point me to the relevant config changes? But it would also be best for this code to be changed too.
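Here's a minimal sketch of what I mean by deleting "mask_token" (assuming the model repo has been downloaded locally; the directory name is just a placeholder):

```python
import json
from pathlib import Path

# Placeholder path to a locally downloaded copy of the model repo.
model_dir = Path("xlm-roberta-base-finetuned-panx-de")

# Drop the "mask_token" entry that newer tokenizer versions choke on.
special_tokens_path = model_dir / "special_tokens_map.json"
special_tokens = json.loads(special_tokens_path.read_text())
special_tokens.pop("mask_token", None)
special_tokens_path.write_text(json.dumps(special_tokens, indent=2))
```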
Did further digging. I checked others who trained this on the Hugging Face Hub, especially models created within the last week. Those who trained in TensorFlow still seem to be able to run inference automatically. However, people who used PyTorch like me faced a similar issue. Hence I believe this error is PyTorch-only for now, and I will try to resolve it and make a pull request soon once it's debugged.
Sorry, I think I found the error. It's from not updating my libraries. I will update and retrain the models (and spend 0.6 USD of compute credits) to test this hypothesis again tomorrow.
I managed to get it to work. Basically, the install file on Colab (I don't know if you're using Colab) is faulty, if you can call it that: its installation requirements pin older versions of the libraries when newer ones exist, and it's the newer libraries that work. You can try running this after running the default installation file:

#%%capture
!pip install transformers==4.41.2
!pip install datasets==2.20.0
!pip install pyarrow==16.0
!pip install requests==2.32.3
!pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0
!pip install importlib-metadata
!pip install accelerate -U

And you can also refer to my file: https://colab.research.google.com/drive/1F5L_vL1o6WC3DxGWDF_g6ZPKTJ7dcmxR#scrollTo=r1SReYWcdRjZ
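Note that after reinstalling these packages in an already-running Colab session, you will most likely need to restart the runtime before the new versions take effect.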
I will leave this here in case anyone encounters the same bug. The bug is caused by this code block:

!git clone https://github.com/nlp-with-transformers/notebooks.git
%cd notebooks
from install import *
install_requirements()

It installs older, deprecated versions of the libraries, which causes these bugs. That's my hypothesis, at least.
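A quick way to test this hypothesis is to print the installed versions right after running install_requirements() and compare them with the newer pins from my earlier comment:

```python
# Compare the versions pinned by install_requirements() against the
# newer, working versions listed in my earlier comment.
import transformers, datasets, torch

print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
print("torch:", torch.__version__)
```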
To the authors: you should try to fix requirements.txt and install.py to bring them up to date. That's all for now.
Information
The problem arises in chapter: 4 (Multilingual Named Entity Recognition)
Describe the bug
The error occurs if you run the exemplar Colab code, push the model to your Hugging Face Hub, and then try to run it. When running inference, it outputs the error: "Can't load tokenizer using from_pretrained, please update its configuration: tokenizers.AddedToken() got multiple values for keyword argument 'special'".
To Reproduce
Steps to reproduce the behavior:
Run the exemplar Colab code (https://colab.research.google.com/github/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb) up to the point where the model is trained and pushed to the Hub.
Then visit the Hugging Face Hub once the model is trained and pushed, use your personal Inference API, and the tokenizer error quoted above appears.
The same also applies if you try to import the model from the Hub; the error occurs at the tokenizer stage, and more precisely, I believe, in "special_tokens_map.json".
However, this seems to be avoidable if I instead pass the special token "mask_token" in as an extra kwarg, as recommended by GPT-4.
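Concretely, the workaround looks roughly like this (the repo id is a placeholder for your own fine-tuned model; "&lt;mask&gt;" is XLM-R's mask token):

```python
from transformers import AutoTokenizer

# Placeholder repo id; substitute your own model pushed to the Hub.
model_id = "your-username/xlm-roberta-base-finetuned-panx-de"

# Passing mask_token explicitly sidesteps the broken entry in
# special_tokens_map.json that triggers the AddedToken 'special' error.
tokenizer = AutoTokenizer.from_pretrained(model_id, mask_token="<mask>")
```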
Expected behavior
I was expecting that once I fine-tuned the model by running the exemplar code and pushed it to the Hub, the model could easily be run from the Inference API. You can also check the code in my personal notebook: https://colab.research.google.com/drive/1F5L_vL1o6WC3DxGWDF_g6ZPKTJ7dcmxR#scrollTo=orgQubxKVrNX
However, the same error occurred when I ran the exemplar code directly, so I think it's likely due to some changes made to the libraries after this book was published. As mentioned, it's still runnable if I pass in "mask_token" as a **kwarg, but this is very strange, and I would love to know what's causing the error, as I am still learning.
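For reference, this is roughly the inference call I expected to work out of the box (the repo id is again a placeholder; Chapter 4 fine-tunes XLM-R for token classification):

```python
from transformers import pipeline

# Placeholder repo id; replace with your own fine-tuned model on the Hub.
model_id = "your-username/xlm-roberta-base-finetuned-panx-de"

# Chapter 4 fine-tunes XLM-R for NER, so token classification is the task.
ner = pipeline("token-classification", model=model_id)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien"))
```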