I would like to ask how I can use QLoRA (or another parameter-efficient fine-tuning method) with a model that is not registered on Hugging Face but is instead based on OFA.

I am trying to quantize the Tiny version, but I don't know whether I need LoRA, or how to apply it for parameter-efficient fine-tuning.

My plan was to reconstruct the BiomedGPT-Tiny model from unify_transformer.py (following the file ofa.py), set the config parameters for BiomedGPT-Tiny in a separate file, and then apply quantization techniques. The problem is that, as far as I can tell, the pretrained tokenizer is not available.
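Since the model is plain PyTorch (fairseq/OFA) rather than a Hugging Face `transformers` model, one option is to apply the LoRA update manually instead of relying on the `peft` library's model registry. Below is a minimal sketch of that idea, assuming you have already reconstructed BiomedGPT-Tiny as an `nn.Module`: the pretrained weights are frozen and only the low-rank matrices A and B are trained. The class and function names (`LoRALinear`, `add_lora`) are hypothetical, and in practice you would target only specific layers (e.g. the attention projections) rather than every `nn.Linear`.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap an existing nn.Linear with a low-rank (LoRA) update.

    Only lora_A / lora_B are trainable; the pretrained weight is frozen.
    Forward pass: y = W x + (alpha / r) * B A x
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight and bias
        # A is small-random, B is zero, so the wrapped layer initially
        # computes exactly the same output as the base layer
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

def add_lora(model: nn.Module, r: int = 8, alpha: int = 16) -> nn.Module:
    """Recursively replace every nn.Linear in `model` with a LoRA-wrapped copy.

    Illustrative only: a real setup would restrict this to a whitelist of
    module names (e.g. q_proj / v_proj in the attention blocks).
    """
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinear(child, r, alpha))
        else:
            add_lora(child, r, alpha)
    return model
```

After wrapping, only the `lora_A` / `lora_B` parameters show up as trainable, so the optimizer can simply be built from `filter(lambda p: p.requires_grad, model.parameters())`. Quantizing the frozen base weights (e.g. to 4-bit, as in QLoRA) is a separate step applied to `self.base`; this sketch does not include it. As for the tokenizer: OFA-based checkpoints typically ship a BPE/dictionary under the fairseq format rather than a Hugging Face tokenizer, so the tokenizer files would need to come from the original repository's checkpoint assets rather than from the Hub.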