1. Fine-tune the Llama2-7b model using the provided notebook.
2. Run predictions with the predict function using modified parameters: skip_save_unprocessed_output set to False and a specific output_directory provided.
3. Despite these modifications, the token-level probabilities remain 0.0.
Hello, @MoOo2mini -- thank you for using Ludwig's LLM fine-tuning capabilities and reporting your issue. We cannot reproduce your error, because we do not have access to your model:
FileNotFoundError: [Errno 2] No such file or directory: '/content/test/model_hyperparameters.json'
Could you please make your model available (e.g., on Hugging Face)? I will be happy to troubleshoot the problem.
Describe the bug
The token-level probabilities consistently appear as 0.0 when fine-tuning the Llama2-7b model using "Ludwig + DeepLearning.ai: Efficient Fine-Tuning for Llama2-7b on a Single GPU.ipynb".
https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing
Below is my code that exhibits the problem:
https://colab.research.google.com/drive/1OmbCKlPzlxm4__iThYqB9PSLUWZZVptz?usp=sharing
To Reproduce
Steps to reproduce the behavior:
1. Fine-tune the Llama2-7b model using the provided notebook.
2. Call the predict function with modified parameters, including setting skip_save_unprocessed_output to False and providing a specific output_directory.

Expected behavior
Token-level probabilities should reflect the model's confidence in predicting each token's output.
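As a sanity check on what correct token-level probabilities look like, here is a minimal, self-contained sketch (plain Python with hypothetical logits, not Ludwig's actual implementation) that derives per-token probabilities from logits via softmax. Any finite logits produce strictly positive probabilities, so values of exactly 0.0 suggest a post-processing or serialization issue rather than genuine model output:

```python
import math

def softmax(logits):
    # Numerically stable softmax over one decoding step's logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 4-token vocabulary at a single step.
logits = [2.0, 0.5, -1.0, 0.1]
probs = softmax(logits)

# Every probability is strictly positive and they sum to 1.
assert all(p > 0.0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-9
```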
Screenshots
N/A
Environment:
Additional context
The logger within the predict function does not seem to function as expected.