Fine-Tuning process #9
Yes, I have the same question. The repo is extremely useful: it provides good-quality results and is easy to set up and use compared to some purely research-oriented GitHub repos. This might be a naive question, but does this repo even include the code needed to train the .bin file? I would love to recreate this in other languages, so it would be extremely helpful if a retraining guide could be included in the README, with links to the source datasets.
@ugmSorcero please see fine-tuning parameters below
@thusithaC I will have to play around and think about how best to incorporate training the model from scratch, and I will get back to you on this. If you have any ideas about that, feel free to let us know.
Hi!
I would like to know the process of fine-tuning UniLM with inverted SQuAD (hardware, training time, number of steps, parameters, etc.).
Would that be possible?
Thanks in advance!
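For anyone else trying to reproduce this, a common first step is building the "inverted" SQuAD pairs: instead of (context, question) → answer, the model is fine-tuned on (answer + context) → question. Below is a minimal sketch of that inversion, assuming the public SQuAD v1.1 JSON schema; the `[SEP]` separator and the function name `invert_squad` are my own choices, not from this repo.

```python
# Hypothetical sketch: convert SQuAD-format data into "inverted" source/target
# pairs for question generation. Field names follow the public SQuAD v1.1 JSON
# schema; joining answer and context with "[SEP]" is an assumption, not this
# repo's actual preprocessing.

def invert_squad(squad):
    """Return (source, target) pairs: source = 'answer [SEP] context', target = question."""
    pairs = []
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                for answer in qa["answers"]:
                    source = f"{answer['text']} [SEP] {context}"
                    pairs.append((source, qa["question"]))
    return pairs

# Tiny inline example in SQuAD v1.1 shape.
sample = {
    "data": [{
        "title": "Example",
        "paragraphs": [{
            "context": "UniLM is a unified pre-trained language model.",
            "qas": [{
                "question": "What is UniLM?",
                "answers": [{"text": "a unified pre-trained language model",
                             "answer_start": 9}],
            }],
        }],
    }]
}

pairs = invert_squad(sample)
print(pairs[0][1])  # -> What is UniLM?
```

Each resulting pair can then be fed to a seq2seq fine-tuning loop; the actual hyperparameters (steps, learning rate, hardware) would still need to come from the maintainers.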