I'm currently training a model and have been tracking the loss and accuracy metrics.
However, I've noticed that while I can calculate these metrics for the training data, there isn't a straightforward way to calculate the loss and accuracy for the validation data within the current workflow.
Is there any plan to add support for computing these metrics on validation data in the near future? This feature would be extremely helpful for better evaluating model performance during the development process.
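For reference, this is roughly the kind of validation step I have in mind, sketched in plain PyTorch; `model`, `val_loader`, and `criterion` are placeholders of my own, not WeSpeaker's actual training-loop API:

```python
import torch

@torch.no_grad()
def evaluate(model, val_loader, criterion, device):
    """Average loss and accuracy over a held-out validation set."""
    model.eval()
    total_loss, correct, seen = 0.0, 0, 0
    for feats, labels in val_loader:
        feats, labels = feats.to(device), labels.to(device)
        logits = model(feats)  # assumes the model returns class logits
        total_loss += criterion(logits, labels).item() * labels.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        seen += labels.size(0)
    model.train()
    return total_loss / seen, correct / seen
```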
Thank you for considering this request.
Thank you for your question!
Actually, when we first started to create and maintain WeSpeaker, we considered whether to add a validation process during training. We found that the model with the minimal validation loss or the lowest EER on the dev set was not necessarily the best model on the test set, while models trained for enough steps generalized well to unseen data. So in WeSpeaker, we only ensure that enough epochs are trained and average the checkpoints from the last 10 epochs for robustness.
If you are not sure whether the model has converged, you can check the loss trend or simply train for more epochs.
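For readers unfamiliar with the averaging step mentioned above, here is a minimal sketch of checkpoint averaging; it is not WeSpeaker's actual tooling, and the file paths and epoch numbers are made up:

```python
import torch

def average_checkpoints(paths):
    """Elementwise average of the parameters in several checkpoint files."""
    avg = None
    for path in paths:
        # Assumes each file stores a plain state_dict.
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    for k in avg:
        avg[k] /= len(paths)
    return avg

# Hypothetical paths: the last 10 epochs of a 150-epoch run.
paths = [f"exp/model_{epoch}.pt" for epoch in range(141, 151)]
torch.save(average_checkpoints(paths), "exp/avg_model.pt")
```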
> Is there any plan to add support for computing these metrics on validation data in the near future? This feature would be extremely helpful for better evaluating model performance during the development process.
We will try to support this, or EER-based validation, in the future.