
Add a Python evaluator? #281

Open
frostedoyster opened this issue Jul 3, 2024 · 0 comments
Labels
Infrastructure: Miscellaneous (general infrastructure issues), Priority: Medium (important issues to address after high priority)

Comments

@frostedoyster (Collaborator)
eval is great to have, but users often want to do their own evaluation of the model, with more complicated error metrics, splitting of the test set into different subsets, etc.
Given the common interface of models after exporting, it would be relatively simple to provide a Python evaluator that users can call for custom evaluation and that we could also use internally for eval. I think a small tutorial would be the best way to expose it to users; a rough sketch of what such an evaluator could look like is below.
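A minimal sketch of the idea, purely illustrative: the `exported_model` callable, the `evaluate` signature, and the default metrics are all hypothetical placeholders, not the actual API of this package.

```python
# Illustrative sketch of a user-facing Python evaluator.
# `exported_model` is assumed to be a callable mapping one structure to a
# predicted scalar (e.g. an energy); the real exported-model interface may differ.
import numpy as np


def rmse(pred, target):
    """Root-mean-square error between two 1D arrays."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))


def evaluate(exported_model, structures, targets, subsets=None, metrics=None):
    """Run the model on `structures` and report metrics per subset.

    `subsets` maps a subset name to the indices it contains; if omitted,
    everything is evaluated as a single "all" subset. `metrics` maps a
    metric name to a function of (predictions, targets).
    """
    metrics = metrics or {"RMSE": rmse}
    subsets = subsets or {"all": np.arange(len(structures))}

    predictions = np.asarray([exported_model(s) for s in structures])
    targets = np.asarray(targets)

    results = {}
    for name, idx in subsets.items():
        results[name] = {
            metric_name: fn(predictions[idx], targets[idx])
            for metric_name, fn in metrics.items()
        }
    return results
```

With something like this, eval itself could become a thin wrapper that calls the same function with the default metrics, while users pass their own metric functions and test-set splits.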

frostedoyster added the Priority: Medium and Infrastructure: Miscellaneous labels on Jul 3, 2024