Consider using a new distributed batch training logic #68

Open
Tracked by #63
Innixma opened this issue Jul 18, 2024 · 0 comments
Innixma commented Jul 18, 2024

When fitting all the models for a repository, we originally used AutoMLBenchmark (AMLB), because AMLB has well-tested logic for running AutoGluon. However, AMLB can be intimidating for those unfamiliar with it. To make it easier for researchers to fit their own models on our datasets and extend our repos, it might be better to provide a more generic package for fitting models, particularly one that is simpler to debug and test locally (an area where AMLB is unfortunately lacking). A sketch of what such a package's core loop could look like is shown below.
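
As a rough illustration (not TabRepo's actual pipeline), here is a minimal sketch of a distributed batch-fitting loop, assuming Ray for distribution and AutoGluon's `TabularPredictor` as the model runner. The `load_task` helper and the task list are hypothetical stand-ins; a real version would pull TabRepo's datasets and folds.

```python
# Minimal sketch of a generic distributed batch-fitting loop.
# Assumes Ray + AutoGluon; `load_task` and `tasks` are hypothetical stand-ins.
import ray
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from autogluon.tabular import TabularPredictor


def load_task(dataset_name: str, fold: int):
    # Stand-in loader: in practice this would fetch a TabRepo dataset/fold.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    df = X.assign(target=y)
    train, test = train_test_split(df, test_size=0.2, random_state=fold)
    return train, test, "target"


@ray.remote
def fit_task(dataset_name: str, fold: int, hyperparameters: dict) -> dict:
    # Fit one model config on one (dataset, fold) pair and score it.
    train_data, test_data, label = load_task(dataset_name, fold)
    predictor = TabularPredictor(label=label).fit(
        train_data,
        hyperparameters=hyperparameters,  # e.g. {"GBM": {}} fits only LightGBM
    )
    return {
        "dataset": dataset_name,
        "fold": fold,
        "scores": predictor.evaluate(test_data),
    }


if __name__ == "__main__":
    ray.init()  # starts a local cluster by default, so this is debuggable locally
    tasks = [("breast_cancer", 0), ("breast_cancer", 1)]  # hypothetical task list
    futures = [fit_task.remote(d, f, {"GBM": {}}) for d, f in tasks]
    for result in ray.get(futures):
        print(result)
```

The same script scales from a laptop to a cluster by pointing `ray.init()` at a remote cluster, which is the kind of local-first debuggability AMLB lacks.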

I have seen multiple papers running their methods on TabZilla, so it might be worth understanding what TabZilla's extension process looks like and whether we can apply a similar process here.

Innixma added this to the TabRepo 2.0 milestone Jul 18, 2024