
Export models #34

Merged
merged 41 commits from export into main on Feb 2, 2024
Conversation

@frostedoyster (Collaborator) commented on Jan 20, 2024

Exporting models


📚 Documentation preview 📚: https://metatensor-models--34.org.readthedocs.build/en/34/

Base automatically changed from finalize-training to main January 24, 2024 10:37
@frostedoyster frostedoyster marked this pull request as ready for review January 26, 2024 20:53
@PicoCentauri (Contributor) left a comment

Some first comments. I will continue asap.

help="Filename of the exported model (default: %(default)s).",
)


- def export_model(model: str, output: str) -> None:
+ def export_model(model: str, output: Optional[str]) -> None:
Contributor:

If this should be optional here, please provide the default argument. I just saw that this is the same in the eval_model code. Can you maybe fix this there as well?

Collaborator (Author):

Of course
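As a sketch of the requested fix, the optional argument could carry its default directly in the signature. This is illustrative only: the default filename is hypothetical, the real default lives in the argparse setup, and the function returns the resolved path here purely so the behavior is observable.

```python
from typing import Optional


def export_model(model: str, output: Optional[str] = None) -> str:
    """Resolve the output path for an exported model (illustrative sketch)."""
    if output is None:
        # hypothetical fallback; in the real CLI the default comes from the parser
        output = "exported-model.pt"
    return output
```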


# The above script can be found in the `scripts` folder of the repository.

# Finally, the `metatensor-models export`, i.e.,
Contributor:

Should we maybe add a comment that, even though model.pt and exported-model.pt share the same file ending, the organization of their content is different? The first is the internal format, which allows for retraining, while the latter is a model in evaluation mode with compiled functions, which can only be used for running MD.

A question, though: can one also use the exported model with the eval script? If not, we should add a check and an error message in the eval script.

Collaborator (Author):

Both should be usable for eval
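The distinction between the two `.pt` files can be illustrated with plain PyTorch. This is a generic sketch of checkpoint vs. TorchScript export, not metatensor-models' actual export code, and the filenames just mirror the ones discussed above.

```python
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
tmpdir = tempfile.mkdtemp()

# Internal format: a checkpoint (here just a state dict) that allows retraining,
# but requires the original model class in order to be reloaded
ckpt_path = os.path.join(tmpdir, "model.pt")
torch.save(model.state_dict(), ckpt_path)

# Exported format: a compiled TorchScript module in evaluation mode,
# self-contained and ready to run (e.g. inside an MD engine)
export_path = os.path.join(tmpdir, "exported-model.pt")
torch.jit.save(torch.jit.script(model.eval()), export_path)

# Same file ending, different content:
restored = nn.Linear(4, 1)
restored.load_state_dict(torch.load(ckpt_path))  # needs the model class
runner = torch.jit.load(export_path)             # no Python class required
```

Both files end in `.pt`, but only the first can seed further training, while the second can be evaluated without access to the defining Python code.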

Contributor:

Okay, makes sense to add this here, maybe.

@@ -201,6 +200,7 @@ def train_model(options: DictConfig) -> None:
outputs=outputs,
)

logger.info("Calling model trainer")
Contributor:

What is our convention? When do we call it "model" and when "architecture"? I thought the model is really the trained (or to-be-trained) PyTorch object, and everything else we call the architecture.

Suggested change
logger.info("Calling model trainer")
logger.info("Calling architecture trainer")

Collaborator (Author):

Ok!

@PicoCentauri (Contributor) left a comment

Very nice. I think the major open question is whether we want to perform some consistency checks on the model when exporting.



# Load the model
loaded_model = load_model(model)

# Export the model
Contributor:

Should we do some more checks here? Especially that the units are not None, maybe?

Collaborator (Author):

I think the idea was that, if no units are available, the numbers will be passed on to the engine as they are.
@Luthaf, is this true?

Contributor:

Ahh yes, I think you are right.

Member:

yes!

Member:

Although we might want to warn about this here.
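A warning along these lines could look like the following; `check_units` is a hypothetical helper, not an existing function in the codebase, and the unit names are just placeholders.

```python
import warnings
from typing import Optional


def check_units(length_unit: Optional[str], energy_unit: Optional[str]) -> None:
    """Warn, rather than fail, when a model is exported without units."""
    if length_unit is None or energy_unit is None:
        warnings.warn(
            "No units were provided for this model; its outputs will be "
            "passed on to the simulation engine as bare numbers"
        )
```

A warning (instead of an error) matches the behavior described above: unitless numbers are still forwarded to the engine unchanged.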

# since the second argument is missing,
# this calculates all the available properties:
predictions = loaded_model(structure_list)
# this calculates all the properties that the model is capable of predicting:
Contributor:

Can you add the Optional type hint to this function as well?

Collaborator (Author):

Actually, it will never receive None because of the defaults in the parsers. I think it's fine this way, right?
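With argparse, a non-None default on the flag does indeed guarantee the function never receives None. The help string below mirrors the one quoted in the diff above; the flag names and default filename are otherwise assumptions for illustration.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("model")
parser.add_argument(
    "-o",
    "--output",
    default="exported-model.pt",
    help="Filename of the exported model (default: %(default)s).",
)

# Even when --output is omitted, args.output holds the default string, never None
args = parser.parse_args(["model.pt"])
```

This is why the `Optional` annotation matters for direct Python callers of the function, but not for the CLI path.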

@PicoCentauri PicoCentauri merged commit 74ff6a4 into main Feb 2, 2024
8 checks passed
@PicoCentauri PicoCentauri deleted the export branch February 2, 2024 12:55