Export models #34
Conversation
Some first comments. I will continue ASAP.
     help="Filename of the exported model (default: %(default)s).",
 )

-def export_model(model: str, output: str) -> None:
+def export_model(model: str, output: Optional[str]) -> None:
If this should be optional here, please provide the default argument. I just saw that this is the same in the `eval_model` code. Can you maybe fix it there as well?
Of course
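A minimal sketch of the pattern being asked for: an `Optional` parameter with an actual default, so callers may omit it. The function body and the naming scheme below are illustrative assumptions, not the repository's real code.

```python
from typing import Optional


def default_output_name(model: str) -> str:
    # Illustrative naming scheme only, e.g. "model.pt" -> "exported-model.pt".
    return "exported-" + model


def export_model(model: str, output: Optional[str] = None) -> None:
    # Providing a default makes the Optional annotation meaningful:
    # callers may omit `output`, and a name is derived from the input.
    if output is None:
        output = default_output_name(model)
    # ... actual export logic would go here ...
    print(f"exporting {model!r} to {output!r}")
```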

# The above script can be found in the `scripts` folder of the repository.

# Finally, the `metatestor-models export`, i.e.,
Should we maybe add a comment that even though the file ending of `model.pt` and `exported-model.pt` is the same, the organization of the content is different? The first one is the internal format, which allows for retraining, while the latter is a model in evaluation mode with compiled (?) functions that can only be used for running MD.
Question though: can one also use the exported one for the eval script? If not, we should add a check and an error message in the eval script.
Both should be usable for eval
Okay, makes sense to add this maybe here.
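Regarding the check in the eval script: a rough sketch of how one might tell the two `.pt` flavours apart without fully loading them. The archive layout used here (TorchScript archives carry a `code/` tree with the compiled functions, plain checkpoints do not) is an assumption about PyTorch internals, not something this repository defines, so treat it as a heuristic only.

```python
import zipfile


def looks_like_torchscript(path: str) -> bool:
    """Heuristic: guess whether a .pt file is a TorchScript export.

    Both torch.save checkpoints and torch.jit.save exports are zip
    archives, but only the TorchScript one ships a "code/" tree with
    the compiled functions. This layout is a PyTorch internal detail
    and may change between versions.
    """
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as archive:
        names = archive.namelist()
    return any("code/" in name for name in names)
```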
@@ -201,6 +200,7 @@ def train_model(options: DictConfig) -> None:
     outputs=outputs,
 )

 logger.info("Calling model trainer")
What is our naming convention? When do we call it "model" and when "architecture"? I thought a model is really the trained (or to-be-trained) PyTorch object, and everything else we call an architecture.
-logger.info("Calling model trainer")
+logger.info("Calling architecture trainer")
Ok!
Very nice! I think the major open question is whether we want to perform some consistency checks on the model when exporting or not.
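A minimal sketch of what such pre-export consistency checks could look like. The specific checks and attribute names below are generic PyTorch conventions picked for illustration; the real checks would depend on the model's actual capabilities interface.

```python
def check_before_export(model) -> None:
    """Raise with a readable message if the model looks unfit for export.

    Sketch only: collects all problems first so the user sees every
    issue at once instead of fixing them one at a time.
    """
    problems = []
    # nn.Module exposes a `training` flag; an exported model should be
    # in evaluation mode.
    if getattr(model, "training", False):
        problems.append("model is in training mode; call model.eval() first")
    if not callable(model):
        problems.append("model is not callable, so forward() would fail")
    if problems:
        raise ValueError("cannot export model:\n- " + "\n- ".join(problems))
```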
# Load the model
loaded_model = load_model(model)

# Export the model
Should we do some more checks here? Especially that the units are not None, maybe?
I think the idea was that, if no units are available, the numbers will be passed on to the engine as they are. @Luthaf, is this true?
Ahh yes I think you are right.
yes!
Although we might want to warn about this here.
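A warning along those lines could look like this. The `length_unit` name is a placeholder for whatever metadata the model actually carries; only the behaviour (pass numbers through unchanged, but say so loudly) is taken from the discussion above.

```python
import warnings


def warn_on_missing_units(length_unit) -> None:
    # If no unit is given, values are forwarded to the engine unchanged;
    # make that explicit instead of silent.
    if length_unit is None:
        warnings.warn(
            "no length unit set on the model; numbers will be passed "
            "on to the engine as they are",
            stacklevel=2,
        )
```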
-# since the second argument is missing,
-# this calculates all the available properties:
+# this calculates all the properties that the model is capable of predicting:
 predictions = loaded_model(structure_list)
Can you add the `Optional` type hint to this function as well?
Actually it will never receive None, because of the defaults in the parsers. I think it's fine this way, right?
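For reference, this is the argparse mechanism the reply relies on: a default on the option means the parsed value is never `None`. The help string is the one from the diff; the exact default value below is an assumption for illustration.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("model")
parser.add_argument(
    "-o",
    "--output",
    default="exported-model.pt",  # assumed default value, not from the repo
    help="Filename of the exported model (default: %(default)s).",
)

# Even when -o is omitted on the command line, args.output is a string,
# never None, so the function behind it will not receive None either.
args = parser.parse_args(["model.pt"])
```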
Exporting models
📚 Documentation preview 📚: https://metatensor-models--34.org.readthedocs.build/en/34/