Export models #34
Changes from all commits
New file, `@@ -0,0 +1,27 @@`:

```python
# Since torch.jit.save cannot handle Labels.single(), we need to replace it with
# Labels(names=["_"], values=_dispatch.zeros_like(block.values, (1, 1)))
# in metatensor-operations. This is a hacky way to do it.

import os
import metatensor.operations

file = os.path.join(
    os.path.dirname(metatensor.operations.__file__),
    "reduce_over_samples.py"
)

# Find the line that contains "Labels.single()"
# and replace "Labels.single()" with
# "Labels(names=["_"], values=_dispatch.zeros_like(block.values, (1, 1)))"
with open(file, "r") as f:
    lines = f.readlines()
for i, line in enumerate(lines):
    if "samples_label = Labels.single()" in line:
        lines[i] = line.replace(
            "samples_label = Labels.single()",
            "samples_label = Labels(names=[\"_\"], values=_dispatch.zeros_like(block.values, (1, 1)))"
        )
        break

with open(file, "w") as f:
    f.writelines(lines)
```
```diff
@@ -42,7 +42,7 @@ def _add_eval_model_parser(subparser: argparse._SubParsersAction) -> None:
     )
 
 
-def eval_model(model: str, structures: str, output: str = "output.xyz") -> None:
+def eval_model(model: str, structures: str, output: str) -> None:
     """Evaluate a pretrained model.
 
     ``target_property`` will be predicted on a provided set of structures. Predicted
@@ -57,8 +57,7 @@ def eval_model(model: str, structures: str, output: str = "output.xyz") -> None:
     loaded_model = load_model(model)
     structure_list = read_structures(structures)
 
-    # since the second argument is missing,
-    # this calculates all the available properties:
-    predictions = loaded_model(structure_list)
+    # this calculates all the properties that the model is capable of predicting:
+    predictions = loaded_model(structure_list, loaded_model.capabilities.outputs)
 
     write_predictions(output, predictions, structure_list)
```

Review comment: Can you add the

Reply: Actually it will never receive None, because of the defaults in the parsers. I think it's fine this way, right?
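The reviewer's point that `output` never arrives as `None` follows from how argparse defaults work; a minimal sketch (hypothetical flag names, not the actual parser in this PR):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("model")
parser.add_argument("structures")
# The default lives on the parser rather than on eval_model itself,
# so the function can drop its own default and still always get a value.
parser.add_argument("-o", "--output", default="output.xyz")

args = parser.parse_args(["model.pt", "structures.xyz"])
print(args.output)  # -> output.xyz
```

As long as every entry point goes through the parser, removing the default from the function signature cannot introduce a missing-argument case.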
Review comment: Should we maybe add a comment that, even though the file ending of `model.pt` and `exported-model.pt` is the same, the organization of the content is different? The first one is the internal format, which allows for retraining, while the latter is a model in evaluation mode, with compiled (?) functions, which can only be used for running MD.

Question though: can one also use the first one for the eval script? If not, we should add a check and an error message in the eval script.

Reply: Both should be usable for eval.

Reply: Okay, makes sense to add this maybe here.
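If such a check ever becomes necessary, one heuristic (an assumption about PyTorch's archive layouts, not metatensor-specific API) is that zip archives written by `torch.jit.save` contain a `constants.pkl` entry, while zip-based `torch.save` checkpoints do not; a sketch using only the standard library:

```python
import zipfile


def looks_like_torchscript(path: str) -> bool:
    """Heuristically decide whether `path` is a torch.jit.save archive.

    Assumption: TorchScript archives are zip files whose entries include
    a constants.pkl, while torch.save checkpoints (the zip-based ones)
    only carry data.pkl and friends.
    """
    if not zipfile.is_zipfile(path):
        return False  # e.g. a legacy (non-zip) torch.save file
    with zipfile.ZipFile(path) as zf:
        return any(name.endswith("constants.pkl") for name in zf.namelist())
```

Since both formats should be usable for eval, such a check would route each file to the matching loader (and give a clear error message for anything else) rather than reject one of them.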