
Introduce Tests for the Multi-Transform #6

Open
cortner opened this issue Jan 14, 2022 · 5 comments
Labels: help wanted
Milestone: 1.0

Comments

@cortner
Member

cortner commented Jan 14, 2022

These new transforms are not currently tested. I'm not sure how to test them, and would be grateful if others could take this on.

@LarsSchaaf @davkovacs

@cortner cortner added this to the 1.0 milestone Jan 14, 2022
@cortner cortner assigned davkovacs and unassigned davkovacs Jan 14, 2022
@cortner cortner added the help wanted Extra attention is needed label Jan 14, 2022
@LarsSchaaf

What kind of tests would you be interested in? Tests of the performance of fits (RMSEs, MD stability, etc.), or tests that the transforms work as expected (more along the lines of the test you already wrote)?

Here are some dimer curves, where we can clearly see that the ACE has different cutoffs depending on the species. I can supply some RMSE plots as well, but these will only really be meaningful if I also optimise a fit with a species-independent cutoff for comparison.

[Figure: dimer curves showing species-dependent cutoffs]

@cortner
Member Author

cortner commented Jan 18, 2022

Unit tests should specify what the implementation is meant to achieve. They are just intended to check that the implementation is consistent with what you want it to do. E.g., my test of the symmetric basis checks that the gradient is indeed the gradient of the value, and that the basis is invariant under actions of the symmetry group. All of this is done for relatively small configurations and small basis sets, just for testing. That's the kind of test I'm envisioning.
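
A minimal, self-contained sketch of the two consistency checks described above, using a toy rotation-invariant "site energy" E(X) = sum_i |X_i| as a stand-in for the actual basis evaluation; in the real test these would be the basis value and gradient routines of this package.

using Test, LinearAlgebra

# toy stand-in for a basis / site-energy evaluation: sum of neighbour distances
E(X) = sum(norm, eachcol(X))
# analytic gradient of E with respect to the neighbour positions
dE(X) = reduce(hcat, [x / norm(x) for x in eachcol(X)])

X = randn(3, 8)   # 8 neighbours around a centre atom at the origin

# (1) the gradient is really the gradient of the value (central finite differences)
h = 1e-5
G_fd = similar(X)
for i in eachindex(X)
    Xp = copy(X); Xp[i] += h
    Xm = copy(X); Xm[i] -= h
    G_fd[i] = (E(Xp) - E(Xm)) / (2h)
end
@test norm(G_fd - dE(X), Inf) < 1e-7

# (2) invariance under the symmetry group, here a random rotation about the centre
Q, _ = qr(randn(3, 3))
@test E(Matrix(Q) * X) ≈ E(X)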

@cortner
Member Author

cortner commented Jan 18, 2022

For a transform, I test that the inverse and derivatives are correct, and that saving and loading works as expected. But that's about it. Maybe there are other things that should be tested. But what else should be tested for multi-transforms? Suppose that somebody made a PR to change the multi-transform implementation. What do you want to check before you accept the PR?
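
For concreteness, a hedged sketch of what such a single-transform test could look like. The names used here (PolyTransform, transform, transform_d, inv_transform, write_dict, read_dict) follow the ACE1/JuLIP-style interface and are assumptions; they would need to be adjusted to the actual API of this package.

using Test

t = PolyTransform(2, 1.1)   # example transform with r0 = 1.1 (assumed constructor)

for r in 0.8:0.2:4.0
    x = transform(t, r)
    # the inverse transform recovers r
    @test inv_transform(t, x) ≈ r
    # the analytic derivative matches a central finite difference
    h = 1e-5
    @test transform_d(t, r) ≈ (transform(t, r + h) - transform(t, r - h)) / (2h) rtol=1e-6
end

# saving and loading round-trips to an equivalent transform
t2 = read_dict(write_dict(t))
@test transform(t2, 2.0) ≈ transform(t, 2.0)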

@LarsSchaaf

LarsSchaaf commented Jan 25, 2022

I've jotted down some ideas.

What we want to test:

  1. The transform domain is correct:
     a. including transforms that depend on the species of the centre atom, e.g. (:C, :H) and (:H, :C)
  2. The transform r0 values are applied as intended.
  3. The multi-transform doesn't return an error for valid arguments (a rough sketch of this check follows after this list):
     a. One can specify only (:C, :H); there is no need to specify both (:C, :H) and (:H, :C).
     b. The species cutoffs and the transform dict can list keys in a different order, i.e. (:C, :H) in the transforms and (:H, :C) in the cutoffs [this currently fails: maybe it should? If so, we should mention in the docstring that the keys of the cutoff dict have to be the same as those of the transforms].
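
A rough sketch of how point 3 could be checked. It assumes the multi-transform is constructed from a Dict of per-pair transforms plus a Dict of per-pair cutoffs, as discussed above; the constructor name multitransform, its cutoffs keyword, and PolyTransform are assumptions about the API and may need to be adapted.

using Test

# one transform per species pair; (:H, :C) is deliberately omitted (point 3.a)
transforms = Dict( (:C, :C) => PolyTransform(2, 1.54),
                   (:C, :H) => PolyTransform(2, 1.10),
                   (:H, :H) => PolyTransform(2, 0.74) )

# cutoffs keyed in a different order than the transforms (point 3.b)
cutoffs = Dict( (:C, :C) => (0.5, 5.0),
                (:H, :C) => (0.5, 4.0),
                (:H, :H) => (0.5, 3.0) )

# constructing the multi-transform from these valid arguments should not throw
mt = multitransform(transforms; cutoffs = cutoffs)
@test mt !== nothing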

How we will test:

  1. Create a two-species potential (either by training a potential or by setting random coefficients, as per @davkovacs' suggestion) -> check dimer curves to see whether the potential is zero in the right domain (similar to the image posted above; see the sketch at the end of this comment).
     a. Check site energies along the dimer curves: site_energies(IP, train_data[1].at)
  2. Check that the derivative is maximal at the desired place?
  3. Check for errors when a set of valid arguments is passed to the multi-transform.

Create a potential with random coefficients:

c = rand(length(basis))
IP = JuLIP.MLIPs.combine(basis, c)
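
A hedged sketch of how point 1 above could become an automated check, building on the random-coefficient potential just shown. Here basis is the two-species basis from the setup, rcut is an assumed Dict of per-pair cutoffs, and make_dimer(s1, s2, r) is a hypothetical helper that builds a JuLIP Atoms object with one atom of each species separated by distance r; it also assumes the basis has no constant one-body term, so site energies vanish when the two atoms are out of range.

using JuLIP, Test

c  = rand(length(basis))
IP = JuLIP.MLIPs.combine(basis, c)

for (s1, s2) in ((:C, :C), (:C, :H), (:H, :H))
    rc = rcut[(s1, s2)]                  # assumed Dict of species-pair cutoffs
    # just outside the cutoff: every site energy should vanish
    at = make_dimer(s1, s2, rc + 0.1)    # hypothetical helper, see lead-in
    @test all(abs.(site_energies(IP, at)) .< 1e-12)
    # well inside the cutoff: with random coefficients the energies are generically nonzero
    at = make_dimer(s1, s2, 0.7 * rc)
    @test any(abs.(site_energies(IP, at)) .> 1e-12)
end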

@cortner
Member Author

cortner commented Jan 28, 2022

This all sounds very sensible. Thanks for the summary, and sorry for the slow reply. Give it a try in a PR?
