How does MACE implement multi-GPU LAMMPS inference? #566

Answered by ilyes319
SyntaxSmith asked this question in Q&A

Communicating internal messages between domains is not a requirement; it just makes things more efficient by reducing the impact of ghost atoms. To be clear, the impact of ghost atoms is not a product of message passing per se, just of a large receptive field. If you build an Allegro model with a 10 Å receptive field, you will have a similar problem. With a message-passing model, however, you can do better.
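
To make the ghost-atom cost concrete, here is a minimal back-of-the-envelope sketch (not code from MACE or the LAMMPS interface). It assumes a cubic subdomain of side L per GPU, a uniform atom density, and a ghost halo whose width equals the model's receptive field:

```python
def ghost_fraction(L: float, receptive_field: float) -> float:
    """Ratio of ghost atoms to local atoms for a cubic subdomain.

    Local atoms fill L^3; ghost atoms fill the halo shell
    (L + 2r)^3 - L^3, where r is the receptive field.
    """
    local = L ** 3
    ghost = (L + 2 * receptive_field) ** 3 - local
    return ghost / local

# A 2-layer message-passing model with a 5 A cutoff has a ~10 A receptive
# field, the same as a strictly local model built with a 10 A cutoff.
for L in (20.0, 40.0, 80.0):  # subdomain edge length in Angstrom
    print(f"L = {L:5.1f} A  ->  ghost/local = {ghost_fraction(L, 10.0):.2f}")
```

For a 20 Å subdomain the ghost region holds about 7x as many atoms as the local region; at 80 Å it is still roughly equal. If internal messages were communicated between domains instead, the halo would only need to be one cutoff wide (5 Å here) rather than the full receptive field, which is the efficiency gain described above.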

Work is being done on an interface that includes internal inter-GPU communication for MACE in LAMMPS.
