Multi-GPU simulation example #170

Open
charlesbmi opened this issue Oct 30, 2023 · 0 comments

Describe the new feature or enhancement

Please provide a clear and concise description of what you want to add or change.

A full-head ultrasound simulation (such as Aubry et al. benchmark 8) takes a very long time to run on CPU, but is too large to fit in the memory of a single existing GPU (which would otherwise speed up the simulation considerably).
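To illustrate why a full-head grid outgrows a single GPU, here is a rough memory-footprint estimate. The grid size and number of working arrays below are assumptions for illustration, not benchmark-8 specifics:

```python
def sim_memory_gb(nx, ny, nz, n_fields=10, bytes_per_value=4):
    """Memory (GB) for n_fields single-precision 3D arrays on an nx*ny*nz grid."""
    return nx * ny * nz * n_fields * bytes_per_value / 1e9

# An assumed ~1000^3 full-head grid with ~10 working arrays already needs
# 40 GB, beyond most single GPUs; memory grows cubically with resolution.
print(sim_memory_gb(1000, 1000, 1000))  # 40.0
```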

Please describe how you would use this new feature.

Users often want to simulate full-head ultrasound experiments.

With multiple GPUs, I believe this could take <1 hour (extrapolating from smaller 3D simulations, e.g. #114 ).

Describe your proposed implementation

In theory, the NVIDIA HPC SDK should support running on multiple GPUs out of the box. However, large NDK simulations didn't "just work" on an AWS instance with multiple GPUs (NDK only uses the first GPU), so some setup is currently missing.
This will probably take some digging into stride's use of the NVIDIA HPC SDK, or asking its maintainer, Carlos, for support.
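For context, one common strategy for spreading a simulation over several GPUs is 1-D domain decomposition: split the grid along one axis, assign one slab per GPU, and exchange halo layers between neighbours each time step. The sketch below only computes the slab index ranges; it is a generic illustration, not NDK/stride code, and the `decompose` helper is hypothetical:

```python
def decompose(nz, n_gpus, halo=1):
    """Return (start, stop) index ranges, padded by halo layers, one per GPU."""
    base, extra = divmod(nz, n_gpus)
    slabs, start = [], 0
    for rank in range(n_gpus):
        # Distribute any remainder over the first `extra` ranks.
        stop = start + base + (1 if rank < extra else 0)
        # Pad each slab with halo layers for neighbour exchange.
        slabs.append((max(0, start - halo), min(nz, stop + halo)))
        start = stop
    return slabs

# Neighbouring slabs overlap by the halo width; together they cover the grid.
print(decompose(10, 4))
```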

Describe possible alternatives

If you've suggested an implementation above, list here any alternative implementations you can think of, and brief comments explaining why the chosen implementation is better.

  • Alternative: run large experiments on a CPU instance with very large RAM. This works, but is slow.