
Running AlphaFold on models larger than AMD GPU available memory #958

Open
dipietrantonio opened this issue Jul 23, 2024 · 4 comments
@dipietrantonio

Dear developers,

I am facing an out-of-memory issue (follow the link for a detailed log) when running ColabFold (which is a repackaging of AlphaFold) on AMD GPUs.

I found a comment on this repository suggesting that GPU managed memory can be used to supplement GPU memory with host memory when the former is not large enough to hold the dataset. However, the same configuration does not work on AMD GPUs.
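For context, the managed-memory configuration usually referenced for AlphaFold on NVIDIA GPUs is set through environment variables documented in the AlphaFold README; a minimal sketch, assuming a CUDA-backed JAX build, looks like:

```shell
# Enable CUDA unified (managed) memory so JAX/XLA allocations can
# spill into host RAM when device memory is exhausted.
export TF_FORCE_UNIFIED_MEMORY=1
# Let the XLA client allocate up to 4x the physical GPU memory.
export XLA_PYTHON_CLIENT_MEM_FRACTION=4.0
```

As the issue describes, these variables rely on CUDA unified memory and do not have the intended effect on AMD (ROCm) GPUs.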

Do you have any suggestions on how to overcome the memory issue?

Thank you!

@cgseitz

cgseitz commented Oct 10, 2024

What have you tried so far? When I faced an issue like this, I used the reduced_dbs preset, which lowered the memory required and still generated useful outputs for my project. See here
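For reference, the preset mentioned corresponds to AlphaFold's db_preset option, which selects the smaller sequence databases. A hedged sketch of an invocation (the FASTA and output paths are placeholders, not taken from this thread, and a real run needs the usual database and data-dir flags as well):

```shell
# Run AlphaFold against the reduced databases; this lowers the memory
# and disk footprint at some cost in MSA depth. Paths are placeholders.
python run_alphafold.py \
  --db_preset=reduced_dbs \
  --fasta_paths=target.fasta \
  --output_dir=/tmp/alphafold_out
```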

@dipietrantonio

We are working with AMD on this, because their JAX implementation does not support unified memory, which would let us take advantage of host memory.

@smilenaderi

@dipietrantonio Can you please share how you were able to run AlphaFold on AMD GPUs? Thanks

@dipietrantonio

@smilenaderi we can only run it on small problem sizes, i.e. those that do not require more memory than is available on the GPUs. We use it through the ColabFold repackaging. Here is the Dockerfile:

https://github.com/PawseySC/pawsey-containers/blob/cdp-colabfold/colabfold/colabfold.dockerfile
