
Integrating CUDA properly in OpenQuantumBase #40

Open
naezzell opened this issue Dec 8, 2020 · 2 comments
Assignees
Labels
GPU improvements related to GPU acceleration

Comments

@naezzell (Member) commented Dec 8, 2020

To get CUDA to work in the DiffEq solvers of OpenQuantumTools with minimal changes, I had to add support for the CuArray type in OpenQuantumBase. In particular, the initial state u0 of an AbstractAnnealing (which bundles a Hamiltonian H with u0) is now allowed to be a CuArray.

```julia
import CUDA.CuArray

abstract type AbstractAnnealing{hType<:AbstractHamiltonian, uType<:Union{Vector,Matrix,CuArray}} end
```
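For context, a usage sketch of what this change enables (constructor signatures as in the OpenQuantumTools documentation; not tested here, since it needs a CUDA-capable GPU):

```julia
using CUDA
using OpenQuantumBase

# Standard single-qubit annealing Hamiltonian H(s) = (1 - s)σx + sσz.
H = DenseHamiltonian([(s) -> 1 - s, (s) -> s], [σx, σz])

# The initial state can now live on the GPU as a CuArray.
u0 = CuArray(ComplexF32[1.0, 0.0])
annealing = Annealing(H, u0)
```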

This brings up several issues. The two most important are
(I) How should we integrate CUDA with OpenQuantumBase.jl?
(II) Is there a way to make CUDA an optional dependency?

My proposed solution to (I) is to add CuHamiltonian and CuAnnealing types that subtype the corresponding abstract types. When one of these is passed to a solver in OpenQuantumTools, multiple dispatch on the "Cu" types selects the GPU-accelerated solver.

Pros:
(1) CuHamiltonian/CuAnnealing data can be optimized for the GPU (e.g., stored as Float32 and whatever else is necessary)
(2) Solvers get GPU support via multiple dispatch (no additional arguments or "separate GPU solvers")
(3) If a problem is partially solved in one GPU run, the final state uf is already a CuArray, so it "natively" supports being used as u0 in future runs

Cons:
(1) Users have to construct separate CuH/CuA types if they want to run on the GPU.
(2) CUDA becomes a hard dependency.
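The dispatch idea above can be sketched with placeholder types (names suffixed "Sketch" to emphasize they are not the real OpenQuantumBase API; this runs without a GPU):

```julia
# Self-contained sketch of the multiple-dispatch idea; all names are
# placeholders, not the real OpenQuantumBase API.
abstract type AbstractHamiltonianSketch end

struct DenseHamiltonianSketch <: AbstractHamiltonianSketch end  # CPU data
struct CuHamiltonianSketch <: AbstractHamiltonianSketch end     # GPU (CuArray) data

struct AnnealingSketch{hType<:AbstractHamiltonianSketch,uType}
    H::hType
    u0::uType
end

# One solver name; the backend is chosen purely by the Hamiltonian type.
solve(A::AnnealingSketch) = "CPU solver"
solve(A::AnnealingSketch{<:CuHamiltonianSketch}) = "GPU solver"

solve(AnnealingSketch(DenseHamiltonianSketch(), [1.0, 0.0]))  # → "CPU solver"
solve(AnnealingSketch(CuHamiltonianSketch(), [1.0, 0.0]))     # → "GPU solver"
```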

@neversakura (Collaborator)

I agree with your proposal. Here are some of my thoughts:

(1) I think we should still keep separate CuDenseHamiltonian and CuSparseHamiltonian types to distinguish CuArray from CuSparseArray storage.
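A rough sketch of that split using CUDA.jl's dense and sparse GPU matrix types (the field layout is illustrative, not the actual OpenQuantumBase internals, and running it requires a CUDA-capable setup):

```julia
using CUDA
using CUDA.CUSPARSE: CuSparseMatrixCSC

abstract type AbstractCuHamiltonian{T<:Number} end

struct CuDenseHamiltonian{T} <: AbstractCuHamiltonian{T}
    f::Vector{Function}                      # time-dependence functions f(s)
    m::Vector{CuMatrix{Complex{T}}}          # dense operators on the GPU
end

struct CuSparseHamiltonian{T} <: AbstractCuHamiltonian{T}
    f::Vector{Function}
    m::Vector{CuSparseMatrixCSC{Complex{T}}} # sparse operators on the GPU
end
```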

(2) CuAnnealing and Annealing could share the same constructor, so the user only needs to define a CuH type to use the GPU. Because a Hamiltonian is necessary for every simulation, using it as the flag for CPU/GPU seems a reasonable choice.
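A sketch of the shared-constructor idea (placeholder names again; `copy` stands in for `CuArray(u0)` so the sketch runs without a GPU):

```julia
# Placeholder types; not the real OpenQuantumBase API.
abstract type HamiltonianSketch end
struct CPUHamiltonianSketch <: HamiltonianSketch end
struct CuHamiltonianSketch <: HamiltonianSketch end

struct AnnealingSketch{hType,uType}
    H::hType
    u0::uType
end

# The Hamiltonian type alone decides where u0 lives.
to_device(u0, ::HamiltonianSketch) = u0          # CPU: leave u0 as-is
to_device(u0, ::CuHamiltonianSketch) = copy(u0)  # stand-in for CuArray(u0)

# One shared constructor for both backends:
make_annealing(H::HamiltonianSketch, u0) = AnnealingSketch(H, to_device(u0, H))
```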

(3) In the future, if the GPU library grows, we could split it into a separate component package. For now, loading CUDA directly seems a reasonable choice. The drawbacks of making CUDA a hard dependency are: 1. CUDA must be loaded every time OpenQuantumTools is loaded. This consumes more memory, but I don't think the impact will be large because of the JIT compiler. 2. There is the additional complication of conditional usage: we may need to check manually whether CUDA is functional each time OpenQuantumTools is launched. I am not sure how this will affect OpenQuantumTools, so I will open a separate issue for this topic.
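On point 2, CUDA.jl already exposes a runtime check, `CUDA.functional()`, so the manual check could look roughly like this (a sketch; the guard function is hypothetical):

```julia
using CUDA

# Decide once, at load time, whether the GPU path is usable.
const GPU_OK = CUDA.functional()

# Hypothetical guard to call before constructing any "Cu" type:
function check_gpu()
    GPU_OK || error("CUDA is installed but no functional GPU was found; " *
                    "use the CPU types instead.")
    return nothing
end
```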

@neversakura (Collaborator)

CUDA conditional usage: #45.
