To get CUDA to work in the DiffEq solvers of OpenQuantumTools with minimal changes, I had to add support for the `CuArray` type in OpenQuantumBase. In particular, the initial state `u0` of an annealing `A::AbstractAnnealing` (constructed from `H` and `u0`) is now allowed to be a `CuArray`:
```julia
abstract type AbstractAnnealing{hType<:AbstractHamiltonian,uType<:Union{Vector,Matrix,CuArray}} end
```
This brings up several issues. The two most important are:
(I) How should we integrate CUDA with OpenQuantumBase.jl?
(II) Is there a way to make CUDA an optional dependency?
My proposed solution to (I) is to define CuHamiltonian and CuAnnealing types that subtype the corresponding abstract types. When one of these is passed to a solver in OpenQuantumTools, we simply use multiple dispatch on the "Cu" types to select GPU-accelerated solvers.
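A minimal sketch of what this could look like (all concrete names and fields here are illustrative stand-ins, not the actual OpenQuantumBase API):

```julia
using CUDA

# Illustrative stand-ins for the OpenQuantumBase abstract types.
abstract type AbstractHamiltonian{T<:Number} end
abstract type AbstractAnnealing{hType<:AbstractHamiltonian,uType} end

# GPU-specific subtypes: data lives on the device as CuArrays,
# stored as Float32 by default for GPU efficiency.
struct CuHamiltonian{T} <: AbstractHamiltonian{T}
    f::Vector{Function}              # time-dependent coefficient functions
    m::Vector{CuMatrix{Complex{T}}}  # Hamiltonian components on the device
end

struct CuAnnealing{hType,uType} <: AbstractAnnealing{hType,uType}
    H::hType
    u0::uType
end

# Solvers dispatch on the concrete type: one generic CPU method,
# one specialized GPU method, no extra keyword arguments needed.
solve(A::AbstractAnnealing, tf) = @info "CPU solver path"
solve(A::CuAnnealing, tf)       = @info "GPU solver path"
```

With this layout, existing user code is unchanged; constructing a `CuAnnealing` instead of an `Annealing` is the only switch needed to route a problem to the GPU path.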
Pros:
(1) CuHamiltonian/CuAnnealing data can be optimized for the GPU (e.g. `Float32` storage and whatever else is necessary)
(2) Solvers get GPU support via multiple dispatch (no additional arguments or "separate GPU solvers")
(3) If a problem is partially solved in one GPU run, the final state `uf` is a `CuArray`, so it can be reused "natively" as `u0` in future runs
Cons:
(1) Users have to define separate CuH/CuA types if they want to run on the GPU.
(2) CUDA becomes a hard (non-optional) dependency.
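To illustrate point (3) under Pros, a user could chain runs without any host/device copies. This is a hypothetical usage sketch, assuming the `CuAnnealing` type above, an already-constructed `CuHamiltonian` `H`, and a solver that returns the final state on the device:

```julia
using CUDA

u0  = CuArray(ComplexF32[1, 0])  # initial state already on the GPU
A   = CuAnnealing(H, u0)
sol = solve(A, 10.0)

uf = sol.u[end]          # final state is still a CuArray on the device
A2 = CuAnnealing(H, uf)  # reuse directly as u0; no copy back to the host
```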
I agree with your proposal. Here are some of my thoughts:
(1) I think we should still keep CuDenseHamiltonian and CuSparseHamiltonian to distinguish CuArray and CuSparseArray.
(2) CuA and Annealing could share the same constructor, so the user only needs to define a CuH type to use the GPU. Because a Hamiltonian is necessary for every simulation, using it as the flag for CPU/GPU seems a reasonable choice.
(3) In the future, if the GPU library grows, we could split it into a separate component package. For now, loading CUDA directly seems reasonable. The drawbacks of making CUDA a hard dependency are: 1. We need to load CUDA every time OpenQuantumTools is loaded. This consumes more memory, but I don't expect a huge impact because of the JIT compiler. 2. Conditional usage adds complexity: we may need to manually check whether CUDA is functional each time OpenQuantumTools is launched. I am not sure how this will impact OpenQuantumTools, so I will open a separate issue for this topic.
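On question (II), one possible way to avoid a hard dependency is Julia's conditional-loading machinery: with Requires.jl (or, on Julia ≥ 1.9, package extensions), the CUDA-specific methods are compiled only when the user loads CUDA themselves. A sketch using package extensions (the extension name is hypothetical; the UUID shown is CUDA.jl's registered UUID):

```toml
# Project.toml of OpenQuantumBase.jl (sketch)
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

[extensions]
OpenQuantumBaseCUDAExt = "CUDA"
```

```julia
# ext/OpenQuantumBaseCUDAExt.jl (sketch)
module OpenQuantumBaseCUDAExt

using OpenQuantumBase, CUDA

# GPU-specific method definitions go here; this module is loaded
# automatically only when the user runs `using CUDA` in their session.

end
```

This would address both drawbacks above: CUDA is neither loaded nor required unless the user opts in, and the functionality check is delegated to the user's own `using CUDA`.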
(The relevant definitions are in OpenQuantumBase.jl/src/OpenQuantumBase.jl at lines 11 and 48, commit e6778bc.)