
Default to sparse arrays with 64-bits indices #2011

Open

ma-sadeghi opened this issue Jul 29, 2023 · 1 comment
Labels
enhancement New feature or request

Comments


ma-sadeghi commented Jul 29, 2023

MWE:

using CUDA, SparseArrays

I = [1, 2^32]
J = [1, 2^32]
V = [1.0, 1.0]
sp = sparse(I, J, V)
spc = cu(sp)  # InexactError: indices are narrowed to Int32 on the GPU

In my actual use case, my matrix was not even 2^32 large, it was close to 2^28, but I still got the same InexactError. Specifically:

  • size: (506_255_563, 506_255_563)
  • number of nonzeros: 3_384_286_971
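
(A quick check on those numbers, for what it's worth: even though the dimensions fit in Int32, the nonzero count already exceeds typemax(Int32), and in CSC storage the column-pointer entries run up to nnz + 1, which would explain the overflow.)

julia> typemax(Int32)
2147483647

julia> 3_384_286_971 > typemax(Int32)
true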

Anyway, is there a way to force CUSPARSE to use Int64?

Here's the traceback for InexactError:

ERROR: InexactError: trunc(Int32, 4294967296)
Stacktrace:
  [1] throw_inexacterror(f::Symbol, #unused#::Type{Int32}, val::Int64)
    @ Core ./boot.jl:634
  [2] checked_trunc_sint
    @ ./boot.jl:656 [inlined]
  [3] toInt32
    @ ./boot.jl:693 [inlined]
  [4] Int32
    @ ./boot.jl:783 [inlined]
  [5] convert
    @ ./number.jl:7 [inlined]
  [6] setindex!
    @ ./array.jl:969 [inlined]
  [7] _unsafe_copyto!(dest::Vector{Int32}, doffs::Int64, src::Vector{Int64}, soffs::Int64, n::Int64)
    @ Base ./array.jl:250
  [8] unsafe_copyto!
    @ ./array.jl:304 [inlined]
  [9] _copyto_impl!
    @ ./array.jl:327 [inlined]
 [10] copyto!
    @ ./array.jl:314 [inlined]
 [11] copyto!
    @ ./array.jl:339 [inlined]
 [12] copyto_axcheck!
    @ ./abstractarray.jl:1180 [inlined]
 [13] Vector{Int32}(x::Vector{Int64})
    @ Base ./array.jl:621
 [14] Array
    @ ./boot.jl:501 [inlined]
 [15] convert
    @ ./array.jl:613 [inlined]
 [16] CuArray
    @ ~/.julia/packages/CUDA/tVtYo/src/array.jl:359 [inlined]
 [17] CuArray
    @ ~/.julia/packages/CUDA/tVtYo/src/array.jl:363 [inlined]
 [18] (CuSparseMatrixCSC{Float32})(Mat::SparseMatrixCSC{Float64, Int64})
    @ CUDA.CUSPARSE ~/.julia/packages/CUDA/tVtYo/lib/cusparse/array.jl:365
 [19] adapt_storage
    @ ~/.julia/packages/CUDA/tVtYo/lib/cusparse/array.jl:418 [inlined]
 [20] adapt_structure
    @ ~/.julia/packages/Adapt/UtItS/src/Adapt.jl:57 [inlined]
 [21] adapt
    @ ~/.julia/packages/Adapt/UtItS/src/Adapt.jl:40 [inlined]
 [22] adapt_storage
    @ ~/.julia/packages/CUDA/tVtYo/lib/cusparse/array.jl:422 [inlined]
 [23] adapt_structure
    @ ~/.julia/packages/Adapt/UtItS/src/Adapt.jl:57 [inlined]
 [24] adapt
    @ ~/.julia/packages/Adapt/UtItS/src/Adapt.jl:40 [inlined]
 [25] #cu#1022
    @ ~/.julia/packages/CUDA/tVtYo/src/array.jl:664 [inlined]
 [26] cu(xs::SparseMatrixCSC{Float64, Int64})
    @ CUDA ~/.julia/packages/CUDA/tVtYo/src/array.jl:664
 [27] top-level scope
    @ REPL[23]:1

maleadt (Member) commented Jul 29, 2023

sp = sparse(I, J, V)

That always hangs or crashes here...

Anyway, on the GPU side, it is actually possible to use sparse arrays with 64-bit integers:

julia> sp = CuSparseMatrixCSC{Float32,Int64}(cu(I), cu(J), cu(V), (2,2));

julia> typeof(sp)
CuSparseMatrixCSC{Float32, Int64}

However, that only works with the generic cuSPARSE APIs, which is why we're defaulting to 32-bit indices. It also means a bunch of functionality will probably not work.

It would be great to use more of the generic APIs so that we can default to 64-bit indices: https://docs.nvidia.com/cuda/cusparse/index.html#cusparse-generic-apis. PRs much appreciated!
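
In the meantime, here's a rough sketch (untested, just following the four-argument constructor above) of how you could keep Int64 indices by building the device arrays directly from the CPU matrix's colptr/rowval/nzval fields, instead of going through cu (which narrows the indices to Int32). Anything not covered by the generic APIs will still be unsupported:

using CUDA, CUDA.CUSPARSE, SparseArrays

sp = sparse(I, J, V)                      # CPU SparseMatrixCSC{Float64, Int64}
sp64 = CuSparseMatrixCSC{Float32, Int64}(
    CuVector{Int64}(sp.colptr),           # column pointers, kept as Int64
    CuVector{Int64}(sp.rowval),           # row indices, kept as Int64
    CuVector{Float32}(sp.nzval),          # values, converted to Float32
    size(sp),
)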

maleadt added the enhancement (New feature or request) label on Jul 29, 2023
maleadt changed the title from "CUSPARSE sparse matrix creator results in InexactError due to Int32 indices" to "Default to sparse arrays with 64-bits indices" on Jul 29, 2023