The output of ufn and ugr is different from uofg #61
I am rather confused why you are using unconstrained tools (ufn, uofg, ugr) on a constrained problem. You need cfn, etc.
@amontoison yes, you need to use the constrained tools (cfn, cofg, cgr). The Python interface automatically selects the correct tools depending on whether the problem is constrained or not.
@jfowkes Is it possible to just evaluate the objective if we have a constrained problem?
@amontoison there is now …
@amontoison there is also …
In CUTEst.jl, …
@dpo It was just a way to compare the new version of … My biggest issue now is that a modification was made here between CUTEst v2.0.7 and v2.0.27 that deallocates memory allocated by C or Julia (that's not allowed!).
Ok, I think I found the culprit. With a recent commit, the size of the vector … I compiled CUTEst with several options (…) and I get:

```
At line 14 of file ../src/tools/unames.F90
Fortran runtime error: Actual string length is shorter than the declared one for dummy argument 'pname' (1/10)
```

I am sure that I am providing an argument …

On the bright side, the Julia interface is now working in both single and double precision across all platforms (Linux, Intel Mac, Mac Silicon, Windows). The next step is to support quadruple precision in the Julia interface.
Yes, the time array length was a bug on the Fortran side that we had to fix. What is the call to unames that gives this 'pname' error?
It was:

```julia
status = Cint[0]
n = nlp.meta.nvar
m = nlp.meta.ncon
pname = Vector{Cchar}(undef, 10)
vnames = Matrix{Cchar}(undef, 10, n)
if m == 0
    cutest_unames_(status, Cint[n], pname, vnames)
end
```

The SIF problem was …
@amontoison I think you want to pass in …
Yes, it's …
@amontoison it's entirely possible there is a bug in the …
I wonder ... in GALAHAD, when we move from Fortran to C characters, we have to account for the extra null that C expects at the end of a string when using the standard Fortran C bindings. The strings on the C side are always one character longer to account for this. But we don't do this in the C interface to CUTEr that @dpo wrote, and that is now part of cutest.h.

@jfowkes, the map in varnames looks identical to that in unames, so I am not sure why it would fail for one and not the other.

@amontoison what happens if you use probname instead of unames to examine pname? It might be that we will have to write CUTEST_Cint_* variants for all CUTEst tools that have character dummy arguments; there aren't very many, just u/cnames, probname, varnames, connames. I'll leave this to one of you.
I don't have any warning with …

For the C characters, I have two options in Julia. I can specify if a string is null-terminated (…).

Nick, did you do some unit tests of the quadruple precision version of CUTEst?
@amontoison you can run some unit tests for quadruple precision with …, which is equivalent to what we're doing for single and double precision in …
There are more comprehensive tests (of each subroutine), but these are not yet enabled in quad.
OK, done. For the makefile version, it's

```shell
make -s -f /home/nimg/Dropbox/fortran/optrove/cutest/makefiles/pc64.lnx.gfo PRECIS=quadruple run_test_cutest
```

from ./src/test. I'm sure you will know the meson magic!
Please remember, this will only work with compilers that support real128.
I will update …
Good news guys:

```julia
julia> using CUTEst, Quadmath, NLPModels  # automatically downloads the SIF collection for the user

julia> x = rand(2)
2-element Vector{Float64}:
 0.5547518833473369
 0.6324463397132569

julia> nlp = CUTEstModel("HS14", precision = :single)
Problem name: HS14
All variables: ████████████████████ 2     All constraints: ████████████████████ 2
         free: ████████████████████ 2               free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
        upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
       infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0             infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         nnzh: ( 33.33% sparsity)   2             linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                               nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                                    nnzj: (  0.00% sparsity)  4

julia> x_single = Float32.(x)
2-element Vector{Float32}:
 0.5547519
 0.63244635

julia> obj(nlp, x_single)
2.2238379f0

julia> cons(nlp, x_single)
2-element Vector{Float32}:
 0.28985918
 0.52307415

julia> hess(nlp, x_single)
2×2 Symmetric{Float32, SparseMatrixCSC{Float32, Int64}}:
 2.0   ⋅
  ⋅   2.0

julia> nlp = CUTEstModel("HS14", precision = :double)
Problem name: HS14
All variables: ████████████████████ 2     All constraints: ████████████████████ 2
         free: ████████████████████ 2               free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
        upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
       infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0             infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         nnzh: ( 33.33% sparsity)   2             linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                               nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                                    nnzj: (  0.00% sparsity)  4

julia> x_double = Float64.(x)
2-element Vector{Float64}:
 0.5547518833473369
 0.6324463397132569

julia> obj(nlp, x_double)
2.223837811878252

julia> cons(nlp, x_double)
2-element Vector{Float64}:
 0.2898592039208232
 0.5230742143639493

julia> hess(nlp, x_double)
2×2 Symmetric{Float64, SparseMatrixCSC{Float64, Int64}}:
 2.0   ⋅
  ⋅   2.0

julia> nlp = CUTEstModel("HS14", precision = :quadruple)
Problem name: HS14
All variables: ████████████████████ 2     All constraints: ████████████████████ 2
         free: ████████████████████ 2               free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              lower: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
        upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              fixed: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
       infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0             infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         nnzh: ( 33.33% sparsity)   2             linear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                               nonlinear: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1
                                                    nnzj: (  0.00% sparsity)  4

julia> x_quadruple = Float128.(x)
2-element Vector{Float128}:
 5.54751883347336938179239496093941852e-01
 6.32446339713256922010486960061825812e-01

julia> obj(nlp, x_quadruple)
2.22383781187825211305610387984903636e+00

julia> cons(nlp, x_quadruple)
2-element Vector{Float128}:
 2.89859203920823094158265575970290229e-01
 5.23074214363949287782027980068469322e-01

julia> hess(nlp, x_quadruple)
2×2 Symmetric{Float128, SparseMatrixCSC{Float128, Int64}}:
 2.00000000000000000000000000000000000e+00                                          ⋅
                                         ⋅  2.00000000000000000000000000000000000e+00
```
Excellent. Thanks, Alexis. I suppose that we can think of 128-bit GALAHAD, but the main issue will be the lack of suitable LAPACK/BLAS (other than the ones that we compile). At least now that we have the HSL subset, it is trivial to extend HSL to 128-bit, so we do have some useful sparse solvers. Goodness knows what will happen with SPRAL or MUMPS, though.
@jfowkes Did you observe the same issue with the Python interface? I noticed it with the problem "HS36".