
Adding a thin wrapper to NLPModels #790

Closed
alonsoC1s opened this issue Jul 26, 2024 · 6 comments · Fixed by #792


@alonsoC1s
Contributor

While working on issue SciML/SciMLBenchmarks.jl#935 to include problems from CUTEst.jl in the SciML Benchmarks, we discussed that a wrapper to NLPModels would be useful for that issue and possibly for other users. Currently, CUTEst.jl only produces NLPModels, so being able to create an OptimizationProblem from them would make it much easier to include not only CUTEst but also other libraries in the benchmarks.

If this wrapper would be welcome, I'd like to start a PR to work on it.

@Vaibhavdixit02
Member

Sounds good

@alonsoC1s
Contributor Author

I've been exploring the NLPModel and OptimizationProblem APIs and I have some thoughts/questions. Firstly, I have noticed that the ways of storing and interacting with things like bounds, counters, and even the objective function are very different, and they result in usage patterns that potentially mean sacrificing some compatibility with the rest of the ecosystem. For instance, @amontoison helped me understand that wrapping the obj function from an NLPModel and using it as f on an OptimizationProblem would potentially break automatic differentiation and other things that some solvers rely on.

Since this would be a wrapper of a wrapper, I think the levels of indirection compound and could result in some unwieldy code. At least, that is what I could see happening when wrapping CUTEst.jl, which is the only currently wrapped package that I'm somewhat familiar with.

@Vaibhavdixit02
Member

For instance, @amontoison helped me understand that wrapping the obj function from an NLPModel and using it as f on an OptimizationProblem would potentially break automatic differentiation and other things that some solvers rely on.

Could you link to some code to understand this better?

Since this would be a wrapper of a wrapper, I think the levels of indirection compound and could result in some unwieldy code. At least, that is what I could see happening when wrapping CUTEst.jl, which is the only currently wrapped package that I'm somewhat familiar with.

Right, this was my concern in the Slack thread as well; I don't have an answer for you right away. If you could link to the relevant parts of the codebase that you have been looking at so far, it would help me come up with a concrete answer here.

@alonsoC1s
Contributor Author

Could you link to some code to understand this better?

Sure, taking the objective function as an example. According to the NLPModels API, whenever you need the objective function you call obj(m::NLPModel, x); i.e., it's not a property of the object, but a function that dispatches on it. So, in the particular case of CUTEst, the objective function looks (simplified) like this:

function NLPModels.objcons!(
  nlp::CUTEstModel,
  x::StrideOneVector{Float64},
  c::StrideOneVector{Float64},
)
  #=
   Some checks omitted, along with the setup of `io_err`, `nvar`, `ncon`,
   and the output buffer `f` used below
  =#

  # The objective and constraint values come from the compiled Fortran library
  ccall(
    dlsym(cutest_lib, :cutest_cfn_),
    Nothing,
    (Ptr{Int32}, Ref{Int32}, Ref{Int32}, Ptr{Float64}, Ptr{Float64}, Ptr{Float64}),
    io_err,
    nvar,
    ncon,
    x,
    f,
    c,
  )
  increment!(nlp, :neval_cons)
  increment!(nlp, :neval_obj)

  return f[1], c
end

In this case, (I think) the Fortran side of the code won't break AD
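For reference, the dispatch-based access pattern described above can be illustrated with ADNLPModels.jl, which builds an NLPModel from a plain Julia function. This is a sketch assuming ADNLPModels.jl's documented `ADNLPModel(f, x0)` constructor; the toy objective is mine, not from the thread:

```julia
using ADNLPModels, NLPModels

# Build a small unconstrained model from a plain function and a starting point
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2, [0.0, 0.0])

# The objective and gradient are free functions that dispatch on the model,
# not fields of the struct:
fx = obj(nlp, nlp.meta.x0)   # objective value at the starting point
gx = grad(nlp, nlp.meta.x0)  # gradient via the model's own AD backend
```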

@Vaibhavdixit02
Member

Okay, I think an API that looks like this

function OptimizationFunction(model::NLPModel, .....)

    function objective(x, p)
        NLPModels.obj(model, x)
    end
    .
    .
    .
    return OptimizationFunction(objective, .....)
end

might be doable? Take a look at the NonlinearFunction/Problem -> OptimizationFunction/Problem conversion at https://github.com/SciML/SciMLBase.jl/blob/56ba8819c1637fffbae5722057c5532b8d48c21c/src/problems/optimization_problems.jl#L130-L145 for some reference, though that one is pretty trivial comparatively since the two look quite similar already
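A slightly fuller sketch of that idea, with the gradient also forwarded to NLPModels so that solvers don't need to differentiate through the Fortran calls. The helper name `nlpmodel_to_optfunc` is mine (used here to avoid method piracy in the sketch), and the `grad` keyword with an in-place `(G, x, p)` signature follows SciMLBase's OptimizationFunction conventions; this is untested illustration, not a finished implementation:

```julia
using NLPModels, Optimization

# Hypothetical wrapper: forward objective and gradient queries to NLPModels'
# dispatch-based API instead of Optimization.jl's AD machinery.
function nlpmodel_to_optfunc(model::NLPModels.AbstractNLPModel)
    objective(x, p) = NLPModels.obj(model, x)
    # OptimizationFunction expects an in-place gradient with signature (G, x, p)
    gradient!(G, x, p) = NLPModels.grad!(model, x, G)
    return OptimizationFunction(objective; grad = gradient!)
end
```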

@Vaibhavdixit02
Member

The NLPModel could be passed as the parameter to the objective as well, so

function OptimizationFunction(model::NLPModel, .....)

    function objective(x, p = model)
        NLPModels.obj(p, x)
    end
    .
    .
    .
    return OptimizationFunction(objective, .....)
end
