diff --git a/previews/PR97/.documenter-siteinfo.json b/previews/PR97/.documenter-siteinfo.json
index 436fa2d..e4209e1 100644
--- a/previews/PR97/.documenter-siteinfo.json
+++ b/previews/PR97/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-11-06T02:27:39","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2024-11-05T20:53:33","documenter_version":"1.7.0"}}
\ No newline at end of file
diff --git a/previews/PR97/core/index.html b/previews/PR97/core/index.html
index 9294452..4053381 100644
--- a/previews/PR97/core/index.html
+++ b/previews/PR97/core/index.html
@@ -1,5 +1,5 @@
-API Manual · ExaModels.jl

ExaModels

ExaModels.ExaModelsModule
ExaModels

An algebraic modeling and automatic differentiation tool in the Julia language, specialized for the SIMD abstraction of nonlinear programs.

For more information, please visit https://github.com/exanauts/ExaModels.jl

source
ExaModels.AdjointNode1Type
AdjointNode1{F, T, I}

A node with one child for first-order forward pass tree

Fields:

  • x::T: function value
  • y::T: first-order sensitivity
  • inner::I: children
source
ExaModels.AdjointNode2Type
AdjointNode2{F, T, I1, I2}

A node with two children for first-order forward pass tree

Fields:

  • x::T: function value
  • y1::T: first-order sensitivity w.r.t. first argument
  • y2::T: first-order sensitivity w.r.t. second argument
  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.AdjointNodeSourceType
AdjointNodeSource{VT}

A source of AdjointNode. adjoint_node_source[i] returns an AdjointNodeVar at index i.

Fields:

  • inner::VT: variable vector
source
ExaModels.CompressorType
Compressor{I}

Data structure for the sparse index

Fields:

  • inner::I: stores the sparse index as a tuple form
source
ExaModels.ExaCoreType

ExaCore([array_eltype::Type; backend = backend, minimize = true])

Returns an intermediate data object ExaCore, which can later be used to create an ExaModel

Example

julia> using ExaModels
+API Manual · ExaModels.jl

ExaModels

ExaModels.ExaModelsModule
ExaModels

An algebraic modeling and automatic differentiation tool in the Julia language, specialized for the SIMD abstraction of nonlinear programs.

For more information, please visit https://github.com/exanauts/ExaModels.jl

source
ExaModels.AdjointNode1Type
AdjointNode1{F, T, I}

A node with one child for first-order forward pass tree

Fields:

  • x::T: function value
  • y::T: first-order sensitivity
  • inner::I: children
source
ExaModels.AdjointNode2Type
AdjointNode2{F, T, I1, I2}

A node with two children for first-order forward pass tree

Fields:

  • x::T: function value
  • y1::T: first-order sensitivity w.r.t. first argument
  • y2::T: first-order sensitivity w.r.t. second argument
  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.AdjointNodeSourceType
AdjointNodeSource{VT}

A source of AdjointNode. adjoint_node_source[i] returns an AdjointNodeVar at index i.

Fields:

  • inner::VT: variable vector
source
ExaModels.CompressorType
Compressor{I}

Data structure for the sparse index

Fields:

  • inner::I: stores the sparse index as a tuple form
source
ExaModels.ExaCoreType

ExaCore([array_eltype::Type; backend = backend, minimize = true])

Returns an intermediate data object ExaCore, which can later be used to create an ExaModel

Example

julia> using ExaModels
 
 julia> c = ExaCore()
 An ExaCore
@@ -31,7 +31,7 @@
   Backend: ......................... CUDA.CUDAKernels.CUDABackend
 
   number of objective patterns: .... 0
-  number of constraint patterns: ... 0
source
ExaModels.ExaModelMethod
ExaModel(core)

Returns an ExaModel object, which can be solved by nonlinear optimization solvers within the JuliaSmoothOptimizers ecosystem, such as NLPModelsIpopt or MadNLP.

Example

julia> using ExaModels
+  number of constraint patterns: ... 0
source
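For instance, the element type and backend can be selected at construction time. A minimal sketch (the commented GPU line assumes CUDA.jl is loaded, as in the GPU example elsewhere in these docs, and the minimize keyword follows the signature above):

using ExaModels

c32 = ExaCore(Float32)             # model data stored in single precision
cmax = ExaCore(; minimize = false) # build a maximization problem instead

# GPU variant, assuming CUDA.jl is available (see the GPU example):
# cgpu = ExaCore(Float64; backend = CUDABackend())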
ExaModels.ExaModelMethod
ExaModel(core)

Returns an ExaModel object, which can be solved by nonlinear optimization solvers within the JuliaSmoothOptimizers ecosystem, such as NLPModelsIpopt or MadNLP.

Example

julia> using ExaModels
 
 julia> c = ExaCore();                      # create an ExaCore object
 
@@ -58,7 +58,7 @@
 
 julia> result = ipopt(m; print_level=0)    # solve the problem
 "Execution stats: first-order stationary"
-
source
ExaModels.Node1Type
Node1{F, I}

A node with one child for symbolic expression tree

Fields:

  • inner::I: children
source
ExaModels.Node2Type
Node2{F, I1, I2}

A node with two children for symbolic expression tree

Fields:

  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.SIMDFunctionType
SIMDFunction(gen::Base.Generator, o0 = 0, o1 = 0, o2 = 0)

Returns a SIMDFunction built from gen.

Arguments:

  • gen: an iterable function specified in Base.Generator format
  • o0: offset for the function evaluation
  • o1: offset for the derivative evaluation
  • o2: offset for the second-order derivative evaluation
source
ExaModels.SecondAdjointNode1Type
SecondAdjointNode1{F, T, I}

A node with one child for second-order forward pass tree

Fields:

  • x::T: function value
  • y::T: first-order sensitivity
  • h::T: second-order sensitivity
  • inner::I: children
source
ExaModels.SecondAdjointNode2Type
SecondAdjointNode2{F, T, I1, I2}

A node with two children for second-order forward pass tree

Fields:

  • x::T: function value
  • y1::T: first-order sensitivity w.r.t. first argument
  • y2::T: first-order sensitivity w.r.t. second argument
  • h11::T: second-order sensitivity w.r.t. first argument
  • h12::T: second-order sensitivity w.r.t. first and second argument
  • h22::T: second-order sensitivity w.r.t. second argument
  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.VarType
Var{I}

A variable node for symbolic expression tree

Fields:

  • i::I: (parameterized) index
source
ExaModels.constraintMethod
constraint(core, n; start = 0, lcon = 0,  ucon = 0)

Adds empty constraints of dimension n, so that constraint terms can be added later with constraint!.

source
ExaModels.constraintMethod
constraint(core, generator; start = 0, lcon = 0,  ucon = 0)

Adds constraints specified by a generator to core, and returns a Constraint object.

Keyword Arguments

  • start: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.
  • lcon : The constraint lower bound. Can either be Number, AbstractArray, or Generator.
  • ucon : The constraint upper bound. Can either be Number, AbstractArray, or Generator.

Example

julia> using ExaModels
+
source
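Putting the pieces together, a model is typically built and solved as sketched below, condensed from the Getting Started example (ipopt is provided by NLPModelsIpopt):

using ExaModels, NLPModelsIpopt

c = ExaCore()
N = 10
x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
m = ExaModel(c)                     # freeze the core into a solvable model
result = ipopt(m; print_level = 0)  # solve with Ipopt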
ExaModels.Node1Type
Node1{F, I}

A node with one child for symbolic expression tree

Fields:

  • inner::I: children
source
ExaModels.Node2Type
Node2{F, I1, I2}

A node with two children for symbolic expression tree

Fields:

  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.SIMDFunctionType
SIMDFunction(gen::Base.Generator, o0 = 0, o1 = 0, o2 = 0)

Returns a SIMDFunction built from gen.

Arguments:

  • gen: an iterable function specified in Base.Generator format
  • o0: offset for the function evaluation
  • o1: offset for the derivative evaluation
  • o2: offset for the second-order derivative evaluation
source
ExaModels.SecondAdjointNode1Type
SecondAdjointNode1{F, T, I}

A node with one child for second-order forward pass tree

Fields:

  • x::T: function value
  • y::T: first-order sensitivity
  • h::T: second-order sensitivity
  • inner::I: children
source
ExaModels.SecondAdjointNode2Type
SecondAdjointNode2{F, T, I1, I2}

A node with two children for second-order forward pass tree

Fields:

  • x::T: function value
  • y1::T: first-order sensitivity w.r.t. first argument
  • y2::T: first-order sensitivity w.r.t. second argument
  • h11::T: second-order sensitivity w.r.t. first argument
  • h12::T: second-order sensitivity w.r.t. first and second argument
  • h22::T: second-order sensitivity w.r.t. second argument
  • inner1::I1: children #1
  • inner2::I2: children #2
source
ExaModels.VarType
Var{I}

A variable node for symbolic expression tree

Fields:

  • i::I: (parameterized) index
source
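To illustrate how Var, Node1, and Node2 compose, here is an informal sketch; the trees are built automatically when variables are indexed inside objective/constraint generators, and the exact types are internal details:

# x[i]           -> a Var leaf holding a (parameterized) index
# sin(x[i])      -> a Node1 with one child (the Var leaf)
# x[i] * x[i+1]  -> a Node2 with two children (inner1 and inner2)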
ExaModels.constraintMethod
constraint(core, n; start = 0, lcon = 0,  ucon = 0)

Adds empty constraints of dimension n, so that constraint terms can be added later with constraint!.

source
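A sketch of the intended workflow is below; the k => expr pair syntax for constraint! follows its use in the optimal power flow example:

using ExaModels

c = ExaCore()
x = variable(c, 10)

cons = constraint(c, 3)                        # three empty constraint rows
constraint!(c, cons, i => x[i]^2 for i = 1:3)  # later, add a term to row i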
ExaModels.constraintMethod
constraint(core, generator; start = 0, lcon = 0,  ucon = 0)

Adds constraints specified by a generator to core, and returns a Constraint object.

Keyword Arguments

  • start: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.
  • lcon : The constraint lower bound. Can either be Number, AbstractArray, or Generator.
  • ucon : The constraint upper bound. Can either be Number, AbstractArray, or Generator.

Example

julia> using ExaModels
 
 julia> c = ExaCore();
 
@@ -70,7 +70,7 @@
   s.t. (...)
        g♭ ≤ [g(x,p)]_{p ∈ P} ≤ g♯
 
-  where |P| = 9
source
ExaModels.constraintMethod
constraint(core, expr [, pars]; start = 0, lcon = 0,  ucon = 0)

Adds constraints specified by an expr and pars to core, and returns a Constraint object.

source
ExaModels.drpassMethod
drpass(d::D, y, adj)

Performs dense gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • y: result vector
  • adj: adjoint propagated up to the current node
source
ExaModels.gradient!Method
gradient!(y, f, x, adj)

Performs dense gradient evaluation

Arguments:

  • y: result vector
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.grpassMethod
grpass(d::D, comp, y, o1, cnt, adj)

Performs sparse gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • comp: a Compressor, which helps map counter to sparse vector index
  • y: result vector
  • o1: index offset
  • cnt: counter
  • adj: adjoint propagated up to the current node
source
ExaModels.hdrpassMethod
hdrpass(t1::T1, t2::T2, comp, y1, y2, o2, cnt, adj)

Performs sparse Hessian evaluation ((df1/dx)(df2/dx)' portion) via the reverse pass on the computation (sub)graph formed by second-order forward pass

Arguments:

  • t1: second-order computation (sub)graph regarding f1
  • t2: second-order computation (sub)graph regarding f2
  • comp: a Compressor, which helps map counter to sparse vector index
  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • o2: index offset
  • cnt: counter
  • adj: second adjoint propagated up to the current node
source
ExaModels.jrpassMethod
jrpass(d::D, comp, i, y1, y2, o1, cnt, adj)

Performs sparse Jacobian evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • comp: a Compressor, which helps map counter to sparse vector index
  • i: constraint index (this is i-th constraint)
  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • o1: index offset
  • cnt: counter
  • adj: adjoint propagated up to the current node
source
ExaModels.multipliersMethod
multipliers(result, y)

Returns the multipliers for constraints y associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
+  where |P| = 9
source
ExaModels.constraintMethod
constraint(core, expr [, pars]; start = 0, lcon = 0,  ucon = 0)

Adds constraints specified by an expr and pars to core, and returns a Constraint object.

source
ExaModels.drpassMethod
drpass(d::D, y, adj)

Performs dense gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • y: result vector
  • adj: adjoint propagated up to the current node
source
ExaModels.gradient!Method
gradient!(y, f, x, adj)

Performs dense gradient evaluation

Arguments:

  • y: result vector
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.grpassMethod
grpass(d::D, comp, y, o1, cnt, adj)

Performs sparse gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • comp: a Compressor, which helps map counter to sparse vector index
  • y: result vector
  • o1: index offset
  • cnt: counter
  • adj: adjoint propagated up to the current node
source
ExaModels.hdrpassMethod
hdrpass(t1::T1, t2::T2, comp, y1, y2, o2, cnt, adj)

Performs sparse Hessian evaluation ((df1/dx)(df2/dx)' portion) via the reverse pass on the computation (sub)graph formed by second-order forward pass

Arguments:

  • t1: second-order computation (sub)graph regarding f1
  • t2: second-order computation (sub)graph regarding f2
  • comp: a Compressor, which helps map counter to sparse vector index
  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • o2: index offset
  • cnt: counter
  • adj: second adjoint propagated up to the current node
source
ExaModels.jrpassMethod
jrpass(d::D, comp, i, y1, y2, o1, cnt, adj)

Performs sparse Jacobian evaluation via the reverse pass on the computation (sub)graph formed by forward pass

Arguments:

  • d: first-order computation (sub)graph
  • comp: a Compressor, which helps map counter to sparse vector index
  • i: constraint index (this is i-th constraint)
  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • o1: index offset
  • cnt: counter
  • adj: adjoint propagated up to the current node
source
ExaModels.multipliersMethod
multipliers(result, y)

Returns the multipliers for constraints y associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
 
 julia> c = ExaCore();                     
 
@@ -88,7 +88,7 @@
 
 
 julia> val[1] ≈ 0.81933930
-true
source
ExaModels.multipliers_LMethod
multipliers_L(result, x)

Returns the multipliers_L for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
+true
source
ExaModels.multipliers_LMethod
multipliers_L(result, x)

Returns the multipliers_L for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
 
 julia> c = ExaCore();                     
 
@@ -103,7 +103,7 @@
 julia> val = multipliers_L(result, x);
 
 julia> isapprox(val, fill(0, 10), atol=sqrt(eps(Float64)), rtol=Inf)
-true
source
ExaModels.multipliers_UMethod
multipliers_U(result, x)

Returns the multipliers_U for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
+true
source
ExaModels.multipliers_UMethod
multipliers_U(result, x)

Returns the multipliers_U for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
 
 julia> c = ExaCore();                     
 
@@ -118,7 +118,7 @@
 julia> val = multipliers_U(result, x);
 
 julia> isapprox(val, fill(2, 10), atol=sqrt(eps(Float64)), rtol=Inf)
-true
source
ExaModels.objectiveMethod
objective(core::ExaCore, generator)

Adds objective terms specified by a generator to core, and returns an Objective object. Note: it is assumed that the terms are summed.

Example

julia> using ExaModels
+true
source
ExaModels.objectiveMethod
objective(core::ExaCore, generator)

Adds objective terms specified by a generator to core, and returns an Objective object. Note: it is assumed that the terms are summed.

Example

julia> using ExaModels
 
 julia> c = ExaCore();
 
@@ -129,7 +129,7 @@
 
   min (...) + ∑_{p ∈ P} f(x,p)
 
-  where |P| = 10
source
ExaModels.objectiveMethod
objective(core::ExaCore, expr [, pars])

Adds objective terms specified by an expr and pars to core, and returns an Objective object.

source
ExaModels.sgradient!Method

sgradient!(y, f, x, adj)

Performs sparse gradient evaluation

Arguments:

  • y: result vector
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.shessian!Method
shessian!(y1, y2, f, x, adj1, adj2)

Performs sparse Hessian evaluation

Arguments:

  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj1: initial first adjoint
  • adj2: initial second adjoint
source
ExaModels.sjacobian!Method
sjacobian!(y1, y2, f, x, adj)

Performs sparse Jacobian evaluation

Arguments:

  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.solutionMethod
solution(result, x)

Returns the solution for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
+  where |P| = 10
source
ExaModels.objectiveMethod
objective(core::ExaCore, expr [, pars])

Adds objective terms specified by an expr and pars to core, and returns an Objective object.

source
ExaModels.sgradient!Method

sgradient!(y, f, x, adj)

Performs sparse gradient evaluation

Arguments:

  • y: result vector
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.shessian!Method
shessian!(y1, y2, f, x, adj1, adj2)

Performs sparse Hessian evaluation

Arguments:

  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj1: initial first adjoint
  • adj2: initial second adjoint
source
ExaModels.sjacobian!Method
sjacobian!(y1, y2, f, x, adj)

Performs sparse Jacobian evaluation

Arguments:

  • y1: result vector #1
  • y2: result vector #2 (only used when evaluating sparsity)
  • f: the function to be differentiated in SIMDFunction format
  • x: variable vector
  • adj: initial adjoint
source
ExaModels.solutionMethod
solution(result, x)

Returns the solution for variable x associated with result, obtained by solving the model.

Example

julia> using ExaModels, NLPModelsIpopt
 
 julia> c = ExaCore();                     
 
@@ -144,7 +144,7 @@
 julia> val = solution(result, x);
 
 julia> isapprox(val, fill(1, 10), atol=sqrt(eps(Float64)), rtol=Inf)
-true
source
ExaModels.variableMethod
variable(core, dims...; start = 0, lvar = -Inf, uvar = Inf)

Adds variables with dimensions specified by dims to core, and returns a Variable object. Each of dims can be either an Integer or a UnitRange.

Keyword Arguments

  • start: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.
  • lvar : The variable lower bound. Can either be Number, AbstractArray, or Generator.
  • uvar : The variable upper bound. Can either be Number, AbstractArray, or Generator.

Example

julia> using ExaModels
+true
source
ExaModels.variableMethod
variable(core, dims...; start = 0, lvar = -Inf, uvar = Inf)

Adds variables with dimensions specified by dims to core, and returns a Variable object. Each of dims can be either an Integer or a UnitRange.

Keyword Arguments

  • start: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.
  • lvar : The variable lower bound. Can either be Number, AbstractArray, or Generator.
  • uvar : The variable upper bound. Can either be Number, AbstractArray, or Generator.

Example

julia> using ExaModels
 
 julia> c = ExaCore();
 
@@ -157,7 +157,7 @@
 Variable
 
   x ∈ R^{9 × 3}
-
source
ExaModels.@register_bivariateMacro
register_bivariate(f, df1, df2, ddf11, ddf12, ddf22)

Register a bivariate function f with ExaModels so that it can be used within objective and constraint expressions

Arguments:

  • f: function
  • df1: derivative function (w.r.t. first argument)
  • df2: derivative function (w.r.t. second argument)
  • ddf11: second-order derivative function (w.r.t. first argument)
  • ddf12: second-order derivative function (w.r.t. first and second argument)
  • ddf22: second-order derivative function (w.r.t. second argument)

Example

julia> using ExaModels
+
source
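For example, the 9 × 3 variable shown above can be created as sketched below (the UnitRange form follows the distillation example):

using ExaModels

c = ExaCore()
x = variable(c, 9, 3)                          # x ∈ R^{9 × 3}
u = variable(c, 0:10; lvar = 0.0, uvar = 2.0)  # UnitRange dims with bounds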
ExaModels.@register_bivariateMacro
register_bivariate(f, df1, df2, ddf11, ddf12, ddf22)

Register a bivariate function f with ExaModels so that it can be used within objective and constraint expressions

Arguments:

  • f: function
  • df1: derivative function (w.r.t. first argument)
  • df2: derivative function (w.r.t. second argument)
  • ddf11: second-order derivative function (w.r.t. first argument)
  • ddf12: second-order derivative function (w.r.t. first and second argument)
  • ddf22: second-order derivative function (w.r.t. second argument)

Example

julia> using ExaModels
 
 julia> relu23(x, y) = (x > 0 || y > 0) ? (x + y)^3 : zero(x)
 relu23 (generic function with 1 method)
@@ -177,7 +177,7 @@
 julia> ddrelu2322(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)
 ddrelu2322 (generic function with 1 method)
 
-julia> @register_bivariate(relu23, drelu231, drelu232, ddrelu2311, ddrelu2312, ddrelu2322)
source
ExaModels.@register_univariateMacro
@register_univariate(f, df, ddf)

Register a univariate function f with ExaModels so that it can be used within objective and constraint expressions

Arguments:

  • f: function
  • df: derivative function
  • ddf: second-order derivative function

Example

julia> using ExaModels
+julia> @register_bivariate(relu23, drelu231, drelu232, ddrelu2311, ddrelu2312, ddrelu2322)
source
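Once registered, the function can appear directly in model expressions, e.g. (a sketch continuing the example above):

c = ExaCore()
x = variable(c, 10)
objective(c, relu23(x[i], x[i+1]) for i = 1:9)  # uses the registered function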
ExaModels.@register_univariateMacro
@register_univariate(f, df, ddf)

Register a univariate function f with ExaModels so that it can be used within objective and constraint expressions

Arguments:

  • f: function
  • df: derivative function
  • ddf: second-order derivative function

Example

julia> using ExaModels
 
 julia> relu3(x) = x > 0 ? x^3 : zero(x)
 relu3 (generic function with 1 method)
@@ -188,4 +188,4 @@
 julia> ddrelu3(x) = x > 0 ? 6*x : zero(x)
 ddrelu3 (generic function with 1 method)
 
-julia> @register_univariate(relu3, drelu3, ddrelu3)
source
+julia> @register_univariate(relu3, drelu3, ddrelu3)
source
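As in the bivariate case, the registered function can then be used inside objective and constraint expressions, e.g. (a sketch continuing the example above):

c = ExaCore()
x = variable(c, 10)
objective(c, relu3(x[i]) for i = 1:10)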
diff --git a/previews/PR97/develop/index.html b/previews/PR97/develop/index.html
index d371e15..8b1a7e2 100644
--- a/previews/PR97/develop/index.html
+++ b/previews/PR97/develop/index.html
@@ -86,4 +86,4 @@
 -3.8788212776372465e-5
 -7.376592164341867e-6
 ]
-end
+end
diff --git a/previews/PR97/distillation/index.html b/previews/PR97/distillation/index.html
index cbad7d5..da76723 100644
--- a/previews/PR97/distillation/index.html
+++ b/previews/PR97/distillation/index.html
@@ -72,4 +72,4 @@
 end
distillation_column_model (generic function with 2 methods)
using ExaModels, NLPModelsIpopt
 
 m = distillation_column_model(10)
-ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

+ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

diff --git a/previews/PR97/gpu/index.html b/previews/PR97/gpu/index.html
index 3170f0e..4f45acd 100644
--- a/previews/PR97/gpu/index.html
+++ b/previews/PR97/gpu/index.html
@@ -46,4 +46,4 @@
 return ExaModel(c)
 end
cuda_luksan_vlcek_model (generic function with 1 method)
m = cuda_luksan_vlcek_model(10)
-madnlp(m)

This page was generated using Literate.jl.

+madnlp(m)

This page was generated using Literate.jl.

diff --git a/previews/PR97/guide/index.html b/previews/PR97/guide/index.html
index ed2b876..022084a 100644
--- a/previews/PR97/guide/index.html
+++ b/previews/PR97/guide/index.html
@@ -57,4 +57,4 @@
 0.9999966246997642
 0.9999995512524277
 0.999999944919307
- 0.999999930070643

ExaModels provides several APIs similar to this:

This concludes a short tutorial on how to use ExaModels to model and solve optimization problems. Want to learn more? Take a look at the following examples, which provide further tutorials on how to use ExaModels.jl. Each of the examples is designed to demonstrate a few additional techniques.


This page was generated using Literate.jl.

+ 0.999999930070643

ExaModels provides several APIs similar to this:

This concludes a short tutorial on how to use ExaModels to model and solve optimization problems. Want to learn more? Take a look at the following examples, which provide further tutorials on how to use ExaModels.jl. Each of the examples is designed to demonstrate a few additional techniques.
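As a recap, the full build-and-solve round trip condenses to the following, taken from the steps above (solution is documented in the API manual):

using ExaModels, NLPModelsIpopt

N = 10
c = ExaCore()
x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)
constraint(
    c,
    3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) +
    4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2
)
m = ExaModel(c)
result = ipopt(m)
val = solution(result, x)  # primal solution values for x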


This page was generated using Literate.jl.

diff --git a/previews/PR97/index.html b/previews/PR97/index.html
index 6f7d795..7583300 100644
--- a/previews/PR97/index.html
+++ b/previews/PR97/index.html
@@ -6,4 +6,4 @@
 eprint={2307.16830},
 archivePrefix={arXiv},
 primaryClass={math.OC}
-}

Supporting ExaModels.jl

+}

Supporting ExaModels.jl

diff --git a/previews/PR97/jump/index.html b/previews/PR97/jump/index.html
index 36690e1..dc81142 100644
--- a/previews/PR97/jump/index.html
+++ b/previews/PR97/jump/index.html
@@ -61,21 +61,21 @@
 Number of Iterations....: 7
 (scaled) (unscaled)
-Objective...............: 7.8690682927808819e-01 6.2323020878824593e+00
-Dual infeasibility......: 1.9831098139189152e-06 1.5706229726237811e-05
-Constraint violation....: 5.3644583980288433e-11 5.3644583980288433e-11
-Complementarity.........: 1.1122043961251575e-05 8.8086588173112493e-05
-Overall NLP error.......: 8.8086588173112493e-05 8.8086588173112493e-05
+Objective...............: 7.8690682927808731e-01 6.2323020878824522e+00
+Dual infeasibility......: 1.9831098326816843e-06 1.5706229874838943e-05
+Constraint violation....: 5.3644585532052792e-11 5.3644585532052792e-11
+Complementarity.........: 1.1122043961252076e-05 8.8086588173116463e-05
+Overall NLP error.......: 8.8086588173116463e-05 8.8086588173116463e-05
 Number of objective function evaluations = 8
 Number of objective gradient evaluations = 8
 Number of constraint evaluations = 8
 Number of constraint Jacobian evaluations = 8
 Number of Lagrangian Hessian evaluations = 7
-Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.161
-Total wall-clock secs in linear solver = 0.018
-Total wall-clock secs in NLP function evaluations = 0.016
-Total wall-clock secs = 0.195
+Total wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.052
+Total wall-clock secs in linear solver = 0.009
+Total wall-clock secs in NLP function evaluations = 0.008
+Total wall-clock secs = 0.069
 EXIT: Optimal Solution Found (tol = 1.0e-04).
-

Again, only scalar objectives/constraints created via the @constraint and @objective APIs are supported. The older syntax, such as @NLconstraint and @NLobjective, is not supported.


This page was generated using Literate.jl.

+

Again, only scalar objectives/constraints created via the @constraint and @objective APIs are supported. The older syntax, such as @NLconstraint and @NLobjective, is not supported.
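For instance, a model written with the supported macros looks like the following plain JuMP sketch (the names here are illustrative, and the ExaModels conversion step itself is not shown):

using JuMP

jm = Model()
@variable(jm, x[1:2])
@objective(jm, Min, (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2)  # supported
@constraint(jm, x[1]^2 + x[2]^2 <= 1)                        # supported
# @NLobjective(jm, ...) and @NLconstraint(jm, ...) are not supported.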


This page was generated using Literate.jl.

diff --git a/previews/PR97/opf/index.html b/previews/PR97/opf/index.html
index 4af4d16..92fae80 100644
--- a/previews/PR97/opf/index.html
+++ b/previews/PR97/opf/index.html
@@ -189,7 +189,7 @@
 Downloads.download(
     "https://raw.githubusercontent.com/power-grid-lib/pglib-opf/dc6be4b2f85ca0e776952ec22cbd4c22396ea5a3/pglib_opf_case3_lmbd.m",
     case,
-)
"/tmp/jl_Bkpez0PgTm.m"

Then, we can model/solve the problem.

using PowerModels, ExaModels, NLPModelsIpopt
+)
"/tmp/jl_I7GVghKTyB.m"

Then, we can model/solve the problem.

using PowerModels, ExaModels, NLPModelsIpopt
 
 m = ac_power_model(case)
-ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

+ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

diff --git a/previews/PR97/performance/index.html b/previews/PR97/performance/index.html
index ebc88c3..2138144 100644
--- a/previews/PR97/performance/index.html
+++ b/previews/PR97/performance/index.html
@@ -13,7 +13,7 @@
     m = ExaModel(c)
 end
-println("$t seconds elapsed")
0.178431722 seconds elapsed
+println("$t seconds elapsed")
0.100709978 seconds elapsed
 

Even at the second call,

t = @elapsed begin
     c = ExaCore()
     N = 10
@@ -26,7 +26,7 @@
     m = ExaModel(c)
 end
 
-println("$t seconds elapsed")
0.176039465 seconds elapsed
+println("$t seconds elapsed")
0.096844354 seconds elapsed
 

the model creation time can be slightly reduced, but the compilation time is still quite significant.

If you instead wrap the model creation in a function, the model creation time can be reduced significantly.

function luksan_vlcek_model(N)
     c = ExaCore()
     x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
@@ -40,9 +40,9 @@
 end
 
 t = @elapsed luksan_vlcek_model(N)
-println("$t seconds elapsed")
0.197882836 seconds elapsed
+println("$t seconds elapsed")
0.111111729 seconds elapsed
 
t = @elapsed luksan_vlcek_model(N)
-println("$t seconds elapsed")
0.000122904 seconds elapsed
+println("$t seconds elapsed")
0.000101106 seconds elapsed
 

So, the model creation time can become essentially negligible. Thus, if you care about model creation time, always write a function for creating the model, and do not create the model directly from the REPL.

Make sure your array's eltype is concrete

In order for ExaModels to run for loops over the array you provided without any overhead caused by type inference, the eltype of the data array should always be a concrete type. Furthermore, this is required if you want to run ExaModels on GPU accelerators.

Let's take an example.

using ExaModels
 
 N = 1000
@@ -148,7 +148,7 @@
 end
benchmark_callbacks (generic function with 1 method)

The performance comparison is here:

m1 = luksan_vlcek_model_concrete(N)
 m2 = luksan_vlcek_model_non_concrete(N)
 
-benchmark_callbacks(m1)
(tobj = 1.97457e-6, tcon = 7.43174e-5, tgrad = 4.1918499999999995e-6, tjac = 0.00014950855000000002, thess = 0.00070227492, tjacs = 1.877276e-5, thesss = 2.536899e-5)
benchmark_callbacks(m2)
(tobj = 0.00030228149, tcon = 0.0005757477500000001, tgrad = 0.00012270718, tjac = 0.0005138847, thess = 0.0013747706700000001, tjacs = 0.0003864415, thesss = 0.00072836099)

As can be seen here, having a concrete eltype dramatically improves the performance. This is because when all the data arrays' eltypes are concrete, the AD evaluations can be performed without any type inference, and this should be as fast as highly optimized C/C++/Fortran code.

When you're using GPU accelerators, the eltype of the array should always be concrete. In fact, a non-concrete eltype will already cause an error when creating the array. For example,

using CUDA
+benchmark_callbacks(m1)
(tobj = 1.2981000000000001e-6, tcon = 2.47138e-5, tgrad = 2.63309e-6, tjac = 5.055966000000001e-5, thess = 0.00044733208000000005, tjacs = 1.076116e-5, thesss = 1.753494e-5)
benchmark_callbacks(m2)
(tobj = 0.00013450468, tcon = 0.00019564444, tgrad = 4.413147e-5, tjac = 0.00024392682, thess = 0.0007955851899999999, tjacs = 0.000195035, thesss = 0.00035830939)

As can be seen here, having a concrete eltype dramatically improves the performance. This is because when all the data arrays' eltypes are concrete, the AD evaluations can be performed without any type inference, and this should be as fast as highly optimized C/C++/Fortran code.

When you're using GPU accelerators, the eltype of the array should always be concrete. In fact, a non-concrete eltype will already cause an error when creating the array. For example,

using CUDA
 
 try
     arr1 = CuArray(Array{Any}(2:N))
@@ -156,4 +156,4 @@
     showerror(stdout, e)
 end
CuArray only supports element types that are allocated inline.
 Any is not allocated inline
-

This page was generated using Literate.jl.

+

This page was generated using Literate.jl.

diff --git a/previews/PR97/quad/index.html b/previews/PR97/quad/index.html
index 6418260..b8de288 100644
--- a/previews/PR97/quad/index.html
+++ b/previews/PR97/quad/index.html
@@ -82,4 +82,4 @@
 end
quadrotor_model (generic function with 2 methods)
using ExaModels, NLPModelsIpopt
 
 m = quadrotor_model(100)
-result = ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

+result = ipopt(m)
"Execution stats: first-order stationary"

This page was generated using Literate.jl.

diff --git a/previews/PR97/ref/index.html b/previews/PR97/ref/index.html
index f2cd749..83f24e4 100644
--- a/previews/PR97/ref/index.html
+++ b/previews/PR97/ref/index.html
@@ -1,2 +1,2 @@
-References · ExaModels.jl

References

[1]
L. T. Biegler. Nonlinear programming: concepts, algorithms, and applications to chemical processes (SIAM, 2010).
[2]
C. Coffrin, R. Bent, K. Sundar, Y. Ng and M. Lubin. PowerModels.jl: An open-source framework for exploring power flow formulations. In: 2018 Power Systems Computation Conference (PSCC) (IEEE, 2018); pp. 1–8.
[3]
L. Lukšan and J. Vlček. Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems. Numerical linear algebra with applications 5, 219–247 (1998).
+References · ExaModels.jl

References

[1]
L. T. Biegler. Nonlinear programming: concepts, algorithms, and applications to chemical processes (SIAM, 2010).
[2]
C. Coffrin, R. Bent, K. Sundar, Y. Ng and M. Lubin. PowerModels.jl: An open-source framework for exploring power flow formulations. In: 2018 Power Systems Computation Conference (PSCC) (IEEE, 2018); pp. 1–8.
[3]
L. Lukšan and J. Vlček. Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems. Numerical linear algebra with applications 5, 219–247 (1998).
diff --git a/previews/PR97/search_index.js b/previews/PR97/search_index.js index 122cf62..82eb020 100644 --- a/previews/PR97/search_index.js +++ b/previews/PR97/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"develop/#Developing-Extensions","page":"Developing Extensions","title":"Developing Extensions","text":"","category":"section"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"ExaModels.jl's API only uses simple julia funcitons, and thus, implementing the extensions is straightforward. Below, we suggest a good practice for implementing an extension package.","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Let's say that we want to implement an extension package for the example problem in Getting Started. An extension package may look like:","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Root\n├───Project.toml\n├── src\n│ └── LuksanVlcekModels.jl\n└── test\n └── runtest.jl","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Each of the files containing","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# Project.toml\n\nname = \"LuksanVlcekModels\"\nuuid = \"0c5951a0-f777-487f-ad29-fac2b9a21bf1\"\nauthors = [\"Sungho Shin \"]\nversion = \"0.1.0\"\n\n[deps]\nExaModels = \"1037b233-b668-4ce9-9b63-f9f681f55dd2\"\n\n[extras]\nNLPModelsIpopt = \"f4238b75-b362-5c4c-b852-0801c9a21d71\"\nTest = \"8dfed614-e22c-5e08-85e1-65c5234f0b40\"\n\n[targets]\ntest = [\"Test\", \"NLPModelsIpopt\"]","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# src/LuksanVlcekModels.jl\n\nmodule LuksanVlcekModels\n\nimport ExaModels\n\nfunction luksan_vlcek_obj(x,i)\n return 100*(x[i-1]^2-x[i])^2+(x[i-1]-1)^2\nend\n\nfunction luksan_vlcek_con(x,i)\n return 3x[i+1]^3+2*x[i+2]-5+sin(x[i+1]-x[i+2])sin(x[i+1]+x[i+2])+4x[i+1]-x[i]exp(x[i]-x[i+1])-3\nend\n\nfunction luksan_vlcek_x0(i)\n return mod(i,2)==1 ? 
-1.2 : 1.0\nend\n\nfunction luksan_vlcek_model(N; backend = nothing)\n \n c = ExaModels.ExaCore(backend)\n x = ExaModels.variable(\n c, N;\n start = (luksan_vlcek_x0(i) for i=1:N)\n )\n ExaModels.constraint(\n c,\n luksan_vlcek_con(x,i)\n for i in 1:N-2)\n ExaModels.objective(c, luksan_vlcek_obj(x,i) for i in 2:N)\n \n return ExaModels.ExaModel(c) # returns the model\nend\n\nexport luksan_vlcek_model\n\nend # module LuksanVlcekModels","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# test/runtest.jl\n\nusing Test, LuksanVlcekModels, NLPModelsIpopt\n\n@testset \"LuksanVlcekModelsTest\" begin\n m = luksan_vlcek_model(10)\n result = ipopt(m)\n\n @test result.status == :first_order\n @test result.solution ≈ [\n -0.9505563573613093\n 0.9139008176388945\n 0.9890905176644905\n 0.9985592422681151\n 0.9998087408802769\n 0.9999745932450963\n 0.9999966246997642\n 0.9999995512524277\n 0.999999944919307\n 0.999999930070643\n ]\n @test result.multipliers ≈ [\n 4.1358568305002255\n -1.876494903703342\n -0.06556333356358675\n -0.021931863018312875\n -0.0019537261317119302\n -0.00032910445671233547\n -3.8788212776372465e-5\n -7.376592164341867e-6\n ]\nend","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"EditURL = \"distillation.jl\"","category":"page"},{"location":"distillation/#distillation","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"","category":"section"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"function distillation_column_model(T = 3; backend = nothing)\n\n NT = 30\n FT = 17\n Ac = 0.5\n At = 0.25\n Ar = 1.0\n D = 0.2\n F = 0.4\n ybar = 0.8958\n ubar = 2.0\n alpha = 1.6\n dt = 10 / T\n xAf = 0.5\n xA0s = ExaModels.convert_array([(i, 0.5) for i = 0:NT+1], backend)\n\n itr0 = ExaModels.convert_array(collect(Iterators.product(1:T, 1:FT-1)), backend)\n itr1 = ExaModels.convert_array(collect(Iterators.product(1:T, FT+1:NT)), backend)\n itr2 = ExaModels.convert_array(collect(Iterators.product(0:T, 0:NT+1)), backend)\n\n c = ExaCore(backend)\n\n xA = variable(c, 0:T, 0:NT+1; start = 0.5)\n yA = variable(c, 0:T, 0:NT+1; start = 0.5)\n u = variable(c, 0:T; start = 1.0)\n V = variable(c, 0:T; start = 1.0)\n L2 = variable(c, 0:T; start = 1.0)\n\n objective(c, (yA[t, 1] - ybar)^2 for t = 0:T)\n objective(c, (u[t] - ubar)^2 for t = 0:T)\n\n constraint(c, xA[0, i] - xA0 for (i, xA0) in xA0s)\n constraint(\n c,\n (xA[t, 0] - xA[t-1, 0]) / dt - (1 / Ac) * (yA[t, 1] - xA[t, 0]) for t = 1:T\n )\n constraint(\n c,\n (xA[t, i] - xA[t-1, i]) / dt -\n (1 / At) * (u[t] * D * (yA[t, i-1] - xA[t, i]) - V[t] * (yA[t, i] - yA[t, i+1])) for\n (t, i) in itr0\n )\n constraint(\n c,\n (xA[t, FT] - xA[t-1, FT]) / dt -\n (1 / At) * (\n F * xAf + u[t] * D * xA[t, FT-1] - L2[t] * xA[t, FT] -\n V[t] * (yA[t, FT] - yA[t, FT+1])\n ) for t = 1:T\n )\n constraint(\n c,\n (xA[t, i] - xA[t-1, i]) / dt -\n (1 / At) * (L2[t] * (yA[t, i-1] - xA[t, i]) - V[t] * (yA[t, i] - yA[t, i+1])) for\n (t, i) in itr1\n )\n constraint(\n c,\n (xA[t, NT+1] - xA[t-1, NT+1]) / dt -\n (1 / Ar) * (L2[t] * xA[t, NT] - (F - D) * xA[t, NT+1] - V[t] * yA[t, NT+1]) for\n t = 1:T\n )\n constraint(c, V[t] - u[t] * D - D for t = 0:T)\n constraint(c, L2[t] - u[t] * D - F for t = 0:T)\n constraint(\n c,\n yA[t, i] * (1 - xA[t, i]) - alpha * xA[t, i] * (1 - yA[t, i]) for (t, i) in itr2\n )\n\n return 
ExaModel(c)\nend","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"distillation_column_model (generic function with 2 methods)","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"using ExaModels, NLPModelsIpopt\n\nm = distillation_column_model(10)\nipopt(m)","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"This page was generated using Literate.jl.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"EditURL = \"opf.jl\"","category":"page"},{"location":"opf/#opf","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"","category":"section"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"function parse_ac_power_data(filename)\n data = PowerModels.parse_file(filename)\n PowerModels.standardize_cost_terms!(data, order = 2)\n PowerModels.calc_thermal_limits!(data)\n ref = PowerModels.build_ref(data)[:it][:pm][:nw][0]\n\n arcdict = Dict(a => k for (k, a) in enumerate(ref[:arcs]))\n busdict = Dict(k => i for (i, (k, v)) in enumerate(ref[:bus]))\n gendict = Dict(k => i for (i, (k, v)) in enumerate(ref[:gen]))\n branchdict = Dict(k => i for (i, (k, v)) in enumerate(ref[:branch]))\n\n return (\n bus = [\n begin\n bus_loads = [ref[:load][l] for l in ref[:bus_loads][k]]\n bus_shunts = [ref[:shunt][s] for s in ref[:bus_shunts][k]]\n pd = sum(load[\"pd\"] for load in bus_loads; init = 0.0)\n gs = sum(shunt[\"gs\"] for shunt in bus_shunts; init = 0.0)\n qd = sum(load[\"qd\"] for load in bus_loads; init = 0.0)\n bs = sum(shunt[\"bs\"] for shunt in bus_shunts; init = 0.0)\n (i = busdict[k], pd = pd, gs = gs, qd = qd, bs = bs)\n end for (k, v) in ref[:bus]\n ],\n gen = [\n (\n i = gendict[k],\n cost1 = v[\"cost\"][1],\n cost2 = v[\"cost\"][2],\n cost3 = v[\"cost\"][3],\n bus = busdict[v[\"gen_bus\"]],\n ) for (k, v) in ref[:gen]\n ],\n arc = [\n (i = k, rate_a = ref[:branch][l][\"rate_a\"], bus = busdict[i]) for\n (k, (l, i, j)) in enumerate(ref[:arcs])\n ],\n branch = [\n begin\n f_idx = arcdict[i, branch[\"f_bus\"], branch[\"t_bus\"]]\n t_idx = arcdict[i, branch[\"t_bus\"], branch[\"f_bus\"]]\n g, b = PowerModels.calc_branch_y(branch)\n tr, ti = PowerModels.calc_branch_t(branch)\n ttm = tr^2 + ti^2\n g_fr = branch[\"g_fr\"]\n b_fr = branch[\"b_fr\"]\n g_to = branch[\"g_to\"]\n b_to = branch[\"b_to\"]\n c1 = (-g * tr - b * ti) / ttm\n c2 = (-b * tr + g * ti) / ttm\n c3 = (-g * tr + b * ti) / ttm\n c4 = (-b * tr - g * ti) / ttm\n c5 = (g + g_fr) / ttm\n c6 = (b + b_fr) / ttm\n c7 = (g + g_to)\n c8 = (b + b_to)\n (\n i = branchdict[i],\n j = 1,\n f_idx = f_idx,\n t_idx = t_idx,\n f_bus = busdict[branch[\"f_bus\"]],\n t_bus = busdict[branch[\"t_bus\"]],\n c1 = c1,\n c2 = c2,\n c3 = c3,\n c4 = c4,\n c5 = c5,\n c6 = c6,\n c7 = c7,\n c8 = c8,\n rate_a_sq = branch[\"rate_a\"]^2,\n )\n end for (i, branch) in ref[:branch]\n ],\n ref_buses = [busdict[i] for (i, k) in ref[:ref_buses]],\n vmax = [v[\"vmax\"] for (k, v) 
in ref[:bus]],\n vmin = [v[\"vmin\"] for (k, v) in ref[:bus]],\n pmax = [v[\"pmax\"] for (k, v) in ref[:gen]],\n pmin = [v[\"pmin\"] for (k, v) in ref[:gen]],\n qmax = [v[\"qmax\"] for (k, v) in ref[:gen]],\n qmin = [v[\"qmin\"] for (k, v) in ref[:gen]],\n rate_a = [ref[:branch][l][\"rate_a\"] for (k, (l, i, j)) in enumerate(ref[:arcs])],\n angmax = [b[\"angmax\"] for (i, b) in ref[:branch]],\n angmin = [b[\"angmin\"] for (i, b) in ref[:branch]],\n )\nend\n\nconvert_data(data::N, backend) where {names,N<:NamedTuple{names}} =\n NamedTuple{names}(ExaModels.convert_array(d, backend) for d in data)\n\nparse_ac_power_data(filename, backend) =\n convert_data(parse_ac_power_data(filename), backend)\n\nfunction ac_power_model(filename; backend = nothing, T = Float64)\n\n data = parse_ac_power_data(filename, backend)\n\n w = ExaCore(T; backend = backend)\n\n va = variable(w, length(data.bus);)\n\n vm = variable(\n w,\n length(data.bus);\n start = fill!(similar(data.bus, Float64), 1.0),\n lvar = data.vmin,\n uvar = data.vmax,\n )\n pg = variable(w, length(data.gen); lvar = data.pmin, uvar = data.pmax)\n\n qg = variable(w, length(data.gen); lvar = data.qmin, uvar = data.qmax)\n\n p = variable(w, length(data.arc); lvar = -data.rate_a, uvar = data.rate_a)\n\n q = variable(w, length(data.arc); lvar = -data.rate_a, uvar = data.rate_a)\n\n o = objective(w, g.cost1 * pg[g.i]^2 + g.cost2 * pg[g.i] + g.cost3 for g in data.gen)\n\n c1 = constraint(w, va[i] for i in data.ref_buses)\n\n c2 = constraint(\n w,\n p[b.f_idx] - b.c5 * vm[b.f_bus]^2 -\n b.c3 * (vm[b.f_bus] * vm[b.t_bus] * cos(va[b.f_bus] - va[b.t_bus])) -\n b.c4 * (vm[b.f_bus] * vm[b.t_bus] * sin(va[b.f_bus] - va[b.t_bus])) for\n b in data.branch\n )\n\n c3 = constraint(\n w,\n q[b.f_idx] +\n b.c6 * vm[b.f_bus]^2 +\n b.c4 * (vm[b.f_bus] * vm[b.t_bus] * cos(va[b.f_bus] - va[b.t_bus])) -\n b.c3 * (vm[b.f_bus] * vm[b.t_bus] * sin(va[b.f_bus] - va[b.t_bus])) for\n b in data.branch\n )\n\n c4 = constraint(\n w,\n p[b.t_idx] - b.c7 * vm[b.t_bus]^2 -\n b.c1 * (vm[b.t_bus] * vm[b.f_bus] * cos(va[b.t_bus] - va[b.f_bus])) -\n b.c2 * (vm[b.t_bus] * vm[b.f_bus] * sin(va[b.t_bus] - va[b.f_bus])) for\n b in data.branch\n )\n\n c5 = constraint(\n w,\n q[b.t_idx] +\n b.c8 * vm[b.t_bus]^2 +\n b.c2 * (vm[b.t_bus] * vm[b.f_bus] * cos(va[b.t_bus] - va[b.f_bus])) -\n b.c1 * (vm[b.t_bus] * vm[b.f_bus] * sin(va[b.t_bus] - va[b.f_bus])) for\n b in data.branch\n )\n\n c6 = constraint(\n w,\n va[b.f_bus] - va[b.t_bus] for b in data.branch;\n lcon = data.angmin,\n ucon = data.angmax,\n )\n c7 = constraint(\n w,\n p[b.f_idx]^2 + q[b.f_idx]^2 - b.rate_a_sq for b in data.branch;\n lcon = fill!(similar(data.branch, Float64, length(data.branch)), -Inf),\n )\n c8 = constraint(\n w,\n p[b.t_idx]^2 + q[b.t_idx]^2 - b.rate_a_sq for b in data.branch;\n lcon = fill!(similar(data.branch, Float64, length(data.branch)), -Inf),\n )\n\n c9 = constraint(w, b.pd + b.gs * vm[b.i]^2 for b in data.bus)\n\n c10 = constraint(w, b.qd - b.bs * vm[b.i]^2 for b in data.bus)\n\n c11 = constraint!(w, c9, a.bus => p[a.i] for a in data.arc)\n c12 = constraint!(w, c10, a.bus => q[a.i] for a in data.arc)\n\n c13 = constraint!(w, c9, g.bus => -pg[g.i] for g in data.gen)\n c14 = constraint!(w, c10, g.bus => -qg[g.i] for g in data.gen)\n\n return ExaModel(w)\n\nend","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"ac_power_model (generic function with 1 method)","category":"page"},{"location":"opf/","page":"Example: Optimal Power 
Flow","title":"Example: Optimal Power Flow","text":"We first download the case file.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"using Downloads\n\ncase = tempname() * \".m\"\n\nDownloads.download(\n \"https://raw.githubusercontent.com/power-grid-lib/pglib-opf/dc6be4b2f85ca0e776952ec22cbd4c22396ea5a3/pglib_opf_case3_lmbd.m\",\n case,\n)","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"\"/tmp/jl_Bkpez0PgTm.m\"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"Then, we can model/sovle the problem.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"using PowerModels, ExaModels, NLPModelsIpopt\n\nm = ac_power_model(case)\nipopt(m)","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"This page was generated using Literate.jl.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"EditURL = \"performance.jl\"","category":"page"},{"location":"performance/#Performance-Tips","page":"Performance Tips","title":"Performance Tips","text":"","category":"section"},{"location":"performance/#Use-a-function-to-create-a-model","page":"Performance Tips","title":"Use a function to create a model","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"It is always better to use functions to create ExaModels. This in this way, the functions used for specifing objective/constraint functions are not recreated over all over, and thus, we can take advantage of the previously compiled model creation code. Let's consider the following example.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using ExaModels\n\nt = @elapsed begin\n c = ExaCore()\n N = 10\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.178431722 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Even at the second call,","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"t = @elapsed begin\n c = ExaCore()\n N = 10\n x = variable(c, N; start = (mod(i, 2) == 1 ? 
-1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.176039465 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"the model creation time can be slightly reduced but the compilation time is still quite significant.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"But instead, if you create a function, we can significantly reduce the model creation time.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"function luksan_vlcek_model(N)\n c = ExaCore()\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nt = @elapsed luksan_vlcek_model(N)\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.197882836 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"t = @elapsed luksan_vlcek_model(N)\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.000122904 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"So, the model creation time can be essentially nothing. Thus, if you care about the model creation time, always make sure to write a function for creating the model, and do not directly create a model from the REPL.","category":"page"},{"location":"performance/#Make-sure-your-array's-eltype-is-concrete","page":"Performance Tips","title":"Make sure your array's eltype is concrete","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"In order for ExaModels to run for loops over the array you provided without any overhead caused by type inference, the eltype of the data array should always be a concrete type. Furthermore, this is required if you want to run ExaModels on GPU accelerators.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Let's take an example.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using ExaModels\n\nN = 1000\n\nfunction luksan_vlcek_model_concrete(N)\n c = ExaCore()\n\n arr1 = Array(2:N)\n arr2 = Array(1:N-2)\n\n x = variable(c, N; start = (mod(i, 2) == 1 ? 
-1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i in arr2\n )\n m = ExaModel(c)\nend\n\nfunction luksan_vlcek_model_non_concrete(N)\n c = ExaCore()\n\n arr1 = Array{Any}(2:N)\n arr2 = Array{Any}(1:N-2)\n\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i in arr2\n )\n m = ExaModel(c)\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"luksan_vlcek_model_non_concrete (generic function with 1 method)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Here, observe that","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"isconcretetype(eltype(Array(2:N)))","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"true","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"isconcretetype(eltype(Array{Any}(2:N)))","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"false","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As you can see, the first array type has concrete eltypes, whereas the second array type has non concrete eltypes. Due to this, the array stored in the model created by luksan_vlcek_model_non_concrete will have non-concrete eltypes.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Now let's compare the performance. 
We will use the following benchmark function here.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using NLPModels\n\nfunction benchmark_callbacks(m; N = 100)\n nvar = m.meta.nvar\n ncon = m.meta.ncon\n nnzj = m.meta.nnzj\n nnzh = m.meta.nnzh\n\n x = copy(m.meta.x0)\n y = similar(m.meta.x0, ncon)\n c = similar(m.meta.x0, ncon)\n g = similar(m.meta.x0, nvar)\n jac = similar(m.meta.x0, nnzj)\n hess = similar(m.meta.x0, nnzh)\n jrows = similar(m.meta.x0, Int, nnzj)\n jcols = similar(m.meta.x0, Int, nnzj)\n hrows = similar(m.meta.x0, Int, nnzh)\n hcols = similar(m.meta.x0, Int, nnzh)\n\n GC.enable(false)\n\n NLPModels.obj(m, x) # to compile\n\n tobj = (1 / N) * @elapsed for t = 1:N\n NLPModels.obj(m, x)\n end\n\n NLPModels.cons!(m, x, c) # to compile\n tcon = (1 / N) * @elapsed for t = 1:N\n NLPModels.cons!(m, x, c)\n end\n\n NLPModels.grad!(m, x, g) # to compile\n tgrad = (1 / N) * @elapsed for t = 1:N\n NLPModels.grad!(m, x, g)\n end\n\n NLPModels.jac_coord!(m, x, jac) # to compile\n tjac = (1 / N) * @elapsed for t = 1:N\n NLPModels.jac_coord!(m, x, jac)\n end\n\n NLPModels.hess_coord!(m, x, y, hess) # to compile\n thess = (1 / N) * @elapsed for t = 1:N\n NLPModels.hess_coord!(m, x, y, hess)\n end\n\n NLPModels.jac_structure!(m, jrows, jcols) # to compile\n tjacs = (1 / N) * @elapsed for t = 1:N\n NLPModels.jac_structure!(m, jrows, jcols)\n end\n\n NLPModels.hess_structure!(m, hrows, hcols) # to compile\n thesss = (1 / N) * @elapsed for t = 1:N\n NLPModels.hess_structure!(m, hrows, hcols)\n end\n\n GC.enable(true)\n\n return (\n tobj = tobj,\n tcon = tcon,\n tgrad = tgrad,\n tjac = tjac,\n thess = thess,\n tjacs = tjacs,\n thesss = thesss,\n )\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"benchmark_callbacks (generic function with 1 method)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"The performance comparison is here:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"m1 = luksan_vlcek_model_concrete(N)\nm2 = luksan_vlcek_model_non_concrete(N)\n\nbenchmark_callbacks(m1)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"(tobj = 1.97457e-6, tcon = 7.43174e-5, tgrad = 4.1918499999999995e-6, tjac = 0.00014950855000000002, thess = 0.00070227492, tjacs = 1.877276e-5, thesss = 2.536899e-5)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"benchmark_callbacks(m2)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"(tobj = 0.00030228149, tcon = 0.0005757477500000001, tgrad = 0.00012270718, tjac = 0.0005138847, thess = 0.0013747706700000001, tjacs = 0.0003864415, thesss = 0.00072836099)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As can be seen here, having concrete eltype dramatically improves the performance. This is because when all the data arrays' eltypes are concrete, the AD evaluations can be performed without any type inferernce, and this should be as fast as highly optimized C/C++/Fortran code.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"When you're using GPU accelerators, the eltype of the array should always be concrete. 
In fact, a non-concrete eltype will already cause an error when creating the array. For example,","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using CUDA\n\ntry\n arr1 = CuArray(Array{Any}(2:N))\ncatch e\n showerror(stdout, e)\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"CuArray only supports element types that are allocated inline.\nAny is not allocated inline\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"This page was generated using Literate.jl.","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"EditURL = \"quad.jl\"","category":"page"},{"location":"quad/#quad","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"","category":"section"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"function quadrotor_model(N = 3; backend = nothing)\n\n n = 9\n p = 4\n nd = 9\n d(i, j, N) =\n (j == 1 ? 1 * sin(2 * pi / N * i) : 0.0) +\n (j == 3 ? 2 * sin(4 * pi / N * i) : 0.0) +\n (j == 5 ? 2 * i / N : 0.0)\n dt = 0.01\n R = fill(1 / 10, 4)\n Q = [1, 0, 1, 0, 1, 0, 1, 1, 1]\n Qf = [1, 0, 1, 0, 1, 0, 1, 1, 1] / dt\n\n x0s = [(i, 0.0) for i = 1:n]\n itr0 = [(i, j, R[j]) for (i, j) in Base.product(1:N, 1:p)]\n itr1 = [(i, j, Q[j], d(i, j, N)) for (i, j) in Base.product(1:N, 1:n)]\n itr2 = [(j, Qf[j], d(N + 1, j, N)) for j = 1:n]\n\n c = ExaCore(; backend = backend)\n\n x = variable(c, 1:N+1, 1:n)\n u = variable(c, 1:N, 1:p)\n\n constraint(c, x[1, i] - x0 for (i, x0) in x0s)\n constraint(c, -x[i+1, 1] + x[i, 1] + (x[i, 2]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 2] +\n x[i, 2] +\n (\n u[i, 1] * cos(x[i, 7]) * sin(x[i, 8]) * cos(x[i, 9]) +\n u[i, 1] * sin(x[i, 7]) * sin(x[i, 9])\n ) * dt for i = 1:N\n )\n constraint(c, -x[i+1, 3] + x[i, 3] + (x[i, 4]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 4] +\n x[i, 4] +\n (\n u[i, 1] * cos(x[i, 7]) * sin(x[i, 8]) * sin(x[i, 9]) -\n u[i, 1] * sin(x[i, 7]) * cos(x[i, 9])\n ) * dt for i = 1:N\n )\n constraint(c, -x[i+1, 5] + x[i, 5] + (x[i, 6]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 6] + x[i, 6] + (u[i, 1] * cos(x[i, 7]) * cos(x[i, 8]) - 9.8) * dt for\n i = 1:N\n )\n constraint(\n c,\n -x[i+1, 7] +\n x[i, 7] +\n (u[i, 2] * cos(x[i, 7]) / cos(x[i, 8]) + u[i, 3] * sin(x[i, 7]) / cos(x[i, 8])) * dt\n for i = 1:N\n )\n constraint(\n c,\n -x[i+1, 8] + x[i, 8] + (-u[i, 2] * sin(x[i, 7]) + u[i, 3] * cos(x[i, 7])) * dt for\n i = 1:N\n )\n constraint(\n c,\n -x[i+1, 9] +\n x[i, 9] +\n (\n u[i, 2] * cos(x[i, 7]) * tan(x[i, 8]) +\n u[i, 3] * sin(x[i, 7]) * tan(x[i, 8]) +\n u[i, 4]\n ) * dt for i = 1:N\n )\n\n objective(c, 0.5 * R * (u[i, j]^2) for (i, j, R) in itr0)\n objective(c, 0.5 * Q * (x[i, j] - d)^2 for (i, j, Q, d) in itr1)\n objective(c, 0.5 * Qf * (x[N+1, j] - d)^2 for (j, Qf, d) in itr2)\n\n m = ExaModel(c)\n\nend","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"quadrotor_model (generic function with 2 methods)","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"using ExaModels, NLPModelsIpopt\n\nm = quadrotor_model(100)\nresult = ipopt(m)","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: 
Quadrotor","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"This page was generated using Literate.jl.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"EditURL = \"guide.jl\"","category":"page"},{"location":"guide/#guide","page":"Getting Started","title":"Getting Started","text":"","category":"section"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"ExaModels can create nonlinear prgogramming models and allows solving the created models using NLP solvers (in particular, those that are interfaced with NLPModels, such as NLPModelsIpopt and MadNLP. This documentation page will describe how to use ExaModels to model and solve nonlinear optimization problems.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"We will first consider the following simple nonlinear program [3]:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"beginaligned\nmin_x_i_i=0^N sum_i=2^N 100(x_i-1^2-x_i)^2+(x_i-1-1)^2\ntextst 3x_i+1^3+2x_i+2-5+sin(x_i+1-x_i+2)sin(x_i+1+x_i+2)+4x_i+1-x_i e^x_i-x_i+1-3 = 0\nendaligned","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"We will follow the following Steps to create the model/solve this optimization problem.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Step 0: import ExaModels.jl\nStep 1: create a ExaCore object, wherein we can progressively build an optimization model.\nStep 2: create optimization variables with variable, while attaching it to previously created ExaCore.\nStep 3 (interchangable with Step 3): create objective function with objective, while attaching it to previously created ExaCore.\nStep 4 (interchangable with Step 2): create constraints with constraint, while attaching it to previously created ExaCore.\nStep 5: create an ExaModel based on the ExaCore.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, let's jump right in. We import ExaModels via (Step 0):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"using ExaModels","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, all the functions that are necessary for creating model are imported to into Main.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaCore object can be created simply by (Step 1):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"c = ExaCore()","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaCore\n\n Float type: ...................... Float64\n Array type: ...................... Vector{Float64}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This is where our optimziation model information will be progressively stored. 
the resulting object is not yet an NLPModel, but it will essentially store all the necessary information.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, let's create the optimization variables. From the problem definition, we can see that we will need N scalar variables. We will choose N=10, and create the variable x\\in\\mathbb{R}^N with the following command:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"N = 10\nx = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Variable\n\n x ∈ R^{10}\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This creates the variable x, which we will be able to refer to when we create the objective and constraints. It also records the necessary information in the ExaCore object so that an optimization model can later be created properly. Observe that we have used the keyword argument start to specify the initial guess for the solution. The variable upper and lower bounds can be specified in a similar manner.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The objective can be set as follows:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Objective\n\n min (...) + ∑_{p ∈ P} f(x,p)\n\n where |P| = 9\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"note: Note\nNote that the terms here are summed, without explicitly using sum( ... ) syntax.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The constraints can be set as follows:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Constraint\n\n s.t. (...)\n g♭ ≤ [g(x,p)]_{p ∈ P} ≤ g♯\n\n where |P| = 8\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"
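By default, constraints created this way are equality constraints (lcon = ucon = 0). Inequality bounds can be attached through the lcon and ucon keyword arguments; a hypothetical example taken from the constraint docstring (shown for illustration only, and not added to our running model):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"constraint(c, x[i] + x[i+1] for i = 1:9; lcon = -1, ucon = (1 + i for i = 1:9))","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Finally, we are ready to create an ExaModel from the data we have collected in ExaCore. 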
Since ExaCore includes all the necessary information, we can do this simply by:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"m = ExaModel(c)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaModel{Float64, Vector{Float64}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ████████████████████ 8 \n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████████████████ 8 \n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nnzh: (-36.36% sparsity) 75 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nonlinear: ████████████████████ 8 \n nnzj: ( 70.00% sparsity) 24 \n\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, we have an optimization model ready to be solved. This problem can be solved, for example, with the Ipopt solver, as follows.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"using NLPModelsIpopt\nresult = ipopt(m)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Here, result is an AbstractExecutionStats, which typically contains the solution information. We can inspect several pieces of information as follows.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"println(\"Status: $(result.status)\")\nprintln(\"Number of iterations: $(result.iter)\")","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Status: first_order\nNumber of iterations: 6\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The solution values for variable x can be inquired by:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"sol = solution(result, x)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"10-element view(::Vector{Float64}, 1:10) with eltype Float64:\n -0.9505563573613093\n 0.9139008176388945\n 0.9890905176644905\n 0.9985592422681151\n 0.9998087408802769\n 0.9999745932450963\n 0.9999966246997642\n 0.9999995512524277\n 0.999999944919307\n 0.999999930070643","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"ExaModels provides several APIs similar to this:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"solution inquires the primal solution.\nmultipliers inquires the dual solution.\nmultipliers_L inquires the lower bound dual solution.\nmultipliers_U inquires the upper bound dual solution.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"
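For example, had we stored the Constraint object returned by constraint earlier (say, con = constraint(c, ...)), the constraint multipliers could be queried in the same fashion (a sketch mirroring the multipliers docstring; con is hypothetical here):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"con_multipliers = multipliers(result, con)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This concludes a short tutorial on how to use ExaModels to model and solve optimization problems. Want to learn more? Take a look at the following examples, which provide further tutorials on how to use ExaModels.jl. 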
Each example is designed to demonstrate a few additional techniques.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Example: Quadrotor: modeling multiple types of objective values and constraints.\nExample: Distillation Column: using two-dimensional index sets for variables.\nExample: Optimal Power Flow: handling complex data and using constraint augmentation.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This page was generated using Literate.jl.","category":"page"},{"location":"core/#ExaModels","page":"API Manual","title":"ExaModels","text":"","category":"section"},{"location":"core/","page":"API Manual","title":"API Manual","text":"Modules = [ExaModels]","category":"page"},{"location":"core/#ExaModels.ExaModels","page":"API Manual","title":"ExaModels.ExaModels","text":"ExaModels\n\nAn algebraic modeling and automatic differentiation tool in Julia Language, specialized for SIMD abstraction of nonlinear programs.\n\nFor more information, please visit https://github.com/exanauts/ExaModels.jl\n\n\n\n\n\n","category":"module"},{"location":"core/#ExaModels.AdjointNode1","page":"API Manual","title":"ExaModels.AdjointNode1","text":"AdjointNode1{F, T, I}\n\nA node with one child for first-order forward pass tree\n\nFields:\n\nx::T: function value\ny::T: first-order sensitivity\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNode2","page":"API Manual","title":"ExaModels.AdjointNode2","text":"AdjointNode2{F, T, I1, I2}\n\nA node with two children for first-order forward pass tree\n\nFields:\n\nx::T: function value\ny1::T: first-order sensitivity w.r.t. first argument\ny2::T: first-order sensitivity w.r.t. second argument\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNodeSource","page":"API Manual","title":"ExaModels.AdjointNodeSource","text":"AdjointNodeSource{VT}\n\nA source of AdjointNode. adjoint_node_source[i] returns an AdjointNodeVar at index i.\n\nFields:\n\ninner::VT: variable vector\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNodeVar","page":"API Manual","title":"ExaModels.AdjointNodeVar","text":"AdjointNodeVar{I, T}\n\nA variable node for first-order forward pass tree\n\nFields:\n\ni::I: index\nx::T: value\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNull","page":"API Manual","title":"ExaModels.AdjointNull","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Compressor","page":"API Manual","title":"ExaModels.Compressor","text":"Compressor{I}\n\nData structure for the sparse index\n\nFields:\n\ninner::I: stores the sparse index as a tuple form\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ExaCore","page":"API Manual","title":"ExaModels.ExaCore","text":"ExaCore([array_eltype::Type; backend = backend, minimize = true])\n\nReturns an intermediate data object ExaCore, which later can be used for creating ExaModel\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore()\nAn ExaCore\n\n Float type: ...................... Float64\n Array type: ...................... Vector{Float64}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 
0\n\njulia> c = ExaCore(Float32)\nAn ExaCore\n\n Float type: ...................... Float32\n Array type: ...................... Vector{Float32}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n\njulia> using CUDA\n\njulia> c = ExaCore(Float32; backend = CUDABackend())\nAn ExaCore\n\n Float type: ...................... Float32\n Array type: ...................... CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}\n Backend: ......................... CUDA.CUDAKernels.CUDABackend\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ExaModel-Tuple{C} where C<:ExaCore","page":"API Manual","title":"ExaModels.ExaModel","text":"ExaModel(core)\n\nReturns an ExaModel object, which can be solved by nonlinear optimization solvers within the JuliaSmoothOptimizers ecosystem, such as NLPModelsIpopt or MadNLP.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore(); # create an ExaCore object\n\njulia> x = variable(c, 1:10); # create variables\n\njulia> objective(c, x[i]^2 for i in 1:10); # set objective function\n\njulia> m = ExaModel(c) # create an ExaModel object\nAn ExaModel{Float64, Vector{Float64}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nnzh: ( 81.82% sparsity) 10 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nnzj: (------% sparsity)\n\njulia> using NLPModelsIpopt\n\njulia> result = ipopt(m; print_level=0) # solve the problem\n\"Execution stats: first-order stationary\"\n\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.Node1","page":"API Manual","title":"ExaModels.Node1","text":"Node1{F, I}\n\nA node with one child for symbolic expression tree\n\nFields:\n\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Node2","page":"API Manual","title":"ExaModels.Node2","text":"Node2{F, I1, I2}\n\nA node with two children for symbolic expression tree\n\nFields:\n\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Null","page":"API Manual","title":"ExaModels.Null","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ParIndexed","page":"API Manual","title":"ExaModels.ParIndexed","text":"ParIndexed{I, J}\n\nA parameterized data node\n\nFields:\n\ninner::I: parameter for the data\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ParSource","page":"API Manual","title":"ExaModels.ParSource","text":"ParSource\n\nA source of parameterized data\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SIMDFunction","page":"API Manual","title":"ExaModels.SIMDFunction","text":"SIMDFunction(gen::Base.Generator, o0 = 0, o1 = 0, o2 = 0)\n\nReturns a SIMDFunction using the gen.\n\nArguments:\n\ngen: an iterable function specified in Base.Generator format\no0: offset for the function evaluation\no1: offset for the derivative evaluation\no2: offset for the second-order derivative 
evaluation\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNode1","page":"API Manual","title":"ExaModels.SecondAdjointNode1","text":"SecondAdjointNode1{F, T, I}\n\nA node with one child for second-order forward pass tree\n\nFields:\n\nx::T: function value\ny::T: first-order sensitivity\nh::T: second-order sensitivity\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNode2","page":"API Manual","title":"ExaModels.SecondAdjointNode2","text":"SecondAdjointNode2{F, T, I1, I2}\n\nA node with two children for second-order forward pass tree\n\nFields:\n\nx::T: function value\ny1::T: first-order sensitivity w.r.t. first argument\ny2::T: first-order sensitivity w.r.t. second argument\nh11::T: second-order sensitivity w.r.t. first argument\nh12::T: second-order sensitivity w.r.t. first and second argument\nh22::T: second-order sensitivity w.r.t. second argument\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNodeSource","page":"API Manual","title":"ExaModels.SecondAdjointNodeSource","text":"SecondAdjointNodeSource{VT}\n\nA source of SecondAdjointNode. adjoint_node_source[i] returns a SecondAdjointNodeVar at index i.\n\nFields:\n\ninner::VT: variable vector\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNodeVar","page":"API Manual","title":"ExaModels.SecondAdjointNodeVar","text":"SecondAdjointNodeVar{I, T}\n\nA variable node for second-order forward pass tree\n\nFields:\n\ni::I: index\nx::T: value\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNull","page":"API Manual","title":"ExaModels.SecondAdjointNull","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Var","page":"API Manual","title":"ExaModels.Var","text":"Var{I}\n\nA variable node for symbolic expression tree\n\nFields:\n\ni::I: (parameterized) index \n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.VarSource","page":"API Manual","title":"ExaModels.VarSource","text":"VarSource\n\nA source of variable nodes\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.WrapperNLPModel-Tuple{Any, Any}","page":"API Manual","title":"ExaModels.WrapperNLPModel","text":"WrapperNLPModel(VT, m)\n\nReturns a WrapperModel{T,VT} wrapping m <: AbstractNLPModel{T}\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.WrapperNLPModel-Tuple{Any}","page":"API Manual","title":"ExaModels.WrapperNLPModel","text":"WrapperNLPModel(m)\n\nReturns a WrapperModel{Float64,Vector{Float64}} wrapping m\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{C}, Tuple{T}, Tuple{C, Any}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, n; start = 0, lcon = 0, ucon = 0)\n\nAdds empty constraints of dimension n, so that later the terms can be added with constraint!. \n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{C}, Tuple{T}, Tuple{C, Base.Generator}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, generator; start = 0, lcon = 0, ucon = 0)\n\nAdds constraints specified by a generator to core, and returns a Constraint object. \n\nKeyword Arguments\n\nstart: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.\nlcon : The constraint lower bound. 
Can either be Number, AbstractArray, or Generator.\nucon : The constraint upper bound. Can either be Number, AbstractArray, or Generator.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10);\n\njulia> constraint(c, x[i] + x[i+1] for i=1:9; lcon = -1, ucon = (1+i for i=1:9))\nConstraint\n\n s.t. (...)\n g♭ ≤ [g(x,p)]_{p ∈ P} ≤ g♯\n\n where |P| = 9\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{N}, Tuple{C}, Tuple{T}, Tuple{C, N}, Tuple{C, N, Any}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T}), N<:ExaModels.AbstractNode}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, expr [, pars]; start = 0, lcon = 0, ucon = 0)\n\nAdds constraints specified by an expr and pars to core, and returns a Constraint object. \n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.drpass-Union{Tuple{D}, Tuple{D, Any, Any}} where D<:ExaModels.AdjointNull","page":"API Manual","title":"ExaModels.drpass","text":"drpass(d::D, y, adj)\n\nPerforms dense gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ny: result vector\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.gradient!-NTuple{4, Any}","page":"API Manual","title":"ExaModels.gradient!","text":"gradient!(y, f, x, adj)\n\nPerforms dense gradient evaluation\n\nArguments:\n\ny: result vector\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.grpass-Union{Tuple{D}, Tuple{D, Vararg{Any, 5}}} where D<:Union{ExaModels.AdjointNull, ExaModels.ParIndexed}","page":"API Manual","title":"ExaModels.grpass","text":"grpass(d::D, comp, y, o1, cnt, adj)\n\nPerforms sparse gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ncomp: a Compressor, which helps map counter to sparse vector index\ny: result vector\no1: index offset\ncnt: counter\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.hdrpass-Union{Tuple{T2}, Tuple{T1}, Tuple{T1, T2, Vararg{Any, 6}}} where {T1<:ExaModels.SecondAdjointNode1, T2<:ExaModels.SecondAdjointNode1}","page":"API Manual","title":"ExaModels.hdrpass","text":"hdrpass(t1::T1, t2::T2, comp, y1, y2, o2, cnt, adj)\n\nPerforms sparse hessian evaluation ((df1/dx)(df2/dx)' portion) via the reverse pass on the computation (sub)graph formed by second-order forward pass\n\nArguments:\n\nt1: second-order computation (sub)graph regarding f1\nt2: second-order computation (sub)graph regarding f2\ncomp: a Compressor, which helps map counter to sparse vector index\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\no2: index offset\ncnt: counter\nadj: second adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.jrpass-Tuple{ExaModels.AdjointNull, Vararg{Any, 7}}","page":"API Manual","title":"ExaModels.jrpass","text":"jrpass(d::D, comp, i, y1, y2, o1, cnt, adj)\n\nPerforms sparse jacobian evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ncomp: a Compressor, which helps map counter to sparse vector index\ni: constraint index (this is i-th constraint)\ny1: 
result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\no1: index offset\ncnt: counter\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers-Tuple{SolverCore.AbstractExecutionStats, ExaModels.Constraint}","page":"API Manual","title":"ExaModels.multipliers","text":"multipliers(result, y)\n\nReturns the multipliers for constraints y associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> y = constraint(c, x[i] + x[i+1] for i=1:9; lcon = -1, ucon = (1+i for i=1:9));\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers(result, y);\n\n\njulia> val[1] ≈ 0.81933930\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers_L-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.multipliers_L","text":"multipliers_L(result, x)\n\nReturns the multipliers_L for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers_L(result, x);\n\njulia> isapprox(val, fill(0, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers_U-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.multipliers_U","text":"multipliers_U(result, x)\n\nReturns the multipliers_U for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers_U(result, x);\n\njulia> isapprox(val, fill(2, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.objective-Union{Tuple{C}, Tuple{C, Any}} where C<:ExaCore","page":"API Manual","title":"ExaModels.objective","text":"objective(core::ExaCore, generator)\n\nAdds objective terms specified by a generator to core, and returns an Objective object. Note: it is assumed that the terms are summed.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10);\n\njulia> objective(c, x[i]^2 for i=1:10)\nObjective\n\n min (...) 
+ ∑_{p ∈ P} f(x,p)\n\n where |P| = 10\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.objective-Union{Tuple{N}, Tuple{C}, Tuple{C, N}, Tuple{C, N, Any}} where {C<:ExaCore, N<:ExaModels.AbstractNode}","page":"API Manual","title":"ExaModels.objective","text":"objective(core::ExaCore, expr [, pars])\n\nAdds objective terms specified by an expr and pars to core, and returns an Objective object.\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.sgradient!-NTuple{4, Any}","page":"API Manual","title":"ExaModels.sgradient!","text":"sgradient!(y, f, x, adj)\n\nPerforms sparse gradient evaluation\n\nArguments:\n\ny: result vector\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.shessian!-NTuple{6, Any}","page":"API Manual","title":"ExaModels.shessian!","text":"shessian!(y1, y2, f, x, adj1, adj2)\n\nPerforms sparse hessian evaluation\n\nArguments:\n\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj1: initial first adjoint\nadj2: initial second adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.sjacobian!-NTuple{5, Any}","page":"API Manual","title":"ExaModels.sjacobian!","text":"sjacobian!(y1, y2, f, x, adj)\n\nPerforms sparse jacobian evaluation\n\nArguments:\n\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.solution-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.solution","text":"solution(result, x)\n\nReturns the solution for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = solution(result, x);\n\njulia> isapprox(val, fill(1, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.variable-Union{Tuple{C}, Tuple{T}, Tuple{C, Vararg{Any}}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.variable","text":"variable(core, dims...; start = 0, lvar = -Inf, uvar = Inf)\n\nAdds variables with dimensions specified by dims to core, and returns a Variable object. dims can be either Integer or UnitRange.\n\nKeyword Arguments\n\nstart: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.\nlvar : The variable lower bound. Can either be Number, AbstractArray, or Generator.\nuvar : The variable upper bound. 
Can either be Number, AbstractArray, or Generator.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10; start = (sin(i) for i=1:10))\nVariable\n\n x ∈ R^{10}\n\njulia> y = variable(c, 2:10, 3:5; lvar = zeros(9,3), uvar = ones(9,3))\nVariable\n\n x ∈ R^{9 × 3}\n\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.@register_bivariate-NTuple{6, Any}","page":"API Manual","title":"ExaModels.@register_bivariate","text":"register_bivariate(f, df1, df2, ddf11, ddf12, ddf22)\n\nRegister a bivariate function f to ExaModels, so that it can be used within objective and constraint expressions\n\nArguments:\n\nf: function\ndf1: derivative function (w.r.t. first argument)\ndf2: derivative function (w.r.t. second argument)\nddf11: second-order derivative function (w.r.t. first argument)\nddf12: second-order derivative function (w.r.t. first and second argument)\nddf22: second-order derivative function (w.r.t. second argument)\n\nExample\n\njulia> using ExaModels\n\njulia> relu23(x, y) = (x > 0 || y > 0) ? (x + y)^3 : zero(x)\nrelu23 (generic function with 1 method)\n\njulia> drelu231(x, y) = (x > 0 || y > 0) ? 3 * (x + y)^2 : zero(x)\ndrelu231 (generic function with 1 method)\n\njulia> drelu232(x, y) = (x > 0 || y > 0) ? 3 * (x + y)^2 : zero(x)\ndrelu232 (generic function with 1 method)\n\njulia> ddrelu2311(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2311 (generic function with 1 method)\n\njulia> ddrelu2312(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2312 (generic function with 1 method)\n\njulia> ddrelu2322(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2322 (generic function with 1 method)\n\njulia> @register_bivariate(relu23, drelu231, drelu232, ddrelu2311, ddrelu2312, ddrelu2322)\n\n\n\n\n\n","category":"macro"},{"location":"core/#ExaModels.@register_univariate-Tuple{Any, Any, Any}","page":"API Manual","title":"ExaModels.@register_univariate","text":"@register_univariate(f, df, ddf)\n\nRegister a univariate function f to ExaModels, so that it can be used within objective and constraint expressions\n\nArguments:\n\nf: function\ndf: derivative function\nddf: second-order derivative function\n\nExample\n\njulia> using ExaModels\n\njulia> relu3(x) = x > 0 ? x^3 : zero(x)\nrelu3 (generic function with 1 method)\n\njulia> drelu3(x) = x > 0 ? 3*x^2 : zero(x)\ndrelu3 (generic function with 1 method)\n\njulia> ddrelu3(x) = x > 0 ? 6*x : zero(x)\nddrelu3 (generic function with 1 method)\n\njulia> @register_univariate(relu3, drelu3, ddrelu3)\n\n\n\n\n\n","category":"macro"},{"location":"simd/#simd","page":"Mathematical Abstraction","title":"SIMD Abstraction","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"On this page, we explain what the SIMD abstraction of a nonlinear program is, and why it can be beneficial for the scalable optimization of large-scale problems. 
More discussion can be found in our paper.","category":"page"},{"location":"simd/#What-is-SIMD-abstraction?","page":"Mathematical Abstraction","title":"What is SIMD abstraction?","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"The mathematical statement of the problem formulation is as follows.","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"\\begin{aligned}\n\\min_{x^\\flat \\leq x \\leq x^\\sharp} \\quad & \\sum_{l \\in L} \\sum_{i \\in I_l} f^{(l)}(x; p^{(l)}_i) \\\\\\\\\n\\text{s.t.} \\quad & \\left[g^{(m)}(x; q_j)\\right]_{j \\in J_m} + \\sum_{n \\in N_m} \\sum_{k \\in K_n} h^{(n)}(x; s^{(n)}_k) = 0, \\quad \\forall m \\in M\n\\end{aligned}","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"where f^{(l)}(\\cdot,\\cdot), g^{(m)}(\\cdot,\\cdot), and h^{(n)}(\\cdot,\\cdot) are twice differentiable functions with respect to the first argument, whereas \\{\\{p^{(l)}_i\\}_{i\\in I_l}\\}_{l\\in L}, \\{\\{q_j\\}_{j\\in J_m}\\}_{m\\in M}, and \\{\\{s^{(n)}_k\\}_{k\\in K_n}\\}_{n\\in N_m, m\\in M} are problem data, which can either be discrete or continuous. It is also assumed that our functions f^{(l)}(\\cdot,\\cdot), g^{(m)}(\\cdot,\\cdot), and h^{(n)}(\\cdot,\\cdot) can be expressed with computational graphs of moderate length. ","category":"page"},{"location":"simd/#Why-SIMD-abstraction?","page":"Mathematical Abstraction","title":"Why SIMD abstraction?","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"Many physics-based models, such as AC OPF, have a highly repetitive structure. One manifestation of this is that the mathematical statement of the model is concise even when the practical model contains millions of variables and constraints. This is possible because the model repeats a small number of expressions over index and data sets. For example, it suffices to use 15 computational patterns to fully specify the AC OPF model. These patterns arise from (1) generation cost, (2) reference bus voltage angle constraint, (3-6) active and reactive power flow (from and to), (7) voltage angle difference constraint, (8-9) apparent power flow limits (from and to), (10-11) power balance equations, (12-13) generators' contributions to the power balance equations, and (14-15) in/out flows contributions to the power balance equations. However, such repetitive structure is not well exploited in the standard NLP modeling paradigms. In fact, without the SIMD abstraction, it is difficult for the AD package to detect the parallelizable structure within the model, as doing so would require full inspection of the computational graph over all expressions. By preserving the repetitive structure in the model, it becomes directly available to the AD implementation.","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"Using the multiple dispatch feature of Julia, ExaModels.jl generates highly efficient derivative computation code, specifically compiled for each computational pattern in the model. These derivative evaluation codes can be run over the data in various GPU array formats, and implemented via array and kernel programming in Julia Language. 
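For instance, a single computational pattern applied over an index set is written as one Julia generator, and one derivative kernel is compiled per pattern and mapped over the data (this snippet repeats the objective pattern from Getting Started):","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"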
In turn, ExaModels.jl has the capability to efficiently evaluate first and second-order derivatives using GPU accelerators.","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"EditURL = \"jump.jl\"","category":"page"},{"location":"jump/#JuMP-Interface-(Experimental)","page":"JuMP Interface (experimental)","title":"JuMP Interface (Experimental)","text":"","category":"section"},{"location":"jump/#JuMP-to-an-ExaModel","page":"JuMP Interface (experimental)","title":"JuMP to an ExaModel","text":"","category":"section"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"We have an experimental interface to JuMP models. A JuMP model can be directly converted to an ExaModel. It is as simple as this:","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using ExaModels, JuMP, CUDA\n\nN = 10\njm = Model()\n\n@variable(jm, x[i = 1:N], start = mod(i, 2) == 1 ? -1.2 : 1.0)\n@constraint(\n jm,\n s[i = 1:N-2],\n 3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 == 0.0\n)\n@objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))\n\nem = ExaModel(jm; backend = CUDABackend())","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"An ExaModel{Float64, CUDA.CuArray{Float64, 1, CUDA.DeviceMemory}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ████████████████████ 8 \n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████████████████ 8 \n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nnzh: (-212.73% sparsity) 172 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nonlinear: ████████████████████ 8 \n nnzj: ( 0.00% sparsity) 80 \n\n","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Here, note that only scalar objective/constraints created via @constraint and @objective API are supported. Older syntax like @NLconstraint and @NLobjective are not supported. We can solve the model using any of the solvers supported by ExaModels. For example, we can use MadNLP:","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using MadNLPGPU\n\nresult = madnlp(em)","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"\"Execution stats: Optimal Solution Found (tol = 1.0e-04).\"","category":"page"},{"location":"jump/#JuMP-Optimizer","page":"JuMP Interface (experimental)","title":"JuMP Optimizer","text":"","category":"section"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Alternatively, one can use the Optimizer interface provided by ExaModels. 
This feature can be used as follows.","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using ExaModels, JuMP, CUDA\nusing MadNLPGPU\n\nset_optimizer(jm, () -> ExaModels.MadNLPOptimizer(CUDABackend()))\noptimize!(jm)","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"This is MadNLP version v0.8.4, running with cuDSS v0.3.0\n\nNumber of nonzeros in constraint Jacobian............: 80\nNumber of nonzeros in Lagrangian Hessian.............: 172\n\nTotal number of variables............................: 10\n variables with only lower bounds: 0\n variables with lower and upper bounds: 0\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 8\nTotal number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 2.0570000e+03 2.48e+01 1.00e+02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 2.0474072e+03 2.47e+01 2.97e+01 -1.0 2.27e+00 - 1.00e+00 4.00e-03h 1\n 2 1.1009058e+03 1.49e+01 2.24e+01 -1.0 2.24e+00 - 1.00e+00 1.00e+00h 1\n 3 1.1598223e+02 2.15e+00 5.34e+01 -1.0 2.14e+00 - 1.00e+00 1.00e+00h 1\n 4 6.5263510e+00 1.12e-01 4.74e+00 -1.0 1.72e-01 - 1.00e+00 1.00e+00h 1\n 5 6.2326771e+00 1.64e-03 2.08e-02 -1.0 5.91e-02 - 1.00e+00 1.00e+00h 1\n 6 6.2324576e+00 1.18e-06 1.22e-05 -3.8 1.40e-03 - 9.98e-01 1.00e+00h 1\n 7 6.2323021e+00 5.36e-11 1.98e-06 -5.0 3.12e-05 - 8.90e-01 1.00e+00h 1\n\nNumber of Iterations....: 7\n\n (scaled) (unscaled)\nObjective...............: 7.8690682927808819e-01 6.2323020878824593e+00\nDual infeasibility......: 1.9831098139189152e-06 1.5706229726237811e-05\nConstraint violation....: 5.3644583980288433e-11 5.3644583980288433e-11\nComplementarity.........: 1.1122043961251575e-05 8.8086588173112493e-05\nOverall NLP error.......: 8.8086588173112493e-05 8.8086588173112493e-05\n\nNumber of objective function evaluations = 8\nNumber of objective gradient evaluations = 8\nNumber of constraint evaluations = 8\nNumber of constraint Jacobian evaluations = 8\nNumber of Lagrangian Hessian evaluations = 7\nTotal wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.161\nTotal wall-clock secs in linear solver = 0.018\nTotal wall-clock secs in NLP function evaluations = 0.016\nTotal wall-clock secs = 0.195\n\nEXIT: Optimal Solution Found (tol = 1.0e-04).\n","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Again, only scalar objective/constraints created via @constraint and @objective API are supported. 
Older syntax like @NLconstraint and @NLobjective are not supported.","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"This page was generated using Literate.jl.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"EditURL = \"gpu.jl\"","category":"page"},{"location":"gpu/#Accelerations","page":"Accelerations","title":"Accelerations","text":"","category":"section"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"One of the key features of ExaModels.jl is the ability to evaluate derivatives either on multi-threaded CPUs or on GPU accelerators. Currently, GPU acceleration is only tested for NVIDIA GPUs. If you'd like to use multi-threaded CPU acceleration, start julia with","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"$ julia -t 4 # using 4 threads","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Also, if you're using NVIDIA GPUs, make sure that the appropriate drivers are installed.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Let's say that our CPU code is as follows.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function luksan_vlcek_obj(x, i)\n return 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2\nend\n\nfunction luksan_vlcek_con(x, i)\n return 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3\nend\n\nfunction luksan_vlcek_x0(i)\n return mod(i, 2) == 1 ? -1.2 : 1.0\nend\n\nfunction luksan_vlcek_model(N)\n\n c = ExaCore()\n x = variable(c, N; start = (luksan_vlcek_x0(i) for i = 1:N))\n constraint(c, luksan_vlcek_con(x, i) for i = 1:N-2)\n objective(c, luksan_vlcek_obj(x, i) for i = 2:N)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"luksan_vlcek_model (generic function with 1 method)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Now we simply modify this by","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function luksan_vlcek_model(N, backend = nothing)\n\n c = ExaCore(; backend = backend) # specify the backend\n x = variable(c, N; start = (luksan_vlcek_x0(i) for i = 1:N))\n constraint(c, luksan_vlcek_con(x, i) for i = 1:N-2)\n objective(c, luksan_vlcek_obj(x, i) for i = 2:N)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"luksan_vlcek_model (generic function with 2 methods)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"The acceleration can be done simply by specifying the backend. In particular, for multi-threaded CPUs,","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"using ExaModels, NLPModelsIpopt, KernelAbstractions\n\nm = luksan_vlcek_model(10, CPU())\nipopt(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"For NVIDIA GPUs, we can use CUDABackend. 
However, there are currently not many optimization solvers capable of solving problems on GPUs; the only option right now is MadNLP.jl. To use this, first install","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"import Pkg; Pkg.add(\"MadNLPGPU\")","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Then, we can run:","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"using CUDA, MadNLPGPU\n\nm = luksan_vlcek_model(10, CUDABackend())\nmadnlp(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"If we have arrays for the data, we simply need to convert them to the corresponding device array types. In particular,","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function cuda_luksan_vlcek_model(N)\n c = ExaCore(; backend = CUDABackend())\n d1 = CuArray(1:N-2)\n d2 = CuArray(2:N)\n d3 = CuArray([luksan_vlcek_x0(i) for i = 1:N])\n\n x = variable(c, N; start = d3)\n constraint(c, luksan_vlcek_con(x, i) for i in d1)\n objective(c, luksan_vlcek_obj(x, i) for i in d2)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"cuda_luksan_vlcek_model (generic function with 1 method)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"m = cuda_luksan_vlcek_model(10)\nmadnlp(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"This page was generated using Literate.jl.","category":"page"},{"location":"ref/#References","page":"References","title":"References","text":"","category":"section"},{"location":"ref/","page":"References","title":"References","text":"L. T. Biegler. Nonlinear programming: concepts, algorithms, and applications to chemical processes (SIAM, 2010).\n\n\n\nC. Coffrin, R. Bent, K. Sundar, Y. Ng and M. Lubin. PowerModels.jl: An open-source framework for exploring power flow formulations. In: 2018 Power Systems Computation Conference (PSCC) (IEEE, 2018); pp. 1–8.\n\n\n\nL. Lukšan and J. Vlček. Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems. Numerical linear algebra with applications 5, 219–247 (1998).\n\n\n\n","category":"page"},{"location":"#Introduction","page":"Introduction","title":"Introduction","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Welcome to the documentation of ExaModels.jl!","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"note: Note\nExaModels runs on julia VERSION ≥ v\"1.9\"","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"warning: Warning\nPlease help us improve ExaModels and this documentation! ExaModels is in the early stage of development, and you may encounter unintended behavior or missing documentation. If anything is not working as intended or documentation is missing, please open an issue or pull request, or start a discussion. 
","category":"page"},{"location":"#What-is-ExaModels.jl?","page":"Introduction","title":"What is ExaModels.jl?","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl is an algebraic modeling and automatic differentiation tool in Julia Language, specialized for SIMD abstraction of nonlinear programs. ExaModels.jl employs what we call SIMD abstraction for nonlinear programs (NLPs), which allows for the preservation of the parallelizable structure within the model equations, facilitating efficient automatic differentiation either on the single-thread CPUs, multi-threaded CPUs, as well as GPU accelerators. More details about SIMD abstraction can be found here.","category":"page"},{"location":"#Key-differences-from-other-algebraic-modeling-tools","page":"Introduction","title":"Key differences from other algebraic modeling tools","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl is different from other algebraic modeling tools, such as JuMP or AMPL, in the following ways:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Modeling Interface: ExaModels.jl requires users to specify the model equations always in the form of Generators. This restrictive structure allows ExaModels.jl to preserve the SIMD-compatible structure in the model equations. This unique feature distinguishes ExaModels.jl from other algebraic modeling tools.\nPerformance: ExaModels.jl compiles (via Julia's compiler) derivative evaluation codes tailored to each computation pattern. Through reverse-mode automatic differentiation using these tailored codes, ExaModels.jl achieves significantly faster derivative evaluation speeds, even when using CPU.\nPortability: ExaModels.jl goes beyond traditional boundaries of","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"algebraic modeling systems by enabling derivative evaluation on GPU accelerators. Implementation of GPU kernels is accomplished using the portable programming paradigm offered by KernelAbstractions.jl. With ExaModels.jl, you can run your code on various devices, including multi-threaded CPUs, NVIDIA GPUs, AMD GPUs, and Intel GPUs. Note that Apple's Metal is currently not supported due to its lack of support for double-precision arithmetic.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Thus, ExaModels.jl shines when your model has","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"nonlinear objective and constraints;\na large number of variables and constraints;\nhighly repetitive structure;\nsparse Hessian and Jacobian.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"These features are often exhibited in optimization problems associated with first-principle physics-based models. Primary examples include optimal control problems formulated with direct subscription method [1] and network system optimization problems, such as optimal power flow [2] and gas network control/estimation problems.","category":"page"},{"location":"#Performance-Highlights","page":"Introduction","title":"Performance Highlights","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl significantly enhances the performance of derivative evaluations for nonlinear optimization problems that can benefit from SIMD abstraction. 
Recent benchmark results demonstrate this notable improvement. Notably, when solving the AC OPF problem for a 9241 bus system, derivative evaluation using ExaModels.jl on GPUs can be up to two orders of magnitude faster compared to JuMP or AMPL. Some benchmark results are available below. The following problems are used for benchmarking:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"LuksanVlcek problem\nQuadrotor control problem\nDistillation column control problem\nAC optimal power flow problem","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"(Image: benchmark)","category":"page"},{"location":"#Supported-Solvers","page":"Introduction","title":"Supported Solvers","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels can be used with any solver that can handle the NLPModel data type, but several callbacks are not currently implemented and may cause errors. Currently, it is tested with the following solvers:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Ipopt (via NLPModelsIpopt.jl)\nMadNLP.jl","category":"page"},{"location":"#Documentation-Structure","page":"Introduction","title":"Documentation Structure","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"This documentation is structured in the following way.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The remainder of this page highlights several key aspects of ExaModels.jl.\nThe mathematical abstraction of ExaModels.jl (the SIMD abstraction of nonlinear programming) is discussed on the Mathematical Abstraction page.\nA step-by-step tutorial on using ExaModels.jl can be found on the Tutorial page.\nThis documentation does not intend to discuss the engineering behind the implementation of ExaModels.jl. Some high-level ideas are discussed in a recent publication, but the full details will be discussed in future publications.","category":"page"},{"location":"#Citing-ExaModels.jl","page":"Introduction","title":"Citing ExaModels.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"If you use ExaModels.jl in your research, we would greatly appreciate your citing this preprint.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"@misc{shin2023accelerating,\n title={Accelerating Optimal Power Flow with {GPU}s: {SIMD} Abstraction of Nonlinear Programs and Condensed-Space Interior-Point Methods}, \n author={Sungho Shin and Fran{\\c{c}}ois Pacaud and Mihai Anitescu},\n year={2023},\n eprint={2307.16830},\n archivePrefix={arXiv},\n primaryClass={math.OC}\n}","category":"page"},{"location":"#Supporting-ExaModels.jl","page":"Introduction","title":"Supporting ExaModels.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Please report issues and feature requests via the GitHub issue tracker.\nQuestions are welcome at GitHub discussion forum.","category":"page"}]
+[{"location":"develop/#Developing-Extensions","page":"Developing Extensions","title":"Developing Extensions","text":"","category":"section"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"ExaModels.jl's API uses only plain Julia functions, and thus implementing extensions is straightforward. 
Below, we suggest a good practice for implementing an extension package.","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Let's say that we want to implement an extension package for the example problem in Getting Started. An extension package may look like:","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Root\n├── Project.toml\n├── src\n│ └── LuksanVlcekModels.jl\n└── test\n └── runtests.jl","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"Each of these files contains the following:","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# Project.toml\n\nname = \"LuksanVlcekModels\"\nuuid = \"0c5951a0-f777-487f-ad29-fac2b9a21bf1\"\nauthors = [\"Sungho Shin \"]\nversion = \"0.1.0\"\n\n[deps]\nExaModels = \"1037b233-b668-4ce9-9b63-f9f681f55dd2\"\n\n[extras]\nNLPModelsIpopt = \"f4238b75-b362-5c4c-b852-0801c9a21d71\"\nTest = \"8dfed614-e22c-5e08-85e1-65c5234f0b40\"\n\n[targets]\ntest = [\"Test\", \"NLPModelsIpopt\"]","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# src/LuksanVlcekModels.jl\n\nmodule LuksanVlcekModels\n\nimport ExaModels\n\nfunction luksan_vlcek_obj(x,i)\n return 100*(x[i-1]^2-x[i])^2+(x[i-1]-1)^2\nend\n\nfunction luksan_vlcek_con(x,i)\n return 3x[i+1]^3+2*x[i+2]-5+sin(x[i+1]-x[i+2])sin(x[i+1]+x[i+2])+4x[i+1]-x[i]exp(x[i]-x[i+1])-3\nend\n\nfunction luksan_vlcek_x0(i)\n return mod(i,2)==1 ? -1.2 : 1.0\nend\n\nfunction luksan_vlcek_model(N; backend = nothing)\n \n c = ExaModels.ExaCore(backend)\n x = ExaModels.variable(\n c, N;\n start = (luksan_vlcek_x0(i) for i=1:N)\n )\n ExaModels.constraint(\n c,\n luksan_vlcek_con(x,i)\n for i in 1:N-2)\n ExaModels.objective(c, luksan_vlcek_obj(x,i) for i in 2:N)\n \n return ExaModels.ExaModel(c) # returns the model\nend\n\nexport luksan_vlcek_model\n\nend # module LuksanVlcekModels","category":"page"},{"location":"develop/","page":"Developing Extensions","title":"Developing Extensions","text":"# test/runtests.jl\n\nusing Test, LuksanVlcekModels, NLPModelsIpopt\n\n@testset \"LuksanVlcekModelsTest\" begin\n m = luksan_vlcek_model(10)\n result = ipopt(m)\n\n @test result.status == :first_order\n @test result.solution ≈ [\n -0.9505563573613093\n 0.9139008176388945\n 0.9890905176644905\n 0.9985592422681151\n 0.9998087408802769\n 0.9999745932450963\n 0.9999966246997642\n 0.9999995512524277\n 0.999999944919307\n 0.999999930070643\n ]\n @test result.multipliers ≈ [\n 4.1358568305002255\n -1.876494903703342\n -0.06556333356358675\n -0.021931863018312875\n -0.0019537261317119302\n -0.00032910445671233547\n -3.8788212776372465e-5\n -7.376592164341867e-6\n ]\nend","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"EditURL = \"distillation.jl\"","category":"page"},{"location":"distillation/#distillation","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"","category":"section"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"function distillation_column_model(T = 3; backend = nothing)\n\n NT = 30\n FT = 17\n Ac = 0.5\n At = 0.25\n Ar = 1.0\n D = 0.2\n F = 0.4\n ybar = 0.8958\n ubar = 2.0\n alpha = 1.6\n dt = 10 / T\n xAf = 0.5\n xA0s = ExaModels.convert_array([(i, 0.5) for i = 
0:NT+1], backend)\n\n itr0 = ExaModels.convert_array(collect(Iterators.product(1:T, 1:FT-1)), backend)\n itr1 = ExaModels.convert_array(collect(Iterators.product(1:T, FT+1:NT)), backend)\n itr2 = ExaModels.convert_array(collect(Iterators.product(0:T, 0:NT+1)), backend)\n\n c = ExaCore(backend)\n\n xA = variable(c, 0:T, 0:NT+1; start = 0.5)\n yA = variable(c, 0:T, 0:NT+1; start = 0.5)\n u = variable(c, 0:T; start = 1.0)\n V = variable(c, 0:T; start = 1.0)\n L2 = variable(c, 0:T; start = 1.0)\n\n objective(c, (yA[t, 1] - ybar)^2 for t = 0:T)\n objective(c, (u[t] - ubar)^2 for t = 0:T)\n\n constraint(c, xA[0, i] - xA0 for (i, xA0) in xA0s)\n constraint(\n c,\n (xA[t, 0] - xA[t-1, 0]) / dt - (1 / Ac) * (yA[t, 1] - xA[t, 0]) for t = 1:T\n )\n constraint(\n c,\n (xA[t, i] - xA[t-1, i]) / dt -\n (1 / At) * (u[t] * D * (yA[t, i-1] - xA[t, i]) - V[t] * (yA[t, i] - yA[t, i+1])) for\n (t, i) in itr0\n )\n constraint(\n c,\n (xA[t, FT] - xA[t-1, FT]) / dt -\n (1 / At) * (\n F * xAf + u[t] * D * xA[t, FT-1] - L2[t] * xA[t, FT] -\n V[t] * (yA[t, FT] - yA[t, FT+1])\n ) for t = 1:T\n )\n constraint(\n c,\n (xA[t, i] - xA[t-1, i]) / dt -\n (1 / At) * (L2[t] * (yA[t, i-1] - xA[t, i]) - V[t] * (yA[t, i] - yA[t, i+1])) for\n (t, i) in itr1\n )\n constraint(\n c,\n (xA[t, NT+1] - xA[t-1, NT+1]) / dt -\n (1 / Ar) * (L2[t] * xA[t, NT] - (F - D) * xA[t, NT+1] - V[t] * yA[t, NT+1]) for\n t = 1:T\n )\n constraint(c, V[t] - u[t] * D - D for t = 0:T)\n constraint(c, L2[t] - u[t] * D - F for t = 0:T)\n constraint(\n c,\n yA[t, i] * (1 - xA[t, i]) - alpha * xA[t, i] * (1 - yA[t, i]) for (t, i) in itr2\n )\n\n return ExaModel(c)\nend","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"distillation_column_model (generic function with 2 methods)","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"using ExaModels, NLPModelsIpopt\n\nm = distillation_column_model(10)\nipopt(m)","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"","category":"page"},{"location":"distillation/","page":"Example: Distillation Column","title":"Example: Distillation Column","text":"This page was generated using Literate.jl.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"EditURL = \"opf.jl\"","category":"page"},{"location":"opf/#opf","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"","category":"section"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"function parse_ac_power_data(filename)\n data = PowerModels.parse_file(filename)\n PowerModels.standardize_cost_terms!(data, order = 2)\n PowerModels.calc_thermal_limits!(data)\n ref = PowerModels.build_ref(data)[:it][:pm][:nw][0]\n\n arcdict = Dict(a => k for (k, a) in enumerate(ref[:arcs]))\n busdict = Dict(k => i for (i, (k, v)) in enumerate(ref[:bus]))\n gendict = Dict(k => i for (i, (k, v)) in enumerate(ref[:gen]))\n branchdict = Dict(k => i for (i, (k, v)) in enumerate(ref[:branch]))\n\n return (\n bus = [\n begin\n bus_loads = [ref[:load][l] for l in ref[:bus_loads][k]]\n bus_shunts = [ref[:shunt][s] for s in 
ref[:bus_shunts][k]]\n pd = sum(load[\"pd\"] for load in bus_loads; init = 0.0)\n gs = sum(shunt[\"gs\"] for shunt in bus_shunts; init = 0.0)\n qd = sum(load[\"qd\"] for load in bus_loads; init = 0.0)\n bs = sum(shunt[\"bs\"] for shunt in bus_shunts; init = 0.0)\n (i = busdict[k], pd = pd, gs = gs, qd = qd, bs = bs)\n end for (k, v) in ref[:bus]\n ],\n gen = [\n (\n i = gendict[k],\n cost1 = v[\"cost\"][1],\n cost2 = v[\"cost\"][2],\n cost3 = v[\"cost\"][3],\n bus = busdict[v[\"gen_bus\"]],\n ) for (k, v) in ref[:gen]\n ],\n arc = [\n (i = k, rate_a = ref[:branch][l][\"rate_a\"], bus = busdict[i]) for\n (k, (l, i, j)) in enumerate(ref[:arcs])\n ],\n branch = [\n begin\n f_idx = arcdict[i, branch[\"f_bus\"], branch[\"t_bus\"]]\n t_idx = arcdict[i, branch[\"t_bus\"], branch[\"f_bus\"]]\n g, b = PowerModels.calc_branch_y(branch)\n tr, ti = PowerModels.calc_branch_t(branch)\n ttm = tr^2 + ti^2\n g_fr = branch[\"g_fr\"]\n b_fr = branch[\"b_fr\"]\n g_to = branch[\"g_to\"]\n b_to = branch[\"b_to\"]\n c1 = (-g * tr - b * ti) / ttm\n c2 = (-b * tr + g * ti) / ttm\n c3 = (-g * tr + b * ti) / ttm\n c4 = (-b * tr - g * ti) / ttm\n c5 = (g + g_fr) / ttm\n c6 = (b + b_fr) / ttm\n c7 = (g + g_to)\n c8 = (b + b_to)\n (\n i = branchdict[i],\n j = 1,\n f_idx = f_idx,\n t_idx = t_idx,\n f_bus = busdict[branch[\"f_bus\"]],\n t_bus = busdict[branch[\"t_bus\"]],\n c1 = c1,\n c2 = c2,\n c3 = c3,\n c4 = c4,\n c5 = c5,\n c6 = c6,\n c7 = c7,\n c8 = c8,\n rate_a_sq = branch[\"rate_a\"]^2,\n )\n end for (i, branch) in ref[:branch]\n ],\n ref_buses = [busdict[i] for (i, k) in ref[:ref_buses]],\n vmax = [v[\"vmax\"] for (k, v) in ref[:bus]],\n vmin = [v[\"vmin\"] for (k, v) in ref[:bus]],\n pmax = [v[\"pmax\"] for (k, v) in ref[:gen]],\n pmin = [v[\"pmin\"] for (k, v) in ref[:gen]],\n qmax = [v[\"qmax\"] for (k, v) in ref[:gen]],\n qmin = [v[\"qmin\"] for (k, v) in ref[:gen]],\n rate_a = [ref[:branch][l][\"rate_a\"] for (k, (l, i, j)) in enumerate(ref[:arcs])],\n angmax = [b[\"angmax\"] for (i, b) in ref[:branch]],\n angmin = [b[\"angmin\"] for (i, b) in ref[:branch]],\n )\nend\n\nconvert_data(data::N, backend) where {names,N<:NamedTuple{names}} =\n NamedTuple{names}(ExaModels.convert_array(d, backend) for d in data)\n\nparse_ac_power_data(filename, backend) =\n convert_data(parse_ac_power_data(filename), backend)\n\nfunction ac_power_model(filename; backend = nothing, T = Float64)\n\n data = parse_ac_power_data(filename, backend)\n\n w = ExaCore(T; backend = backend)\n\n va = variable(w, length(data.bus);)\n\n vm = variable(\n w,\n length(data.bus);\n start = fill!(similar(data.bus, Float64), 1.0),\n lvar = data.vmin,\n uvar = data.vmax,\n )\n pg = variable(w, length(data.gen); lvar = data.pmin, uvar = data.pmax)\n\n qg = variable(w, length(data.gen); lvar = data.qmin, uvar = data.qmax)\n\n p = variable(w, length(data.arc); lvar = -data.rate_a, uvar = data.rate_a)\n\n q = variable(w, length(data.arc); lvar = -data.rate_a, uvar = data.rate_a)\n\n o = objective(w, g.cost1 * pg[g.i]^2 + g.cost2 * pg[g.i] + g.cost3 for g in data.gen)\n\n c1 = constraint(w, va[i] for i in data.ref_buses)\n\n c2 = constraint(\n w,\n p[b.f_idx] - b.c5 * vm[b.f_bus]^2 -\n b.c3 * (vm[b.f_bus] * vm[b.t_bus] * cos(va[b.f_bus] - va[b.t_bus])) -\n b.c4 * (vm[b.f_bus] * vm[b.t_bus] * sin(va[b.f_bus] - va[b.t_bus])) for\n b in data.branch\n )\n\n c3 = constraint(\n w,\n q[b.f_idx] +\n b.c6 * vm[b.f_bus]^2 +\n b.c4 * (vm[b.f_bus] * vm[b.t_bus] * cos(va[b.f_bus] - va[b.t_bus])) -\n b.c3 * (vm[b.f_bus] * vm[b.t_bus] * sin(va[b.f_bus] - va[b.t_bus])) 
for\n b in data.branch\n )\n\n c4 = constraint(\n w,\n p[b.t_idx] - b.c7 * vm[b.t_bus]^2 -\n b.c1 * (vm[b.t_bus] * vm[b.f_bus] * cos(va[b.t_bus] - va[b.f_bus])) -\n b.c2 * (vm[b.t_bus] * vm[b.f_bus] * sin(va[b.t_bus] - va[b.f_bus])) for\n b in data.branch\n )\n\n c5 = constraint(\n w,\n q[b.t_idx] +\n b.c8 * vm[b.t_bus]^2 +\n b.c2 * (vm[b.t_bus] * vm[b.f_bus] * cos(va[b.t_bus] - va[b.f_bus])) -\n b.c1 * (vm[b.t_bus] * vm[b.f_bus] * sin(va[b.t_bus] - va[b.f_bus])) for\n b in data.branch\n )\n\n c6 = constraint(\n w,\n va[b.f_bus] - va[b.t_bus] for b in data.branch;\n lcon = data.angmin,\n ucon = data.angmax,\n )\n c7 = constraint(\n w,\n p[b.f_idx]^2 + q[b.f_idx]^2 - b.rate_a_sq for b in data.branch;\n lcon = fill!(similar(data.branch, Float64, length(data.branch)), -Inf),\n )\n c8 = constraint(\n w,\n p[b.t_idx]^2 + q[b.t_idx]^2 - b.rate_a_sq for b in data.branch;\n lcon = fill!(similar(data.branch, Float64, length(data.branch)), -Inf),\n )\n\n c9 = constraint(w, b.pd + b.gs * vm[b.i]^2 for b in data.bus)\n\n c10 = constraint(w, b.qd - b.bs * vm[b.i]^2 for b in data.bus)\n\n c11 = constraint!(w, c9, a.bus => p[a.i] for a in data.arc)\n c12 = constraint!(w, c10, a.bus => q[a.i] for a in data.arc)\n\n c13 = constraint!(w, c9, g.bus => -pg[g.i] for g in data.gen)\n c14 = constraint!(w, c10, g.bus => -qg[g.i] for g in data.gen)\n\n return ExaModel(w)\n\nend","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"ac_power_model (generic function with 1 method)","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"We first download the case file.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"using Downloads\n\ncase = tempname() * \".m\"\n\nDownloads.download(\n \"https://raw.githubusercontent.com/power-grid-lib/pglib-opf/dc6be4b2f85ca0e776952ec22cbd4c22396ea5a3/pglib_opf_case3_lmbd.m\",\n case,\n)","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"\"/tmp/jl_I7GVghKTyB.m\"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"Then, we can model/solve the problem.","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"using PowerModels, ExaModels, NLPModelsIpopt\n\nm = ac_power_model(case)\nipopt(m)","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"","category":"page"},{"location":"opf/","page":"Example: Optimal Power Flow","title":"Example: Optimal Power Flow","text":"This page was generated using Literate.jl.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"EditURL = \"performance.jl\"","category":"page"},{"location":"performance/#Performance-Tips","page":"Performance Tips","title":"Performance Tips","text":"","category":"section"},{"location":"performance/#Use-a-function-to-create-a-model","page":"Performance Tips","title":"Use a function to create a model","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"It is always better 
to use functions to create ExaModels. This way, the functions used for specifying objective/constraint functions are not recreated over and over, and thus we can take advantage of the previously compiled model creation code. Let's consider the following example.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using ExaModels\n\nt = @elapsed begin\n c = ExaCore()\n N = 10\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.100709978 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Even on the second call,","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"t = @elapsed begin\n c = ExaCore()\n N = 10\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] - x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.096844354 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"the model creation time is only slightly reduced, as the compilation time is still quite significant.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"If we instead wrap the model creation in a function, the model creation time can be reduced significantly.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"function luksan_vlcek_model(N)\n c = ExaCore()\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n )\n m = ExaModel(c)\nend\n\nt = @elapsed luksan_vlcek_model(N)\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.111111729 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"t = @elapsed luksan_vlcek_model(N)\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"0.000101106 seconds elapsed\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"So, the model creation time can be essentially nothing.
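","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"For instance, creating a model of a different size now reuses the compiled model creation code, so it should also be fast (a sketch; exact timings vary by machine):","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"t = @elapsed luksan_vlcek_model(2N)   # a different problem size, same compiled code\nprintln(\"$t seconds elapsed\")","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"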
Thus, if you care about the model creation time, always make sure to write a function for creating the model, and do not directly create a model from the REPL.","category":"page"},{"location":"performance/#Make-sure-your-array's-eltype-is-concrete","page":"Performance Tips","title":"Make sure your array's eltype is concrete","text":"","category":"section"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"In order for ExaModels to run for loops over the array you provided without any overhead caused by type inference, the eltype of the data array should always be a concrete type. Furthermore, this is required if you want to run ExaModels on GPU accelerators.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Let's take an example.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using ExaModels\n\nN = 1000\n\nfunction luksan_vlcek_model_concrete(N)\n c = ExaCore()\n\n arr1 = Array(2:N)\n arr2 = Array(1:N-2)\n\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i in arr2\n )\n m = ExaModel(c)\nend\n\nfunction luksan_vlcek_model_non_concrete(N)\n c = ExaCore()\n\n arr1 = Array{Any}(2:N)\n arr2 = Array{Any}(1:N-2)\n\n x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))\n objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i in arr1)\n constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i in arr2\n )\n m = ExaModel(c)\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"luksan_vlcek_model_non_concrete (generic function with 1 method)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Here, observe that","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"isconcretetype(eltype(Array(2:N)))","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"true","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"isconcretetype(eltype(Array{Any}(2:N)))","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"false","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As you can see, the first array type has a concrete eltype, whereas the second does not. As a result, the arrays stored in the model created by luksan_vlcek_model_non_concrete have a non-concrete eltype.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"Now let's compare the performance. 
We will use the following benchmark function here.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using NLPModels\n\nfunction benchmark_callbacks(m; N = 100)\n nvar = m.meta.nvar\n ncon = m.meta.ncon\n nnzj = m.meta.nnzj\n nnzh = m.meta.nnzh\n\n x = copy(m.meta.x0)\n y = similar(m.meta.x0, ncon)\n c = similar(m.meta.x0, ncon)\n g = similar(m.meta.x0, nvar)\n jac = similar(m.meta.x0, nnzj)\n hess = similar(m.meta.x0, nnzh)\n jrows = similar(m.meta.x0, Int, nnzj)\n jcols = similar(m.meta.x0, Int, nnzj)\n hrows = similar(m.meta.x0, Int, nnzh)\n hcols = similar(m.meta.x0, Int, nnzh)\n\n GC.enable(false)\n\n NLPModels.obj(m, x) # to compile\n\n tobj = (1 / N) * @elapsed for t = 1:N\n NLPModels.obj(m, x)\n end\n\n NLPModels.cons!(m, x, c) # to compile\n tcon = (1 / N) * @elapsed for t = 1:N\n NLPModels.cons!(m, x, c)\n end\n\n NLPModels.grad!(m, x, g) # to compile\n tgrad = (1 / N) * @elapsed for t = 1:N\n NLPModels.grad!(m, x, g)\n end\n\n NLPModels.jac_coord!(m, x, jac) # to compile\n tjac = (1 / N) * @elapsed for t = 1:N\n NLPModels.jac_coord!(m, x, jac)\n end\n\n NLPModels.hess_coord!(m, x, y, hess) # to compile\n thess = (1 / N) * @elapsed for t = 1:N\n NLPModels.hess_coord!(m, x, y, hess)\n end\n\n NLPModels.jac_structure!(m, jrows, jcols) # to compile\n tjacs = (1 / N) * @elapsed for t = 1:N\n NLPModels.jac_structure!(m, jrows, jcols)\n end\n\n NLPModels.hess_structure!(m, hrows, hcols) # to compile\n thesss = (1 / N) * @elapsed for t = 1:N\n NLPModels.hess_structure!(m, hrows, hcols)\n end\n\n GC.enable(true)\n\n return (\n tobj = tobj,\n tcon = tcon,\n tgrad = tgrad,\n tjac = tjac,\n thess = thess,\n tjacs = tjacs,\n thesss = thesss,\n )\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"benchmark_callbacks (generic function with 1 method)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"The performance comparison is here:","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"m1 = luksan_vlcek_model_concrete(N)\nm2 = luksan_vlcek_model_non_concrete(N)\n\nbenchmark_callbacks(m1)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"(tobj = 1.2981000000000001e-6, tcon = 2.47138e-5, tgrad = 2.63309e-6, tjac = 5.055966000000001e-5, thess = 0.00044733208000000005, tjacs = 1.076116e-5, thesss = 1.753494e-5)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"benchmark_callbacks(m2)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"(tobj = 0.00013450468, tcon = 0.00019564444, tgrad = 4.413147e-5, tjac = 0.00024392682, thess = 0.0007955851899999999, tjacs = 0.000195035, thesss = 0.00035830939)","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"As can be seen here, having a concrete eltype dramatically improves the performance. This is because when all the data arrays' eltypes are concrete, the AD evaluations can be performed without any type inference, and this should be as fast as highly optimized C/C++/Fortran code.","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"When you're using GPU accelerators, the eltype of the array should always be concrete. 
In fact, a non-concrete eltype will already cause an error when creating the array. For example,","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"using CUDA\n\ntry\n arr1 = CuArray(Array{Any}(2:N))\ncatch e\n showerror(stdout, e)\nend","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"CuArray only supports element types that are allocated inline.\nAny is not allocated inline\n","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"","category":"page"},{"location":"performance/","page":"Performance Tips","title":"Performance Tips","text":"This page was generated using Literate.jl.","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"EditURL = \"quad.jl\"","category":"page"},{"location":"quad/#quad","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"","category":"section"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"function quadrotor_model(N = 3; backend = nothing)\n\n n = 9\n p = 4\n nd = 9\n d(i, j, N) =\n (j == 1 ? 1 * sin(2 * pi / N * i) : 0.0) +\n (j == 3 ? 2 * sin(4 * pi / N * i) : 0.0) +\n (j == 5 ? 2 * i / N : 0.0)\n dt = 0.01\n R = fill(1 / 10, 4)\n Q = [1, 0, 1, 0, 1, 0, 1, 1, 1]\n Qf = [1, 0, 1, 0, 1, 0, 1, 1, 1] / dt\n\n x0s = [(i, 0.0) for i = 1:n]\n itr0 = [(i, j, R[j]) for (i, j) in Base.product(1:N, 1:p)]\n itr1 = [(i, j, Q[j], d(i, j, N)) for (i, j) in Base.product(1:N, 1:n)]\n itr2 = [(j, Qf[j], d(N + 1, j, N)) for j = 1:n]\n\n c = ExaCore(; backend = backend)\n\n x = variable(c, 1:N+1, 1:n)\n u = variable(c, 1:N, 1:p)\n\n constraint(c, x[1, i] - x0 for (i, x0) in x0s)\n constraint(c, -x[i+1, 1] + x[i, 1] + (x[i, 2]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 2] +\n x[i, 2] +\n (\n u[i, 1] * cos(x[i, 7]) * sin(x[i, 8]) * cos(x[i, 9]) +\n u[i, 1] * sin(x[i, 7]) * sin(x[i, 9])\n ) * dt for i = 1:N\n )\n constraint(c, -x[i+1, 3] + x[i, 3] + (x[i, 4]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 4] +\n x[i, 4] +\n (\n u[i, 1] * cos(x[i, 7]) * sin(x[i, 8]) * sin(x[i, 9]) -\n u[i, 1] * sin(x[i, 7]) * cos(x[i, 9])\n ) * dt for i = 1:N\n )\n constraint(c, -x[i+1, 5] + x[i, 5] + (x[i, 6]) * dt for i = 1:N)\n constraint(\n c,\n -x[i+1, 6] + x[i, 6] + (u[i, 1] * cos(x[i, 7]) * cos(x[i, 8]) - 9.8) * dt for\n i = 1:N\n )\n constraint(\n c,\n -x[i+1, 7] +\n x[i, 7] +\n (u[i, 2] * cos(x[i, 7]) / cos(x[i, 8]) + u[i, 3] * sin(x[i, 7]) / cos(x[i, 8])) * dt\n for i = 1:N\n )\n constraint(\n c,\n -x[i+1, 8] + x[i, 8] + (-u[i, 2] * sin(x[i, 7]) + u[i, 3] * cos(x[i, 7])) * dt for\n i = 1:N\n )\n constraint(\n c,\n -x[i+1, 9] +\n x[i, 9] +\n (\n u[i, 2] * cos(x[i, 7]) * tan(x[i, 8]) +\n u[i, 3] * sin(x[i, 7]) * tan(x[i, 8]) +\n u[i, 4]\n ) * dt for i = 1:N\n )\n\n objective(c, 0.5 * R * (u[i, j]^2) for (i, j, R) in itr0)\n objective(c, 0.5 * Q * (x[i, j] - d)^2 for (i, j, Q, d) in itr1)\n objective(c, 0.5 * Qf * (x[N+1, j] - d)^2 for (j, Qf, d) in itr2)\n\n m = ExaModel(c)\n\nend","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"quadrotor_model (generic function with 2 methods)","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"using ExaModels, NLPModelsIpopt\n\nm = quadrotor_model(100)\nresult = ipopt(m)","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: 
Quadrotor","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"","category":"page"},{"location":"quad/","page":"Example: Quadrotor","title":"Example: Quadrotor","text":"This page was generated using Literate.jl.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"EditURL = \"guide.jl\"","category":"page"},{"location":"guide/#guide","page":"Getting Started","title":"Getting Started","text":"","category":"section"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"ExaModels can create nonlinear prgogramming models and allows solving the created models using NLP solvers (in particular, those that are interfaced with NLPModels, such as NLPModelsIpopt and MadNLP. This documentation page will describe how to use ExaModels to model and solve nonlinear optimization problems.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"We will first consider the following simple nonlinear program [3]:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"beginaligned\nmin_x_i_i=0^N sum_i=2^N 100(x_i-1^2-x_i)^2+(x_i-1-1)^2\ntextst 3x_i+1^3+2x_i+2-5+sin(x_i+1-x_i+2)sin(x_i+1+x_i+2)+4x_i+1-x_i e^x_i-x_i+1-3 = 0\nendaligned","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"We will follow the following Steps to create the model/solve this optimization problem.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Step 0: import ExaModels.jl\nStep 1: create a ExaCore object, wherein we can progressively build an optimization model.\nStep 2: create optimization variables with variable, while attaching it to previously created ExaCore.\nStep 3 (interchangable with Step 3): create objective function with objective, while attaching it to previously created ExaCore.\nStep 4 (interchangable with Step 2): create constraints with constraint, while attaching it to previously created ExaCore.\nStep 5: create an ExaModel based on the ExaCore.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, let's jump right in. We import ExaModels via (Step 0):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"using ExaModels","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, all the functions that are necessary for creating model are imported to into Main.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaCore object can be created simply by (Step 1):","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"c = ExaCore()","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaCore\n\n Float type: ...................... Float64\n Array type: ...................... Vector{Float64}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This is where our optimziation model information will be progressively stored. 
This object is not yet an NLPModel, but it will essentially store all the necessary information.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now, let's create the optimization variables. From the problem definition, we can see that we will need N scalar variables. We will choose N=10, and create the variable x \\in \\mathbb{R}^N with the following command:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"N = 10\nx = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Variable\n\n x ∈ R^{10}\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This creates the variable x, which we will be able to refer to when we create the objective and constraints. Also, this properly records the information in the ExaCore object so that an optimization model can later be created with the necessary information. Observe that we have used the keyword argument start to specify the initial guess for the solution. The variable upper and lower bounds can be specified in a similar manner.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The objective can be set as follows:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Objective\n\n min (...) + ∑_{p ∈ P} f(x,p)\n\n where |P| = 9\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"note: Note\nNote that the terms here are summed, without explicitly using sum( ... ) syntax.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The constraints can be set as follows:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"constraint(\n c,\n 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 for i = 1:N-2\n)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Constraint\n\n s.t. (...)\n g♭ ≤ [g(x,p)]_{p ∈ P} ≤ g♯\n\n where |P| = 8\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Finally, we are ready to create an ExaModel from the data we have collected in ExaCore. 
Since ExaCore includes all the necessary information, we can do this simply by:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"m = ExaModel(c)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"An ExaModel{Float64, Vector{Float64}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ████████████████████ 8 \n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████████████████ 8 \n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nnzh: (-36.36% sparsity) 75 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nonlinear: ████████████████████ 8 \n nnzj: ( 70.00% sparsity) 24 \n\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Now we have an optimization model ready to be solved. This problem can be solved, for example, with the Ipopt solver, as follows.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"using NLPModelsIpopt\nresult = ipopt(m)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"\"Execution stats: first-order stationary\"","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Here, result is an AbstractExecutionStats, which typically contains the solution information. We can inspect several pieces of information as follows.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"println(\"Status: $(result.status)\")\nprintln(\"Number of iterations: $(result.iter)\")","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Status: first_order\nNumber of iterations: 6\n","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"The solution values for variable x can be queried by:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"sol = solution(result, x)","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"10-element view(::Vector{Float64}, 1:10) with eltype Float64:\n -0.9505563573613093\n 0.9139008176388945\n 0.9890905176644905\n 0.9985592422681151\n 0.9998087408802769\n 0.9999745932450963\n 0.9999966246997642\n 0.9999995512524277\n 0.999999944919307\n 0.999999930070643","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"ExaModels provides several APIs similar to this:","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"solution queries the primal solution.\nmultipliers queries the dual solution.\nmultipliers_L queries the lower bound dual solution.\nmultipliers_U queries the upper bound dual solution.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This concludes a short tutorial on how to use ExaModels to model and solve optimization problems. Want to learn more? Take a look at the following examples, which provide further tutorials on how to use ExaModels.jl. 
Each of these examples is designed to demonstrate a few additional techniques.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"Example: Quadrotor: modeling multiple types of objective values and constraints.\nExample: Distillation Column: using two-dimensional index sets for variables.\nExample: Optimal Power Flow: handling complex data and using constraint augmentation.","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"","category":"page"},{"location":"guide/","page":"Getting Started","title":"Getting Started","text":"This page was generated using Literate.jl.","category":"page"},{"location":"core/#ExaModels","page":"API Manual","title":"ExaModels","text":"","category":"section"},{"location":"core/","page":"API Manual","title":"API Manual","text":"Modules = [ExaModels]","category":"page"},{"location":"core/#ExaModels.ExaModels","page":"API Manual","title":"ExaModels.ExaModels","text":"ExaModels\n\nAn algebraic modeling and automatic differentiation tool in Julia Language, specialized for SIMD abstraction of nonlinear programs.\n\nFor more information, please visit https://github.com/exanauts/ExaModels.jl\n\n\n\n\n\n","category":"module"},{"location":"core/#ExaModels.AdjointNode1","page":"API Manual","title":"ExaModels.AdjointNode1","text":"AdjointNode1{F, T, I}\n\nA node with one child for first-order forward pass tree\n\nFields:\n\nx::T: function value\ny::T: first-order sensitivity\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNode2","page":"API Manual","title":"ExaModels.AdjointNode2","text":"AdjointNode2{F, T, I1, I2}\n\nA node with two children for first-order forward pass tree\n\nFields:\n\nx::T: function value\ny1::T: first-order sensitivity w.r.t. first argument\ny2::T: first-order sensitivity w.r.t. second argument\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNodeSource","page":"API Manual","title":"ExaModels.AdjointNodeSource","text":"AdjointNodeSource{VT}\n\nA source of AdjointNode. adjoint_node_source[i] returns an AdjointNodeVar at index i.\n\nFields:\n\ninner::VT: variable vector\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNodeVar","page":"API Manual","title":"ExaModels.AdjointNodeVar","text":"AdjointNodeVar{I, T}\n\nA variable node for first-order forward pass tree\n\nFields:\n\ni::I: index\nx::T: value\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.AdjointNull","page":"API Manual","title":"ExaModels.AdjointNull","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Compressor","page":"API Manual","title":"ExaModels.Compressor","text":"Compressor{I}\n\nData structure for the sparse index\n\nFields:\n\ninner::I: stores the sparse index as a tuple form\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ExaCore","page":"API Manual","title":"ExaModels.ExaCore","text":"ExaCore([array_eltype::Type; backend = backend, minimize = true])\n\nReturns an intermediate data object ExaCore, which later can be used for creating ExaModel\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore()\nAn ExaCore\n\n Float type: ...................... Float64\n Array type: ...................... Vector{Float64}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 
0\n\njulia> c = ExaCore(Float32)\nAn ExaCore\n\n Float type: ...................... Float32\n Array type: ...................... Vector{Float32}\n Backend: ......................... Nothing\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n\njulia> using CUDA\n\njulia> c = ExaCore(Float32; backend = CUDABackend())\nAn ExaCore\n\n Float type: ...................... Float32\n Array type: ...................... CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}\n Backend: ......................... CUDA.CUDAKernels.CUDABackend\n\n number of objective patterns: .... 0\n number of constraint patterns: ... 0\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ExaModel-Tuple{C} where C<:ExaCore","page":"API Manual","title":"ExaModels.ExaModel","text":"ExaModel(core)\n\nReturns an ExaModel object, which can be solved by nonlinear optimization solvers within the JuliaSmoothOptimizers ecosystem, such as NLPModelsIpopt or MadNLP.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore(); # create an ExaCore object\n\njulia> x = variable(c, 1:10); # create variables\n\njulia> objective(c, x[i]^2 for i in 1:10); # set objective function\n\njulia> m = ExaModel(c) # create an ExaModel object\nAn ExaModel{Float64, Vector{Float64}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nnzh: ( 81.82% sparsity) 10 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nonlinear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0\n nnzj: (------% sparsity)\n\njulia> using NLPModelsIpopt\n\njulia> result = ipopt(m; print_level=0) # solve the problem\n\"Execution stats: first-order stationary\"\n\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.Node1","page":"API Manual","title":"ExaModels.Node1","text":"Node1{F, I}\n\nA node with one child for symbolic expression tree\n\nFields:\n\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Node2","page":"API Manual","title":"ExaModels.Node2","text":"Node2{F, I1, I2}\n\nA node with two children for symbolic expression tree\n\nFields:\n\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Null","page":"API Manual","title":"ExaModels.Null","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ParIndexed","page":"API Manual","title":"ExaModels.ParIndexed","text":"ParIndexed{I, J}\n\nA parameterized data node\n\nFields:\n\ninner::I: parameter for the data\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.ParSource","page":"API Manual","title":"ExaModels.ParSource","text":"ParSource\n\nA source of parameterized data\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SIMDFunction","page":"API Manual","title":"ExaModels.SIMDFunction","text":"SIMDFunction(gen::Base.Generator, o0 = 0, o1 = 0, o2 = 0)\n\nReturns a SIMDFunction using the gen.\n\nArguments:\n\ngen: an iterable function specified in Base.Generator format\no0: offset for the function evaluation\no1: offset for the derivative evaluation\no2: offset for the second-order derivative 
evaluation\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNode1","page":"API Manual","title":"ExaModels.SecondAdjointNode1","text":"SecondAdjointNode1{F, T, I}\n\nA node with one child for second-order forward pass tree\n\nFields:\n\nx::T: function value\ny::T: first-order sensitivity\nh::T: second-order sensitivity\ninner::I: children\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNode2","page":"API Manual","title":"ExaModels.SecondAdjointNode2","text":"SecondAdjointNode2{F, T, I1, I2}\n\nA node with two children for second-order forward pass tree\n\nFields:\n\nx::T: function value\ny1::T: first-order sensitivity w.r.t. first argument\ny2::T: first-order sensitivity w.r.t. second argument\nh11::T: second-order sensitivity w.r.t. first argument\nh12::T: second-order sensitivity w.r.t. first and second argument\nh22::T: second-order sensitivity w.r.t. second argument\ninner1::I1: children #1\ninner2::I2: children #2\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNodeSource","page":"API Manual","title":"ExaModels.SecondAdjointNodeSource","text":"SecondAdjointNodeSource{VT}\n\nA source of SecondAdjointNode. second_adjoint_node_source[i] returns a SecondAdjointNodeVar at index i.\n\nFields:\n\ninner::VT: variable vector\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNodeVar","page":"API Manual","title":"ExaModels.SecondAdjointNodeVar","text":"SecondAdjointNodeVar{I, T}\n\nA variable node for second-order forward pass tree\n\nFields:\n\ni::I: index\nx::T: value\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.SecondAdjointNull","page":"API Manual","title":"ExaModels.SecondAdjointNull","text":"Null\n\nA null node\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.Var","page":"API Manual","title":"ExaModels.Var","text":"Var{I}\n\nA variable node for symbolic expression tree\n\nFields:\n\ni::I: (parameterized) index \n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.VarSource","page":"API Manual","title":"ExaModels.VarSource","text":"VarSource\n\nA source of variable nodes\n\n\n\n\n\n","category":"type"},{"location":"core/#ExaModels.WrapperNLPModel-Tuple{Any, Any}","page":"API Manual","title":"ExaModels.WrapperNLPModel","text":"WrapperNLPModel(VT, m)\n\nReturns a WrapperModel{T,VT} wrapping m <: AbstractNLPModel{T}\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.WrapperNLPModel-Tuple{Any}","page":"API Manual","title":"ExaModels.WrapperNLPModel","text":"WrapperNLPModel(m)\n\nReturns a WrapperModel{Float64,Vector{Float64}} wrapping m\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{C}, Tuple{T}, Tuple{C, Any}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, n; start = 0, lcon = 0, ucon = 0)\n\nAdds empty constraints of dimension n, so that later the terms can be added with constraint!. \n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{C}, Tuple{T}, Tuple{C, Base.Generator}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, generator; start = 0, lcon = 0, ucon = 0)\n\nAdds constraints specified by a generator to core, and returns a Constraint object. \n\nKeyword Arguments\n\nstart: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.\nlcon : The constraint lower bound. 
Can either be Number, AbstractArray, or Generator.\nucon : The constraint upper bound. Can either be Number, AbstractArray, or Generator.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10);\n\njulia> constraint(c, x[i] + x[i+1] for i=1:9; lcon = -1, ucon = (1+i for i=1:9))\nConstraint\n\n s.t. (...)\n g♭ ≤ [g(x,p)]_{p ∈ P} ≤ g♯\n\n where |P| = 9\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.constraint-Union{Tuple{N}, Tuple{C}, Tuple{T}, Tuple{C, N}, Tuple{C, N, Any}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T}), N<:ExaModels.AbstractNode}","page":"API Manual","title":"ExaModels.constraint","text":"constraint(core, expr [, pars]; start = 0, lcon = 0, ucon = 0)\n\nAdds constraints specified by an expr and pars to core, and returns a Constraint object. \n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.drpass-Union{Tuple{D}, Tuple{D, Any, Any}} where D<:ExaModels.AdjointNull","page":"API Manual","title":"ExaModels.drpass","text":"drpass(d::D, y, adj)\n\nPerforms dense gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ny: result vector\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.gradient!-NTuple{4, Any}","page":"API Manual","title":"ExaModels.gradient!","text":"gradient!(y, f, x, adj)\n\nPerforms dense gradient evaluation\n\nArguments:\n\ny: result vector\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.grpass-Union{Tuple{D}, Tuple{D, Vararg{Any, 5}}} where D<:Union{ExaModels.AdjointNull, ExaModels.ParIndexed}","page":"API Manual","title":"ExaModels.grpass","text":"grpass(d::D, comp, y, o1, cnt, adj)\n\nPerforms sparse gradient evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ncomp: a Compressor, which helps map counter to sparse vector index\ny: result vector\no1: index offset\ncnt: counter\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.hdrpass-Union{Tuple{T2}, Tuple{T1}, Tuple{T1, T2, Vararg{Any, 6}}} where {T1<:ExaModels.SecondAdjointNode1, T2<:ExaModels.SecondAdjointNode1}","page":"API Manual","title":"ExaModels.hdrpass","text":"hdrpass(t1::T1, t2::T2, comp, y1, y2, o2, cnt, adj)\n\nPerforms sparse hessian evaluation ((df1/dx)(df2/dx)' portion) via the reverse pass on the computation (sub)graph formed by second-order forward pass\n\nArguments:\n\nt1: second-order computation (sub)graph regarding f1\nt2: second-order computation (sub)graph regarding f2\ncomp: a Compressor, which helps map counter to sparse vector index\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\no2: index offset\ncnt: counter\nadj: second adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.jrpass-Tuple{ExaModels.AdjointNull, Vararg{Any, 7}}","page":"API Manual","title":"ExaModels.jrpass","text":"jrpass(d::D, comp, i, y1, y2, o1, cnt, adj)\n\nPerforms sparse jacobian evaluation via the reverse pass on the computation (sub)graph formed by forward pass\n\nArguments:\n\nd: first-order computation (sub)graph\ncomp: a Compressor, which helps map counter to sparse vector index\ni: constraint index (this is the i-th constraint)\ny1: 
result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\no1: index offset\ncnt: counter\nadj: adjoint propagated up to the current node\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers-Tuple{SolverCore.AbstractExecutionStats, ExaModels.Constraint}","page":"API Manual","title":"ExaModels.multipliers","text":"multipliers(result, y)\n\nReturns the multipliers for constraints y associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> y = constraint(c, x[i] + x[i+1] for i=1:9; lcon = -1, ucon = (1+i for i=1:9));\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers(result, y);\n\n\njulia> val[1] ≈ 0.81933930\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers_L-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.multipliers_L","text":"multipliers_L(result, x)\n\nReturns the multipliers_L for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers_L(result, x);\n\njulia> isapprox(val, fill(0, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.multipliers_U-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.multipliers_U","text":"multipliers_U(result, x)\n\nReturns the multipliers_U for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = multipliers_U(result, x);\n\njulia> isapprox(val, fill(2, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.objective-Union{Tuple{C}, Tuple{C, Any}} where C<:ExaCore","page":"API Manual","title":"ExaModels.objective","text":"objective(core::ExaCore, generator)\n\nAdds objective terms specified by a generator to core, and returns an Objective object. Note: it is assumed that the terms are summed.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10);\n\njulia> objective(c, x[i]^2 for i=1:10)\nObjective\n\n min (...) 
+ ∑_{p ∈ P} f(x,p)\n\n where |P| = 10\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.objective-Union{Tuple{N}, Tuple{C}, Tuple{C, N}, Tuple{C, N, Any}} where {C<:ExaCore, N<:ExaModels.AbstractNode}","page":"API Manual","title":"ExaModels.objective","text":"objective(core::ExaCore, expr [, pars])\n\nAdds objective terms specified by an expr and pars to core, and returns an Objective object.\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.sgradient!-NTuple{4, Any}","page":"API Manual","title":"ExaModels.sgradient!","text":"sgradient!(y, f, x, adj)\n\nPerforms sparse gradient evaluation\n\nArguments:\n\ny: result vector\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.shessian!-NTuple{6, Any}","page":"API Manual","title":"ExaModels.shessian!","text":"shessian!(y1, y2, f, x, adj1, adj2)\n\nPerforms sparse hessian evaluation\n\nArguments:\n\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj1: initial first adjoint\nadj2: initial second adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.sjacobian!-NTuple{5, Any}","page":"API Manual","title":"ExaModels.sjacobian!","text":"sjacobian!(y1, y2, f, x, adj)\n\nPerforms sparse jacobian evaluation\n\nArguments:\n\ny1: result vector #1\ny2: result vector #2 (only used when evaluating sparsity)\nf: the function to be differentiated in SIMDFunction format\nx: variable vector\nadj: initial adjoint\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.solution-Tuple{SolverCore.AbstractExecutionStats, Any}","page":"API Manual","title":"ExaModels.solution","text":"solution(result, x)\n\nReturns the solution for variable x associated with result, obtained by solving the model.\n\nExample\n\njulia> using ExaModels, NLPModelsIpopt\n\njulia> c = ExaCore(); \n\njulia> x = variable(c, 1:10, lvar = -1, uvar = 1);\n\njulia> objective(c, (x[i]-2)^2 for i in 1:10);\n\njulia> m = ExaModel(c); \n\njulia> result = ipopt(m; print_level=0);\n\njulia> val = solution(result, x);\n\njulia> isapprox(val, fill(1, 10), atol=sqrt(eps(Float64)), rtol=Inf)\ntrue\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.variable-Union{Tuple{C}, Tuple{T}, Tuple{C, Vararg{Any}}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.variable","text":"variable(core, dims...; start = 0, lvar = -Inf, uvar = Inf)\n\nAdds variables with dimensions specified by dims to core, and returns a Variable object. dims can be either Integer or UnitRange.\n\nKeyword Arguments\n\nstart: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.\nlvar : The variable lower bound. Can either be Number, AbstractArray, or Generator.\nuvar : The variable upper bound. 
{"location":"core/#ExaModels.variable-Union{Tuple{C}, Tuple{T}, Tuple{C, Vararg{Any}}} where {T, C<:(ExaCore{T, VT} where VT<:AbstractVector{T})}","page":"API Manual","title":"ExaModels.variable","text":"variable(core, dims...; start = 0, lvar = -Inf, uvar = Inf)\n\nAdds variables with dimensions specified by dims to core, and returns a Variable object. dims can be either Integer or UnitRange.\n\nKeyword Arguments\n\nstart: The initial guess of the solution. Can either be Number, AbstractArray, or Generator.\nlvar : The variable lower bound. Can either be Number, AbstractArray, or Generator.\nuvar : The variable upper bound. Can either be Number, AbstractArray, or Generator.\n\nExample\n\njulia> using ExaModels\n\njulia> c = ExaCore();\n\njulia> x = variable(c, 10; start = (sin(i) for i=1:10))\nVariable\n\n x ∈ R^{10}\n\njulia> y = variable(c, 2:10, 3:5; lvar = zeros(9,3), uvar = ones(9,3))\nVariable\n\n x ∈ R^{9 × 3}\n\n\n\n\n\n\n","category":"method"},{"location":"core/#ExaModels.@register_bivariate-NTuple{6, Any}","page":"API Manual","title":"ExaModels.@register_bivariate","text":"@register_bivariate(f, df1, df2, ddf11, ddf12, ddf22)\n\nRegister a bivariate function f to ExaModels, so that it can be used within objective and constraint expressions\n\nArguments:\n\nf: function\ndf1: derivative function (w.r.t. first argument)\ndf2: derivative function (w.r.t. second argument)\nddf11: second-order derivative function (w.r.t. first argument)\nddf12: second-order derivative function (w.r.t. first and second arguments)\nddf22: second-order derivative function (w.r.t. second argument)\n\nExample\n\njulia> using ExaModels\n\njulia> relu23(x, y) = (x > 0 || y > 0) ? (x + y)^3 : zero(x)\nrelu23 (generic function with 1 method)\n\njulia> drelu231(x, y) = (x > 0 || y > 0) ? 3 * (x + y)^2 : zero(x)\ndrelu231 (generic function with 1 method)\n\njulia> drelu232(x, y) = (x > 0 || y > 0) ? 3 * (x + y)^2 : zero(x)\ndrelu232 (generic function with 1 method)\n\njulia> ddrelu2311(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2311 (generic function with 1 method)\n\njulia> ddrelu2312(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2312 (generic function with 1 method)\n\njulia> ddrelu2322(x, y) = (x > 0 || y > 0) ? 6 * (x + y) : zero(x)\nddrelu2322 (generic function with 1 method)\n\njulia> @register_bivariate(relu23, drelu231, drelu232, ddrelu2311, ddrelu2312, ddrelu2322)\n\n\n\n\n\n","category":"macro"},{"location":"core/#ExaModels.@register_univariate-Tuple{Any, Any, Any}","page":"API Manual","title":"ExaModels.@register_univariate","text":"@register_univariate(f, df, ddf)\n\nRegister a univariate function f to ExaModels, so that it can be used within objective and constraint expressions\n\nArguments:\n\nf: function\ndf: derivative function\nddf: second-order derivative function\n\nExample\n\njulia> using ExaModels\n\njulia> relu3(x) = x > 0 ? x^3 : zero(x)\nrelu3 (generic function with 1 method)\n\njulia> drelu3(x) = x > 0 ? 3*x^2 : zero(x)\ndrelu3 (generic function with 1 method)\n\njulia> ddrelu3(x) = x > 0 ? 6*x : zero(x)\nddrelu3 (generic function with 1 method)\n\njulia> @register_univariate(relu3, drelu3, ddrelu3)\n\n\n\n\n\n","category":"macro"},
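Once registered, such functions can appear inside objective and constraint expressions like any built-in operation. A minimal sketch of this (our own, reusing the relu3 definitions from the example above):

```julia
using ExaModels

relu3(x) = x > 0 ? x^3 : zero(x)
drelu3(x) = x > 0 ? 3 * x^2 : zero(x)
ddrelu3(x) = x > 0 ? 6 * x : zero(x)
@register_univariate(relu3, drelu3, ddrelu3)

c = ExaCore()
x = variable(c, 10)

# The registered function participates in the generator-based
# computational pattern just like sin, exp, etc.
objective(c, relu3(x[i] - 0.5) for i = 1:10)

m = ExaModel(c)
```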
{"location":"simd/#simd","page":"Mathematical Abstraction","title":"SIMD Abstraction","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"On this page, we explain what the SIMD abstraction of a nonlinear program is and why it can be beneficial for the scalable solution of large-scale optimization problems. More discussion can be found in our paper.","category":"page"},{"location":"simd/#What-is-SIMD-abstraction?","page":"Mathematical Abstraction","title":"What is SIMD abstraction?","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"The mathematical statement of the problem formulation is as follows.","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"\begin{aligned}\n\min_{x^\flat\leq x \leq x^\sharp} & \sum_{l\in[L]}\sum_{i\in [I_l]} f^{(l)}(x; p^{(l)}_i)\\\n\text{s.t.}\; &\left[g^{(m)}(x; q_j)\right]_{j\in [J_m]} +\sum_{n\in [N_m]}\sum_{k\in [K_n]}h^{(n)}(x; s^{(n)}_{k}) =0,\quad \forall m\in[M]\n\end{aligned}","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"where f^{(l)}(\cdot,\cdot), g^{(m)}(\cdot,\cdot), and h^{(n)}(\cdot,\cdot) are twice differentiable functions with respect to the first argument, whereas \{\{p^{(l)}_i\}_{i\in [I_l]}\}_{l\in[L]}, \{\{q_{j}\}_{j\in [J_m]}\}_{m\in[M]}, and \{\{\{s^{(n)}_{k}\}_{k\in[K_n]}\}_{n\in[N_m]}\}_{m\in[M]} are problem data, which can be either discrete or continuous. It is also assumed that our functions f^{(l)}(\cdot,\cdot), g^{(m)}(\cdot,\cdot), and h^{(n)}(\cdot,\cdot) can be expressed as computational graphs of moderate length.","category":"page"},{"location":"simd/#Why-SIMD-abstraction?","page":"Mathematical Abstraction","title":"Why SIMD abstraction?","text":"","category":"section"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"Many physics-based models, such as AC OPF, have a highly repetitive structure. One manifestation is that the mathematical statement of the model is concise even when the practical model contains millions of variables and constraints. This is possible due to the use of repetition over certain index and data sets. For example, it suffices to use 15 computational patterns to fully specify the AC OPF model. These patterns arise from (1) generation cost, (2) the reference bus voltage angle constraint, (3-6) active and reactive power flows (from and to), (7) voltage angle difference constraints, (8-9) apparent power flow limits (from and to), (10-11) power balance equations, (12-13) generators' contributions to the power balance equations, and (14-15) in/out flow contributions to the power balance equations. However, such repetitive structure is not well exploited in standard NLP modeling paradigms. In fact, without the SIMD abstraction, it is difficult for an AD package to detect the parallelizable structure within the model, since that would require full inspection of the computational graph over all expressions. By preserving the repetitive structure in the model, it becomes directly available to the AD implementation.","category":"page"},{"location":"simd/","page":"Mathematical Abstraction","title":"Mathematical Abstraction","text":"Using the multiple dispatch feature of Julia, ExaModels.jl generates highly efficient derivative computation code, compiled specifically for each computational pattern in the model. These derivative evaluation codes can be run over the data in various GPU array formats and are implemented via array and kernel programming in the Julia language. In turn, ExaModels.jl can efficiently evaluate first- and second-order derivatives using GPU accelerators.","category":"page"},
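To connect this abstraction to code: each pattern, together with its index or data set, becomes exactly one Julia generator in ExaModels.jl. A minimal sketch of the mapping (our own illustration; the patterns themselves are arbitrary):

```julia
using ExaModels

c = ExaCore()
x = variable(c, 100)

# One objective pattern f(x; i) = (x[i] - 1)^2, instantiated over the data set 1:100:
objective(c, (x[i] - 1)^2 for i = 1:100)

# One constraint pattern h(x; i) = x[i] - x[i+1], instantiated over 1:99:
constraint(c, x[i] - x[i+1] for i = 1:99; lcon = 0, ucon = 0)

m = ExaModel(c)
```

Because each generator corresponds to a single computational pattern, the AD backend can compile one derivative kernel per pattern and run it over the whole data set in parallel.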
{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"EditURL = \"jump.jl\"","category":"page"},{"location":"jump/#JuMP-Interface-(Experimental)","page":"JuMP Interface (experimental)","title":"JuMP Interface (Experimental)","text":"","category":"section"},{"location":"jump/#JuMP-to-an-ExaModel","page":"JuMP Interface (experimental)","title":"JuMP to an ExaModel","text":"","category":"section"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"We have an experimental interface to JuMP models. A JuMP model can be directly converted to an ExaModel. It is as simple as this:","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using ExaModels, JuMP, CUDA\n\nN = 10\njm = Model()\n\n@variable(jm, x[i = 1:N], start = mod(i, 2) == 1 ? -1.2 : 1.0)\n@constraint(\n jm,\n s[i = 1:N-2],\n 3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3 == 0.0\n)\n@objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))\n\nem = ExaModel(jm; backend = CUDABackend())","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"An ExaModel{Float64, CUDA.CuArray{Float64, 1, CUDA.DeviceMemory}, ...}\n\n Problem name: Generic\n All variables: ████████████████████ 10 All constraints: ████████████████████ 8 \n free: ████████████████████ 10 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████████████████ 8 \n infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nnzh: (-212.73% sparsity) 172 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 \n nonlinear: ████████████████████ 8 \n nnzj: ( 0.00% sparsity) 80 \n\n","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Here, note that only scalar objectives/constraints created via the @constraint and @objective APIs are supported. Older syntax like @NLconstraint and @NLobjective is not supported. We can solve the model using any of the solvers supported by ExaModels. For example, we can use MadNLP:","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using MadNLPGPU\n\nresult = madnlp(em)","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"\"Execution stats: Optimal Solution Found (tol = 1.0e-04).\"","category":"page"},{"location":"jump/#JuMP-Optimizer","page":"JuMP Interface (experimental)","title":"JuMP Optimizer","text":"","category":"section"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Alternatively, one can use the Optimizer interface provided by ExaModels. 
This feature can be used as follows.","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"using ExaModels, JuMP, CUDA\nusing MadNLPGPU\n\nset_optimizer(jm, () -> ExaModels.MadNLPOptimizer(CUDABackend()))\noptimize!(jm)","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"This is MadNLP version v0.8.4, running with cuDSS v0.3.0\n\nNumber of nonzeros in constraint Jacobian............: 80\nNumber of nonzeros in Lagrangian Hessian.............: 172\n\nTotal number of variables............................: 10\n variables with only lower bounds: 0\n variables with lower and upper bounds: 0\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 8\nTotal number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 2.0570000e+03 2.48e+01 1.00e+02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 2.0474072e+03 2.47e+01 2.97e+01 -1.0 2.27e+00 - 1.00e+00 4.00e-03h 1\n 2 1.1009058e+03 1.49e+01 2.24e+01 -1.0 2.24e+00 - 1.00e+00 1.00e+00h 1\n 3 1.1598223e+02 2.15e+00 5.34e+01 -1.0 2.14e+00 - 1.00e+00 1.00e+00h 1\n 4 6.5263510e+00 1.12e-01 4.74e+00 -1.0 1.72e-01 - 1.00e+00 1.00e+00h 1\n 5 6.2326771e+00 1.64e-03 2.08e-02 -1.0 5.91e-02 - 1.00e+00 1.00e+00h 1\n 6 6.2324576e+00 1.18e-06 1.22e-05 -3.8 1.40e-03 - 9.98e-01 1.00e+00h 1\n 7 6.2323021e+00 5.36e-11 1.98e-06 -5.0 3.12e-05 - 8.90e-01 1.00e+00h 1\n\nNumber of Iterations....: 7\n\n (scaled) (unscaled)\nObjective...............: 7.8690682927808731e-01 6.2323020878824522e+00\nDual infeasibility......: 1.9831098326816843e-06 1.5706229874838943e-05\nConstraint violation....: 5.3644585532052792e-11 5.3644585532052792e-11\nComplementarity.........: 1.1122043961252076e-05 8.8086588173116463e-05\nOverall NLP error.......: 8.8086588173116463e-05 8.8086588173116463e-05\n\nNumber of objective function evaluations = 8\nNumber of objective gradient evaluations = 8\nNumber of constraint evaluations = 8\nNumber of constraint Jacobian evaluations = 8\nNumber of Lagrangian Hessian evaluations = 7\nTotal wall-clock secs in solver (w/o fun. eval./lin. alg.) = 0.052\nTotal wall-clock secs in linear solver = 0.009\nTotal wall-clock secs in NLP function evaluations = 0.008\nTotal wall-clock secs = 0.069\n\nEXIT: Optimal Solution Found (tol = 1.0e-04).\n","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"Again, only scalar objective/constraints created via @constraint and @objective API are supported. 
Older syntax like @NLconstraint and @NLobjective is not supported.","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"","category":"page"},{"location":"jump/","page":"JuMP Interface (experimental)","title":"JuMP Interface (experimental)","text":"This page was generated using Literate.jl.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"EditURL = \"gpu.jl\"","category":"page"},{"location":"gpu/#Accelerations","page":"Accelerations","title":"Accelerations","text":"","category":"section"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"One of the key features of ExaModels.jl is the ability to evaluate derivatives on either multi-threaded CPUs or GPU accelerators. Currently, GPU acceleration is only tested for NVIDIA GPUs. If you'd like to use multi-threaded CPU acceleration, start Julia with","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"$ julia -t 4 # using 4 threads","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Also, if you're using NVIDIA GPUs, make sure the appropriate drivers are installed.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Let's say that our CPU code is as follows.","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function luksan_vlcek_obj(x, i)\n return 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2\nend\n\nfunction luksan_vlcek_con(x, i)\n return 3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -\n x[i]exp(x[i] - x[i+1]) - 3\nend\n\nfunction luksan_vlcek_x0(i)\n return mod(i, 2) == 1 ? -1.2 : 1.0\nend\n\nfunction luksan_vlcek_model(N)\n\n c = ExaCore()\n x = variable(c, N; start = (luksan_vlcek_x0(i) for i = 1:N))\n constraint(c, luksan_vlcek_con(x, i) for i = 1:N-2)\n objective(c, luksan_vlcek_obj(x, i) for i = 2:N)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"luksan_vlcek_model (generic function with 1 method)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Now we simply modify this by","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function luksan_vlcek_model(N, backend = nothing)\n\n c = ExaCore(; backend = backend) # specify the backend\n x = variable(c, N; start = (luksan_vlcek_x0(i) for i = 1:N))\n constraint(c, luksan_vlcek_con(x, i) for i = 1:N-2)\n objective(c, luksan_vlcek_obj(x, i) for i = 2:N)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"luksan_vlcek_model (generic function with 2 methods)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"The acceleration can be done simply by specifying the backend. In particular, for multi-threaded CPUs,","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"using ExaModels, NLPModelsIpopt, KernelAbstractions\n\nm = luksan_vlcek_model(10, CPU())\nipopt(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"\"Execution stats: first-order stationary\"","category":"page"},
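Once solved, the accessor functions from the API manual apply regardless of the backend. A small sketch combining the pieces above (our own rearrangement of the documented calls, keeping handles to the variable and constraint objects so that they remain in scope):

```julia
using ExaModels, NLPModelsIpopt, KernelAbstractions

# Same model as above, built inline instead of inside a function:
c = ExaCore(; backend = CPU())
x = variable(c, 10; start = (luksan_vlcek_x0(i) for i = 1:10))
con = constraint(c, luksan_vlcek_con(x, i) for i = 1:8)
objective(c, luksan_vlcek_obj(x, i) for i = 2:10)
m = ExaModel(c)

result = ipopt(m; print_level = 0)
sol = solution(result, x)       # primal values of x
lam = multipliers(result, con)  # multipliers of the constraints
```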
{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"For NVIDIA GPUs, we can use CUDABackend. However, few optimization solvers are currently capable of solving problems on GPUs; right now, the only option is MadNLP.jl. To use this, first install","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"import Pkg; Pkg.add(\"MadNLPGPU\")","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"Then, we can run:","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"using CUDA, MadNLPGPU\n\nm = luksan_vlcek_model(10, CUDABackend())\nmadnlp(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"If we have arrays for the data, we simply need to convert the array types to the corresponding device array types. In particular,","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"function cuda_luksan_vlcek_model(N)\n c = ExaCore(; backend = CUDABackend())\n d1 = CuArray(1:N-2)\n d2 = CuArray(2:N)\n d3 = CuArray([luksan_vlcek_x0(i) for i = 1:N])\n\n x = variable(c, N; start = d3)\n constraint(c, luksan_vlcek_con(x, i) for i in d1)\n objective(c, luksan_vlcek_obj(x, i) for i in d2)\n\n return ExaModel(c)\nend","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"cuda_luksan_vlcek_model (generic function with 1 method)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"m = cuda_luksan_vlcek_model(10)\nmadnlp(m)","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"","category":"page"},{"location":"gpu/","page":"Accelerations","title":"Accelerations","text":"This page was generated using Literate.jl.","category":"page"},{"location":"ref/#References","page":"References","title":"References","text":"","category":"section"},{"location":"ref/","page":"References","title":"References","text":"L. T. Biegler. Nonlinear programming: concepts, algorithms, and applications to chemical processes (SIAM, 2010).\n\n\n\nC. Coffrin, R. Bent, K. Sundar, Y. Ng and M. Lubin. PowerModels.jl: An open-source framework for exploring power flow formulations. In: 2018 Power Systems Computation Conference (PSCC) (IEEE, 2018); pp. 1–8.\n\n\n\nL. Lukšan and J. Vlček. Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems. Numerical linear algebra with applications 5, 219–247 (1998).\n\n\n\n","category":"page"},{"location":"#Introduction","page":"Introduction","title":"Introduction","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Welcome to the documentation of ExaModels.jl","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"note: Note\nExaModels runs on julia VERSION ≥ v\"1.9\"","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"warning: Warning\nPlease help us improve ExaModels and this documentation! ExaModels is in an early stage of development, and you may encounter unintended behavior or missing documentation. If you find anything not working as intended or documentation missing, please open an issue or pull request, or start a discussion. 
","category":"page"},{"location":"#What-is-ExaModels.jl?","page":"Introduction","title":"What is ExaModels.jl?","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl is an algebraic modeling and automatic differentiation tool in Julia Language, specialized for SIMD abstraction of nonlinear programs. ExaModels.jl employs what we call SIMD abstraction for nonlinear programs (NLPs), which allows for the preservation of the parallelizable structure within the model equations, facilitating efficient automatic differentiation either on the single-thread CPUs, multi-threaded CPUs, as well as GPU accelerators. More details about SIMD abstraction can be found here.","category":"page"},{"location":"#Key-differences-from-other-algebraic-modeling-tools","page":"Introduction","title":"Key differences from other algebraic modeling tools","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl is different from other algebraic modeling tools, such as JuMP or AMPL, in the following ways:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Modeling Interface: ExaModels.jl requires users to specify the model equations always in the form of Generators. This restrictive structure allows ExaModels.jl to preserve the SIMD-compatible structure in the model equations. This unique feature distinguishes ExaModels.jl from other algebraic modeling tools.\nPerformance: ExaModels.jl compiles (via Julia's compiler) derivative evaluation codes tailored to each computation pattern. Through reverse-mode automatic differentiation using these tailored codes, ExaModels.jl achieves significantly faster derivative evaluation speeds, even when using CPU.\nPortability: ExaModels.jl goes beyond traditional boundaries of","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"algebraic modeling systems by enabling derivative evaluation on GPU accelerators. Implementation of GPU kernels is accomplished using the portable programming paradigm offered by KernelAbstractions.jl. With ExaModels.jl, you can run your code on various devices, including multi-threaded CPUs, NVIDIA GPUs, AMD GPUs, and Intel GPUs. Note that Apple's Metal is currently not supported due to its lack of support for double-precision arithmetic.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Thus, ExaModels.jl shines when your model has","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"nonlinear objective and constraints;\na large number of variables and constraints;\nhighly repetitive structure;\nsparse Hessian and Jacobian.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"These features are often exhibited in optimization problems associated with first-principle physics-based models. Primary examples include optimal control problems formulated with direct subscription method [1] and network system optimization problems, such as optimal power flow [2] and gas network control/estimation problems.","category":"page"},{"location":"#Performance-Highlights","page":"Introduction","title":"Performance Highlights","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl significantly enhances the performance of derivative evaluations for nonlinear optimization problems that can benefit from SIMD abstraction. 
{"location":"#Performance-Highlights","page":"Introduction","title":"Performance Highlights","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels.jl significantly enhances the performance of derivative evaluations for nonlinear optimization problems that can benefit from SIMD abstraction. Recent benchmark results demonstrate this improvement. Notably, when solving the AC OPF problem for a 9241-bus system, derivative evaluation using ExaModels.jl on GPUs can be up to two orders of magnitude faster than with JuMP or AMPL. Some benchmark results are available below. The following problems are used for benchmarking:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"LuksanVlcek problem\nQuadrotor control problem\nDistillation column control problem\nAC optimal power flow problem","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"(Image: benchmark)","category":"page"},{"location":"#Supported-Solvers","page":"Introduction","title":"Supported Solvers","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"ExaModels can be used with any solver that can handle the NLPModel data type, but several callbacks are not currently implemented and may cause errors. Currently, it is tested with the following solvers:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Ipopt (via NLPModelsIpopt.jl)\nMadNLP.jl","category":"page"},
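Both solvers consume an ExaModel directly; a minimal sketch (the luksan_vlcek_model constructor is defined on the Accelerations page, and the solver calls are the ones shown there):

```julia
using ExaModels, NLPModelsIpopt, MadNLP

m = luksan_vlcek_model(10)  # constructor from the Accelerations page

result_ipopt = ipopt(m; print_level = 0)  # Ipopt via NLPModelsIpopt.jl
result_madnlp = madnlp(m)                 # MadNLP.jl
```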
{"location":"#Documentation-Structure","page":"Introduction","title":"Documentation Structure","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"This documentation is structured in the following way.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The remainder of this page highlights several key aspects of ExaModels.jl.\nThe mathematical abstraction of ExaModels.jl, the SIMD abstraction of nonlinear programming, is discussed on the Mathematical Abstraction page.\nA step-by-step tutorial on using ExaModels.jl can be found on the Tutorial page.\nThis documentation does not intend to discuss the engineering behind the implementation of ExaModels.jl. Some high-level ideas are discussed in a recent publication, but the full engineering details will be discussed in future publications.","category":"page"},{"location":"#Citing-ExaModels.jl","page":"Introduction","title":"Citing ExaModels.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"If you use ExaModels.jl in your research, we would greatly appreciate your citing this preprint.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"@misc{shin2023accelerating,\n title={Accelerating Optimal Power Flow with {GPU}s: {SIMD} Abstraction of Nonlinear Programs and Condensed-Space Interior-Point Methods}, \n author={Sungho Shin and Fran{\\c{c}}ois Pacaud and Mihai Anitescu},\n year={2023},\n eprint={2307.16830},\n archivePrefix={arXiv},\n primaryClass={math.OC}\n}","category":"page"},{"location":"#Supporting-ExaModels.jl","page":"Introduction","title":"Supporting ExaModels.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Please report issues and feature requests via the GitHub issue tracker.\nQuestions are welcome at the GitHub discussion forum.","category":"page"}] } diff --git a/previews/PR97/simd/index.html b/previews/PR97/simd/index.html index cce3649..21faf74 100644 --- a/previews/PR97/simd/index.html +++ b/previews/PR97/simd/index.html @@ -3,4 +3,4 @@ \min_{x^\flat\leq x \leq x^\sharp} & \sum_{l\in[L]}\sum_{i\in [I_l]} f^{(l)}(x; p^{(l)}_i)\\ \text{s.t.}\; &\left[g^{(m)}(x; q_j)\right]_{j\in [J_m]} +\sum_{n\in [N_m]}\sum_{k\in [K_n]}h^{(n)}(x; s^{(n)}_{k}) =0,\quad \forall m\in[M] -\end{aligned}\]

where $f^{(l)}(\cdot,\cdot)$, $g^{(m)}(\cdot,\cdot)$, and $h^{(n)}(\cdot,\cdot)$ are twice differentiable functions with respect to the first argument, whereas $\{\{p^{(l)}_i\}_{i\in [I_l]}\}_{l\in[L]}$, $\{\{q_{j}\}_{j\in [J_m]}\}_{m\in[M]}$, and $\{\{\{s^{(n)}_{k}\}_{k\in[K_n]}\}_{n\in[N_m]}\}_{m\in[M]}$ are problem data, which can be either discrete or continuous. It is also assumed that our functions $f^{(l)}(\cdot,\cdot)$, $g^{(m)}(\cdot,\cdot)$, and $h^{(n)}(\cdot,\cdot)$ can be expressed as computational graphs of moderate length.

Why SIMD abstraction?

Many physics-based models, such as AC OPF, have a highly repetitive structure. One manifestation is that the mathematical statement of the model is concise even when the practical model contains millions of variables and constraints. This is possible due to the use of repetition over certain index and data sets. For example, it suffices to use 15 computational patterns to fully specify the AC OPF model. These patterns arise from (1) generation cost, (2) the reference bus voltage angle constraint, (3-6) active and reactive power flows (from and to), (7) voltage angle difference constraints, (8-9) apparent power flow limits (from and to), (10-11) power balance equations, (12-13) generators' contributions to the power balance equations, and (14-15) in/out flow contributions to the power balance equations. However, such repetitive structure is not well exploited in standard NLP modeling paradigms. In fact, without the SIMD abstraction, it is difficult for an AD package to detect the parallelizable structure within the model, since that would require full inspection of the computational graph over all expressions. By preserving the repetitive structure in the model, it becomes directly available to the AD implementation.

Using the multiple dispatch feature of Julia, ExaModels.jl generates highly efficient derivative computation code, compiled specifically for each computational pattern in the model. These derivative evaluation codes can be run over the data in various GPU array formats and are implemented via array and kernel programming in the Julia language. In turn, ExaModels.jl can efficiently evaluate first- and second-order derivatives using GPU accelerators.

+\end{aligned}\]

where $f^{(l)}(\cdot,\cdot)$, $g^{(m)}(\cdot,\cdot)$, and $h^{(n)}(\cdot,\cdot)$ are twice differentiable functions with respect to the first argument, whereas $\{\{p^{(l)}_i\}_{i\in [I_l]}\}_{l\in[L]}$, $\{\{q_{j}\}_{j\in [J_m]}\}_{m\in[M]}$, and $\{\{\{s^{(n)}_{k}\}_{k\in[K_n]}\}_{n\in[N_m]}\}_{m\in[M]}$ are problem data, which can be either discrete or continuous. It is also assumed that our functions $f^{(l)}(\cdot,\cdot)$, $g^{(m)}(\cdot,\cdot)$, and $h^{(n)}(\cdot,\cdot)$ can be expressed as computational graphs of moderate length.

Why SIMD abstraction?

Many physics-based models, such as AC OPF, have a highly repetitive structure. One manifestation is that the mathematical statement of the model is concise even when the practical model contains millions of variables and constraints. This is possible due to the use of repetition over certain index and data sets. For example, it suffices to use 15 computational patterns to fully specify the AC OPF model. These patterns arise from (1) generation cost, (2) the reference bus voltage angle constraint, (3-6) active and reactive power flows (from and to), (7) voltage angle difference constraints, (8-9) apparent power flow limits (from and to), (10-11) power balance equations, (12-13) generators' contributions to the power balance equations, and (14-15) in/out flow contributions to the power balance equations. However, such repetitive structure is not well exploited in standard NLP modeling paradigms. In fact, without the SIMD abstraction, it is difficult for an AD package to detect the parallelizable structure within the model, since that would require full inspection of the computational graph over all expressions. By preserving the repetitive structure in the model, it becomes directly available to the AD implementation.

Using the multiple dispatch feature of Julia, ExaModels.jl generates highly efficient derivative computation code, compiled specifically for each computational pattern in the model. These derivative evaluation codes can be run over the data in various GPU array formats and are implemented via array and kernel programming in the Julia language. In turn, ExaModels.jl can efficiently evaluate first- and second-order derivatives using GPU accelerators.