Mathieu Besançon
Zuse Institute Berlin
Run the notebooks (local or online): https://github.com/matbesancon/Julia_JuMP_workshop
Ideally:
$$\begin{align} \min_{x}\,\, & f(x)\\ \text{s.t.} & \\ & A x \leq b \\ & x \in \mathcal{X} \end{align}$$

A form of optimization-focused Domain-Specific Language (DSL).
Close enough to the original syntax:
set rows;
set cols;
param A{rows, cols};
param b{rows};
var x {cols} in Xset;
minimize f(x)
subject to:
forall {i in rows}
sum {j in cols} A{i,j} * x{j} <= b{i}
Limits of AMLs $\Rightarrow$ programming interfaces
Example from CPLEX tutorials:
// instantiating
IloModel model(env);
IloNumVarArray vars(env);
vars.add(IloNumVar(env, 0.0, 40.0));
vars.add(IloNumVar(env));
vars.add(IloNumVar(env));
model.add(IloMaximize(env, vars[0] + 2 * vars[1] + 3 * vars[2]));
model.add( - vars[0] + vars[1] + vars[2] <= 20);
model.add( vars[0] - 3 * vars[1] + vars[2] <= 30);
// solving
IloCplex cplex(model);
if ( !cplex.solve() ) {
env.error() << "Failed to optimize LP." << endl;
throw(-1);
}
// fetching results
IloNumArray vals(env);
env.out() << "Solution status = " << cplex.getStatus() << endl;
env.out() << "Solution value = " << cplex.getObjValue() << endl;
cplex.getValues(vals, vars);
env.out() << "Values = " << vals << endl;
Embed a modelling language within another programming language.
Aim: unified modelling interface for structured, constrained optimization
@show(1 + 3 + rand());
1 + 3 + rand() = 4.247647748607024
Notion of "element-wise" operation.
A = [
0 1
1 0
]
@show A^2; # A * A
@show A.^2; # A_ij^2 ∀ ij
A ^ 2 = [1 0; 0 1]
A .^ 2 = [0 1; 1 0]
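The same dot syntax broadcasts any function, not just operators. A small illustrative sketch (names are ours, not from the workshop):

```julia
A = [0 1; 1 0]
# broadcast a named function over every entry
B = exp.(A)        # elementwise exponential: exp(A_ij) ∀ ij
# the @. macro dots every call in an expression,
# fusing the operations into a single loop
C = @. 2 * A + 1   # same as 2 .* A .+ 1
```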
using JuMP
import Clp
import Cbc
m = Model()
# scalars
@variable(m, x)
@variable(m, 1 <= y <= π)
@variable(m, z[i=1:5,j=1:10], Bin)
@variable(m, 2.5 <= Δ[i=1:5,j=i:2i, k=2:5] <= 3.5 + i)
# fancier indexing
product_type = ["flour", "eggs", "milk"]
@variable(m, grocery_list[p = product_type] >= 0, Int);
@variable(m, number_pancakes >= 0, Int)
@constraint(m, x <= 3 * y)
@constraint(m, recipe_constraint_egg,
number_pancakes <= 3 * grocery_list["eggs"]
)
@constraint(m, recipe_constraint_flour,
number_pancakes <= 0.5 * grocery_list["flour"]
)
@constraint(m, recipe_constraint_milk,
number_pancakes <= 0.2 * grocery_list["milk"]
);
println(m)
Feasibility
Subject to
 x - 3 y ≤ 0.0
 recipe_constraint_egg : number_pancakes - 3 grocery_list[eggs] ≤ 0.0
 recipe_constraint_flour : number_pancakes - 0.5 grocery_list[flour] ≤ 0.0
 recipe_constraint_milk : number_pancakes - 0.2 grocery_list[milk] ≤ 0.0
 y ≥ 1.0
 Δ[1,1,2] ≥ 2.5
 ⋮
 Δ[5,10,5] ≥ 2.5
 grocery_list[flour] ≥ 0.0
 grocery_list[eggs] ≥ 0.0
 grocery_list[milk] ≥ 0.0
 number_pancakes ≥ 0.0
 y ≤ 3.141592653589793
 Δ[1,1,2] ≤ 4.5
 ⋮
 Δ[5,10,5] ≤ 8.5
 grocery_list[flour] integer
 grocery_list[eggs] integer
 grocery_list[milk] integer
 number_pancakes integer
 z[1,1] binary
 ⋮
 z[5,10] binary
# alternative syntax
product_quantities = Dict(
"flour" => 0.5, "eggs" => 3, "milk" => 0.2,
)
for (product, quantity) in product_quantities
@constraint(m, number_pancakes <= quantity * grocery_list[product])
end
product_prices = Dict(
"flour" => 0.3, "eggs" => 1, "milk" => 2,
)
@objective(m, Min,
sum(price * grocery_list[product] for (product, price) in product_prices)
)
optimize!(m)
NoOptimizer()
Stacktrace:
 [1] optimize!(::Model, ::Nothing; ...) at .../JuMP/MnJQc/src/optimizer_interface.jl:127
 [2] optimize! at .../JuMP/MnJQc/src/optimizer_interface.jl:107 [inlined] (repeats 2 times)
 [3] top-level scope at In[9]:1
set_optimizer(m, Clp.Optimizer)
optimize!(m)
MathOptInterface.UnsupportedConstraint{MathOptInterface.SingleVariable,MathOptInterface.ZeroOne}: `MathOptInterface.SingleVariable`-in-`MathOptInterface.ZeroOne` constraint is not supported by the model.
Stacktrace:
 [1] bridge_type(::MathOptInterface.Bridges.LazyBridgeOptimizer{...}, ::Type{MathOptInterface.ZeroOne}) at .../MathOptInterface/bygN7/src/Bridges/lazy_bridge_optimizer.jl:323
 ⋮
 [18] top-level scope at In[10]:2
# third attempt: Cbc supports the integer and binary variables
set_optimizer(m, Cbc.Optimizer)
optimize!(m)
Welcome to the CBC MILP Solver
Version: 2.10.3
Build Date: Jan 1 1970

command line - Cbc_C_Interface -solve -quit (default strategy 1)
Continuous objective value is 0 - 0.00 seconds
 ⋮
Result - Optimal solution found
Objective value:                0.00000000
Enumerated nodes:               0
Total iterations:               1
Time (CPU seconds):             0.00
Time (Wallclock seconds):       0.01

Total time (CPU seconds):       0.00   (Wallclock seconds):       0.02
# reduce verbosity
MOI.set(m, MOI.Silent(), true)
optimize!(m)
# did we solve it optimally
@show termination_status(m)
@show primal_status(m)
@show dual_status(m);
termination_status(m) = MathOptInterface.OPTIMAL
primal_status(m) = MathOptInterface.FEASIBLE_POINT
dual_status(m) = MathOptInterface.NO_SOLUTION
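A common defensive pattern is to branch on the status before querying any result, so that an infeasible model or an early stop does not go unnoticed. A minimal sketch, reusing the model `m` from above:

```julia
using JuMP
# only trust the reported values once the status is known
if termination_status(m) == MOI.OPTIMAL
    @show objective_value(m)
elseif primal_status(m) == MOI.FEASIBLE_POINT
    @warn "Solver stopped early but returned a feasible point"
else
    error("No solution available: $(termination_status(m))")
end
```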
# single value
JuMP.value(x)
0.0
# objective value
JuMP.objective_value(m)
0.0
# . before () for element-wise call
JuMP.value.(z)
5×10 Array{Float64,2}:
 -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0
 -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0
 -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0
 -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0
 -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0  -0.0
x - sum(z)
JuMP.value(x - sum(z))
0.0
JuMP.has_duals(m)
false
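Duals are unavailable here because the model contains integer variables. On a pure LP they can be queried after the solve; a small self-contained sketch (the model and names are ours, for illustration):

```julia
using JuMP
import Clp

lp = Model(Clp.Optimizer)
MOI.set(lp, MOI.Silent(), true)
@variable(lp, x >= 0)
@constraint(lp, capacity, x <= 4)
@objective(lp, Max, 3x)
optimize!(lp)
@show has_duals(lp)          # true for a solved LP
@show dual(capacity)         # conic dual value of the constraint
@show shadow_price(capacity) # sensitivity-analysis sign convention
```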
Penalized linear regression: fit $a x_i + b = y_i + \varepsilon_i$.
using Plots
x = randn(100)
y = randn(100) .+ 2.5 * x .+ 3;
scatter(x, y, legend = nothing)
import Ipopt # quadratic terms
a_values = Float64[]
b_values = Float64[]
penalty_values = [0.5, 10.0, 100.0]
for λ in penalty_values
regression = Model(Ipopt.Optimizer)
@variable(regression, a)
@variable(regression, b)
@expression(regression,
quad_error[i=1:100],
(a * x[i] + b - y[i])^2
)
@objective(regression, Min, sum(quad_error) + λ * (a^2 + b^2))
MOI.set(regression, MOI.Silent(), true)
optimize!(regression)
push!(a_values, JuMP.value(a))
push!(b_values, JuMP.value(b))
end
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.13.2, running with linear solver mumps.
 ⋮
EXIT: Optimal Solution Found.
(similar iteration logs repeated for each of the three penalty values)
p = scatter(x, y, label="data points", legend=:topleft)
for j in eachindex(penalty_values)
λ = penalty_values[j]
plot!(p, [-2, 2], [-2 * a_values[j] + b_values[j], 2 * a_values[j] + b_values[j]], label = "λ=$λ", width=4)
end
p
a_values = Float64[]
b_values = Float64[]
penalty_values = [0.01, 0.5, 10.0, 100.0]
regression = Model(Ipopt.Optimizer)
@variable(regression, a)
@variable(regression, b)
@expression(regression,
quad_error[i=1:100],
(a * x[i] + b - y[i])^2
)
MOI.set(regression, MOI.Silent(), true)
for λ in penalty_values
@objective(regression, Min, sum(quad_error) + λ * (a^2 + b^2))
optimize!(regression)
push!(a_values, JuMP.value(a))
push!(b_values, JuMP.value(b))
end
This is Ipopt version 3.13.2, running with linear solver mumps.
 ⋮
EXIT: Optimal Solution Found.
(repeated for each of the four penalty values)
Estimating a correlation matrix from data.
A vintage paper with familiar names:
http://www.optimization-online.org/DB_FILE/2003/09/729.pdf
A Semidefinite Programming Approach for the Nearest Correlation Matrix Problem
M. F. Anjos, N. J. Higham, P. L. Takouda, H. Wolkowicz, 2003
Let's add sparsity:
$$\begin{align} \min_C \,\, & \|C - \operatorname{cor}(X)\|_F^2 \\ \text{s.t.} & \\ & C_{ii} = 1 \,\,\,\,\forall i \\ & C \succeq 0 \\ & C_{ij} = 0 \,\,\,\,\forall (i,j)\in \mathcal{S} \end{align}$$

$X_{1,2,3}$ correlated with $X_{4,5,6}$.
using Statistics
n = 6
X1 = randn(1000, 3)
X2 = randn(1000, 3) .+ 1.1 .* X1
X = [X1 X2]
cor(X)
6×6 Array{Float64,2}:
  1.0         -0.00327027   0.0570165   0.752861    -0.030622    0.0337707
 -0.00327027   1.0          0.0301278   0.0226206    0.747055    0.0469962
  0.0570165    0.0301278    1.0         0.00476154   0.0634156   0.748426
  0.752861     0.0226206    0.00476154  1.0         -0.0196301  -0.0291189
 -0.030622     0.747055     0.0634156  -0.0196301    1.0         0.0673077
  0.0337707    0.0469962    0.748426   -0.0291189    0.0673077   1.0
import SCS # conic solver
using LinearAlgebra
sparse_corr = Model(SCS.Optimizer)
MOI.set(sparse_corr, MOI.Silent(), true)
@variable(sparse_corr, C[1:n, 1:n] in PSDCone())
@constraint(sparse_corr, unit_diagonal[i=1:n], C[i,i] == 1);
# add sparsity pattern
for (i, j) in [(1, 2), (1, 3), (5, 6)]
con = @constraint(sparse_corr, C[i,j] == 0)
JuMP.set_name(con, "sparse_$(i)_$(j)")
end
@objective(sparse_corr, Min,
sum((C - cor(X)).^2)
);
optimize!(sparse_corr)
JuMP.value.(C)
6×6 Array{Float64,2}:
  1.0         6.70515e-7   4.56314e-8   0.752859    -0.0306217    0.0337707
  6.70515e-7  1.0          0.0301277    0.0226207    0.747054     0.0469958
  4.56314e-8  0.0301277    0.999999     0.00476165   0.0634151    0.748425
  0.752859    0.0226207    0.00476165   1.0         -0.0196301   -0.0291187
 -0.0306217   0.747054     0.0634151   -0.0196301    1.0         -3.12266e-7
  0.0337707   0.0469958    0.748425    -0.0291187   -3.12266e-7   0.999999
Dualization.jl for automatic computation of dual models.
$$ \begin{align} & \text{Variable-in-cone model:}\\ \min_x\,\,\, & c \cdot x\\ & A x = b\\ & x \in \mathcal{K}\\ &\\ & \text{Affine-map-in-cone model:}\\ &\\ \min_x\,\,\, & c \cdot x\\ & A x - b \in \mathcal{K} \end{align}$$

using Dualization
sparse_corr
dual_sparse_corr = Dualization.dualize(sparse_corr, SCS.Optimizer, dual_names = DualNames("", ""))
print(dual_sparse_corr)
Max -quadslack_C[1,1]² - 2 quadslack_C[1,2]² - quadslack_C[2,2]² - ⋯ - 2 quadslack_C[5,6]² - quadslack_C[6,6]² + unit_diagonal[1]_1 + ⋯ + unit_diagonal[6]_1 + 9.407615018214509
Subject to
 C[1,1] : unit_diagonal[1]_1 + _1 - 2 quadslack_C[1,1] = -2.0
 C[1,2] : sparse_1_2_1 + 2 _2 - 4 quadslack_C[1,2] = 0.013081094025689716
 C[2,2] : unit_diagonal[2]_1 + _3 - 2 quadslack_C[2,2] = -2.0
 ⋮
 C[5,6] : sparse_5_6_1 + 2 _20 - 4 quadslack_C[5,6] = -0.26923081217361
 C[6,6] : unit_diagonal[6]_1 + _21 - 2 quadslack_C[6,6] = -2.0
 [_1, _2, …, _21] ∈ MathOptInterface.PositiveSemidefiniteConeTriangle(6)
MOI.set(dual_sparse_corr, MOI.Silent(), true)
optimize!(dual_sparse_corr)
@show (objective_value(dual_sparse_corr)-objective_value(sparse_corr));
objective_value(dual_sparse_corr) - objective_value(sparse_corr) = -2.591275373831081e-6
What are the extreme vertices of my feasible set? JuliaPolyhedra & computational geometry
using Polyhedra
m = Model()
@variable(m, 0 ≤ x[1:3] ≤ 1)
@constraint(m, sum(x) ≤ 1)
poly = polyhedron(m)
vrep(poly)
V-representation Polyhedra.Hull{Float64,Array{Float64,1},Int64}:
4-element iterator of Array{Float64,1}:
 [0.0, 0.0, 0.0]
 [1.0, 0.0, 0.0]
 [0.0, 1.0, 0.0]
 [0.0, 0.0, 1.0]
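The halfspace (H-)representation can be inspected the same way. A short sketch rebuilding the small model above (Polyhedra's `halfspaces` iterator and the `a`/`β` fields are the assumed API):

```julia
using JuMP
using Polyhedra

m = Model()
@variable(m, 0 ≤ x[1:3] ≤ 1)
@constraint(m, sum(x) ≤ 1)
poly = polyhedron(m)
# each halfspace is a pair (a, β) encoding aᵀx ≤ β
for hs in halfspaces(hrep(poly))
    println(hs.a, " ⋅ x ≤ ", hs.β)
end
```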
Advanced tutorials (conic, approximations, non-linear, graph-based problems) on https://jump.dev/JuMP.jl/stable/examples/basic/