Optimization: Julia/Gurobi callback on an array of JuMP variables

For Gurobi and JuMP 0.21 there is detailed documentation on how to access variables from a callback:

using JuMP, Gurobi, Test

model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Cint[]
function my_callback_function(cb_data, cb_where::Cint)
    # You can reference variables outside the function as normal
    push!(cb_calls, cb_where)
    # You can select where the callback is run
    if cb_where != GRB_CB_MIPSOL && cb_where != GRB_CB_MIPNODE
        return
    end
    # You can query a callback attribute using GRBcbget
    if cb_where == GRB_CB_MIPNODE
        resultP = Ref{Cint}()
        GRBcbget(cb_data, cb_where, GRB_CB_MIPNODE_STATUS, resultP)
        if resultP[] != GRB_OPTIMAL
            return  # Solution is something other than optimal.
        end
    end
    # Before querying `callback_value`, you must call:
    Gurobi.load_callback_variable_primal(cb_data, cb_where)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
    if rand() < 0.1
        # You can terminate the callback as follows:
        GRBterminate(backend(model))
    end
    return
end
# You _must_ set this parameter if using lazy constraints.
MOI.set(model, MOI.RawParameter("LazyConstraints"), 1)
MOI.set(model, Gurobi.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
Should I access x with a double for loop over its two dimensions and call callback_value multiple times? If so, the indices of j will not be the same, will they?
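For reference, the loop-based access I have in mind looks roughly like this; a minimal sketch, assuming x is declared over a triangular index set such as x[i=1:3, j=i+1:3] and that cb_data is the data passed to the callback:

# Sketch of element-by-element access inside the callback. Assumes the primal
# values have already been loaded, e.g. via
# Gurobi.load_callback_variable_primal(cb_data, cb_where).
x_val = Dict{Tuple{Int,Int},Float64}()
for i in 1:3, j in i+1:3
    x_val[(i, j)] = callback_value(cb_data, x[i, j])
end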

Use broadcasting (wrapping cb_data in Ref so that broadcasting treats it as a scalar and only x is iterated over):

x_val = callback_value.(Ref(cb_data), x)
Or just call callback_value(cb_data, x[i, j]) whenever you need a single value.

For example:

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
@variable(model, 0 <= x[i=1:3, j=i+1:3] <= 2.5, Int)
function my_callback_function(cb_data)
    x_val = callback_value.(Ref(cb_data), x)
    display(x_val)
    for i=1:3, j=i+1:3
        con = @build_constraint(x[i, j] <= floor(Int, x_val[i, j]))
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, MOI.LazyConstraintCallback(), my_callback_function)
optimize!(model)

I don't think this will work. You can't use the dot (.) operator to broadcast over the second argument of a function, can you? Your second option is my current workaround. As far as I know, you have to call the function that way every time, and you can't store the results in an array with the same indices.

Try it; it does work. See the Julia documentation on dot broadcasting.

Yes, it works, but the indices won't be the same, will they? For x_val they will start from 1, because it becomes a plain array and is no longer a JuMP variable container.

Try again. It will return a SparseAxisArray. If it doesn't, that's a bug we need to fix.

Confirmed, it works.

I've added an example:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
@variable(model, 0 <= x[i=1:3, j=i+1:3] <= 2.5, Int)
function my_callback_function(cb_data)
    x_val = callback_value.(Ref(cb_data), x)
    display(x_val)
    for i=1:3, j=i+1:3
        con = @build_constraint(x[i, j] <= floor(Int, x_val[i, j]))
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, MOI.LazyConstraintCallback(), my_callback_function)
optimize!(model)
julia> optimize!(model)
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (mac64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 3 columns and 0 nonzeros
Model fingerprint: 0x5d543c3a
Variable types: 0 continuous, 3 integer (0 binary)
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  Objective range  [0e+00, 0e+00]
  Bounds range     [2e+00, 2e+00]
  RHS range        [0e+00, 0e+00]
JuMP.Containers.SparseAxisArray{Float64,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  -0.0
  [2, 3]  =  -0.0
  [1, 3]  =  -0.0
JuMP.Containers.SparseAxisArray{Float64,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  2.0
  [2, 3]  =  2.0
  [1, 3]  =  2.0
JuMP.Containers.SparseAxisArray{Float64,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  2.0
  [2, 3]  =  2.0
  [1, 3]  =  2.0
JuMP.Containers.SparseAxisArray{Float64,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  2.0
  [2, 3]  =  -0.0
  [1, 3]  =  -0.0
Presolve time: 0.00s
Presolved: 0 rows, 3 columns, 0 nonzeros
Variable types: 0 continuous, 3 integer (0 binary)
JuMP.Containers.SparseAxisArray{Float64,2,Tuple{Int64,Int64}} with 3 entries:
  [1, 2]  =  -0.0
  [2, 3]  =  -0.0
  [1, 3]  =  -0.0
Found heuristic solution: objective 0.0000000

Explored 0 nodes (0 simplex iterations) in 0.14 seconds
Thread count was 8 (of 8 available processors)

Solution count 1: 0 

Optimal solution found (tolerance 1.00e-04)
Best objective 0.000000000000e+00, best bound 0.000000000000e+00, gap 0.0000%

User-callback calls 31, time in user-callback 0.14 sec
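To address the indexing concern from the comments: the broadcast does not collapse x into a plain 1-based array. As the output above shows, it returns a JuMP.Containers.SparseAxisArray keyed by the same (i, j) pairs as x, so x_val[i, j] lines up with x[i, j]. A minimal standalone sketch of that container behavior, using lower_bound instead of callback_value so it runs without a solver or a callback:

using JuMP
model = Model()
@variable(model, 0 <= x[i=1:3, j=i+1:3] <= 2.5)
# Broadcasting a function over the container keeps the sparse (i, j) keys:
lb = lower_bound.(x)   # a SparseAxisArray, not a 1-based Matrix
@show lb[1, 2]         # indexed with the same pairs as x
@show lb[2, 3]
# lb[1, 1] errors because that key does not exist, exactly like x[1, 1].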