Ipopt: how to pass options to the objective in cyipopt


I am trying to pass options to an NLP that I am solving with cyipopt.

These options affect the objective in the same way at every iteration. For example, the tutorial problem is to minimize

x_1 * x_4 * (x_1 + x_2 + x_3) + x_3

subject to some constraints (see the tutorial).

I would like to solve the related problem

scale * x_1 * x_4 * (x_1 + x_2 + x_3) + x_3

where scale is a parameter set before the optimization. The code below shows how the problem is set up in cyipopt, but scale is hard-coded to 2. How can I turn it into an option so that it can be changed flexibly?

import ipopt
import numpy as np

class hs071(object):
    def __init__(self):
        pass

    def objective(self, x):
        # The callback for calculating the objective; scale is hard-coded here
        scale = 2
        
        return scale * x[0] * x[3] * np.sum(x[0:3]) + x[2]

    def gradient(self, x):
        # The callback for calculating the gradient; scale is hard-coded here
        scale = 2
        
        return np.array([
                    scale * x[0] * x[3] + scale * x[3] * np.sum(x[0:3]),
                    scale * x[0] * x[3],
                    scale * x[0] * x[3] + 1.0,
                    scale * x[0] * np.sum(x[0:3])
                    ])

    def constraints(self, x):
        # The callback for calculating the constraints
        return np.array((np.prod(x), np.dot(x, x)))

    def jacobian(self, x):
        # The callback for calculating the Jacobian
        return np.concatenate((np.prod(x) / x, 2*x))

x0 = [1.0, 5.0, 5.0, 1.0]

lb = [1.0, 1.0, 1.0, 1.0]
ub = [5.0, 5.0, 5.0, 5.0]

cl = [25.0, 40.0]
cu = [2.0e19, 40.0]

nlp = ipopt.problem(
            n=len(x0),
            m=len(cl),
            problem_obj=hs071(),
            lb=lb,
            ub=ub,
            cl=cl,
            cu=cu
            )

x, info = nlp.solve(x0)


Note: defining globals would work, but it is sloppy. There must be a cleaner way to do this, since this is how data gets attached to an optimization problem.

Add them to the class itself:

import ipopt
import numpy as np

class hs071(object):
    def __init__(self):
        pass

    def objective(self, x):
        # The callback for calculating the objective
        scale = self.scale
        
        return scale * x[0] * x[3] * np.sum(x[0:3]) + x[2]

    def gradient(self, x):
        # The callback for calculating the gradient
        scale = self.scale
        
        return np.array([
                    scale * x[0] * x[3] + scale * x[3] * np.sum(x[0:3]),
                    scale * x[0] * x[3],
                    scale * x[0] * x[3] + 1.0,
                    scale * x[0] * np.sum(x[0:3])
                    ])

    def constraints(self, x):
        # The callback for calculating the constraints
        return np.array((np.prod(x), np.dot(x, x)))

    def jacobian(self, x):
        # The callback for calculating the Jacobian
        return np.concatenate((np.prod(x) / x, 2*x))

x0 = [1.0, 5.0, 5.0, 1.0]

lb = [1.0, 1.0, 1.0, 1.0]
ub = [5.0, 5.0, 5.0, 5.0]

cl = [25.0, 40.0]
cu = [2.0e19, 40.0]

model = hs071()
model.scale = 2

nlp = ipopt.problem(
            n=len(x0),
            m=len(cl),
            problem_obj=model,
            lb=lb,
            ub=ub,
            cl=cl,
            cu=cu
            )

x, info = nlp.solve(x0)
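A slight variant of the same idea is to pass the option through the constructor rather than assigning the attribute after construction; this keeps the object valid from the moment it is created. A minimal sketch of the objective and gradient (the class name `HS071Scaled` and the `scale` keyword are illustrative, not part of cyipopt's API):

```python
import numpy as np

class HS071Scaled:
    """HS071 objective/gradient with a scale option set at construction time."""

    def __init__(self, scale=1.0):
        self.scale = scale

    def objective(self, x):
        # scale * x1 * x4 * (x1 + x2 + x3) + x3
        return self.scale * x[0] * x[3] * np.sum(x[0:3]) + x[2]

    def gradient(self, x):
        s = self.scale
        return np.array([
            s * x[0] * x[3] + s * x[3] * np.sum(x[0:3]),
            s * x[0] * x[3],
            s * x[0] * x[3] + 1.0,
            s * x[0] * np.sum(x[0:3]),
        ])

model = HS071Scaled(scale=2)
print(model.objective([1.0, 5.0, 5.0, 1.0]))  # 2 * 1 * 1 * 11 + 5 = 27.0
```

The rest of the setup is unchanged: pass the instance as `problem_obj=model` when constructing the problem, and every callback reads `self.scale`.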