Maximizing a function in Python
Experts, I am trying to maximize the function my_obj with the Nelder-Mead algorithm in order to fit it to my data. For this I took help from scipy's optimize.fmin. I think I am very close to the solution, but I am missing something and I am getting an error.
As mentioned, you should use a one-dimensional array (or a one-dimensional list, which is compatible) as the input of the objective function, instead of passing multiple parameters:
#!/usr/bin/env python
import numpy as np
from scipy.optimize import minimize

d1 = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
h = np.array([10000720600.0, 10011506200.0, 10057741200.0,
              10178305100.0, 10415318500.0])
b = 2.0
cx = 2.0

# objective function
def obj_function(x):  # EDIT: Input is a list
    m, n, r = x
    pw = 1 / cx
    c = b * cx
    x1 = 1 + (d1 / n) ** c
    x2 = 1 + (d1 / m) ** c
    x3 = (x1 / x2) ** pw
    dcal = r * x3
    dobs = h
    deld = (np.log10(dcal) - np.log10(dobs)) ** 2
    return np.sum(deld)

print(obj_function([5.0, 10.0, 15.0]))  # EDIT: Input is a list
x0 = [5.0, 10.0, 15.0]
print(obj_function(x0))
res = minimize(obj_function, x0, method='nelder-mead')
print(res)
Output:
% python3 script.py
432.6485766651165
432.6485766651165
final_simplex: (array([[7.76285924e+00, 3.02470699e-04, 1.93396980e+01],
[7.76286507e+00, 3.02555020e-04, 1.93397231e+01],
[7.76285178e+00, 3.01100639e-04, 1.93397381e+01],
[7.76286445e+00, 3.01025402e-04, 1.93397169e+01]]), array([0.12196442, 0.12196914, 0.12197448, 0.12198028]))
fun: 0.12196441986340725
message: 'Optimization terminated successfully.'
nfev: 130
nit: 67
status: 0
success: True
x: array([7.76285924e+00, 3.02470699e-04, 1.93396980e+01])
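A side note, since the question's title asks about maximization: scipy.optimize only provides minimizers, so a function is maximized by minimizing its negation. A minimal sketch with a hypothetical concave function f (not from this thread):

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical function to maximize: f(x) = -(x - 3)^2 + 5, peak at x = 3
def f(x):
    return -(x[0] - 3.0) ** 2 + 5.0

# minimizing -f is equivalent to maximizing f
res = minimize(lambda x: -f(x), x0=[0.0], method='nelder-mead')

# the maximizer is res.x and the maximum value is -res.fun
print(res.x, -res.fun)
```

The same negation trick applies unchanged to obj_function above if one ever needs the maximum instead of the minimum of the misfit.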
You are welcome. Note that you can improve the performance of your code by computing pw, c, and dobs once, outside the objective function, because the objective is evaluated at every iteration of the Nelder-Mead algorithm. Yes, it should give the same result, but with a smaller computation time.