
Python: difference in results between numpy and sympy lambdify in basic optimization

Tags: python, python-3.x, numpy, sympy, mathematical-optimization

I wrote the basic optimization code below, based on Newton's method, once with the derivatives written out explicitly and once with the derivatives computed by SymPy. Why are the results different?
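For reference, both versions are meant to implement the same update: to minimize g, Newton's method is applied to the root of the derivative, i.e.

x_{k+1} = x_k - g'(x_k) / g''(x_k)

so the two snippets should agree whenever the hand-written gd/gdd and the SymPy-generated dfdx/ddfdx evaluate to the same values.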

Writing the derivatives explicitly:

import numpy as np
def g(x):
    return 1.95 - np.exp(-2/x) - 2*np.exp(-np.power(x,4))
# gd: Derivative of g

def gd(x):
    return -2*np.power(x,-2)*np.exp(-2/x) + 8*np.power(x,3)*np.exp(-np.power(x,4))

# gdd: Second derivative of g
def gdd(x):
    return -4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

# Newton's
def newton_update(x0,g,gd):
    return x0 - g(x0)/gd(x0)
# Main func
x0 = 1.00
condition = True
loops = 1
max_iter = 20
while condition and loops<max_iter:   
    x1 = newton_update(x0,gd,gdd)
    loops += 1
    condition = np.abs(x0-x1) >= 0.001 
    x0 = x1
    print('x =',x0)


if loops == max_iter:
    print('Solution failed to converge. Try another starting value!')
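One way to catch a mistake in hand-written derivatives, independent of SymPy, is a central finite-difference check. A minimal sketch, reusing g, gd and gdd from the snippet above (the helper fd and the sample points are illustrative, not from the original post):

import numpy as np

def fd(f, x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2*h)

for x in [0.5, 1.0, 2.0]:
    print('x =', x,
          '| gd error:', abs(gd(x) - fd(g, x)),
          '| gdd error:', abs(gdd(x) - fd(gd, x)))

A large gdd error at these points would flag the second derivative as the suspect before comparing against SymPy at all.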
Using sympy and lambdify:

import numpy as np
import sympy as sp

x = sp.symbols('x', real=True)
f_expr = 1.95 - sp.exp(-2/x) - 2*sp.exp(-x**4)
dfdx_expr = sp.diff(f_expr, x)
ddfdx_expr = sp.diff(dfdx_expr, x)

# lambdify
f = sp.lambdify([x],f_expr,"numpy")
dfdx = sp.lambdify([x], dfdx_expr,"numpy")
ddfdx = sp.lambdify([x], ddfdx_expr,"numpy")

# Newton's (reuses newton_update from the first snippet)
x0 = 1.0
condition = True
loops = 1
max_iter = 20
while condition and loops<max_iter:   
    x1 = newton_update(x0,dfdx,ddfdx)
    loops += 1
    condition = np.abs(x0-x1) >= 0.001 
    x0 = x1
    print('x =',x0)


if loops == max_iter:
    print('Solution failed to converge. Try another starting value!')
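Since both versions are supposed to encode the same derivatives, one way to locate the discrepancy is to rebuild the hand-written second derivative as a SymPy expression and subtract it from the generated one. A minimal sketch (gdd_expr is introduced here for illustration; it is not in the original post):

# hand-written second derivative from the first snippet, as a sympy expression
gdd_expr = (-4*x**-3*sp.exp(-2/x) - 4*x**-4*sp.exp(-2/x)
            + 24*x**2*sp.exp(-x**4) - 32*x**6*sp.exp(-x**4))

# zero means the formulas agree; a nonzero result isolates the differing term
print(sp.simplify(ddfdx_expr - gdd_expr))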

Whenever the sign of the derivative changes across an update, I halve the step inside the Newton update function. But I don't understand why the results are so different from the same starting point. Also, is it possible to get the same result from both?
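The step-halving mentioned here is not shown in the posted code; a minimal sketch of one way to implement it, keeping the parameter convention of newton_update (the name damped_newton_update is hypothetical, not from the original post):

def damped_newton_update(x0, g, gd):
    # full Newton step
    step = g(x0) / gd(x0)
    x1 = x0 - step
    # halve the step while g changes sign across the update,
    # i.e. while the step appears to have jumped over the root
    while g(x0) * g(x1) < 0 and abs(step) > 1e-12:
        step /= 2
        x1 = x0 - step
    return x1

Called as damped_newton_update(x0, gd, gdd), this halves the step whenever the first derivative changes sign across the update, as described above.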

The formula for the second derivative in the function gdd has an error: by the product rule, d/dx[-2*x**-2*exp(-2/x)] = 4*x**-3*exp(-2/x) - 4*x**-4*exp(-2/x), so the first term must be positive. Changing

def gdd(x):
    return -4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

to

def gdd(x):
    return 4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

should fix the problem and produce the same result in both cases, which will be

x = 1.90803013971
x = 3.96640484492
x = 6.6181614689
x = 10.5162392894
x = 16.3269006983
x = 25.0229734288
x = 38.0552735534
x = 57.5964036862
x = 86.9034400129
x = 130.860980508
x = 196.795321033
x = 295.695535237
x = 444.044999522
x = 666.568627836
x = 1000.35369299
x = 1501.03103981
x = 2252.04689304
x = 3378.57056168
x = 5068.35599056
Solution failed to converge. Try another starting value!
This indicates a problem with the choice of step size, as noted in the comments.
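The geometric growth of the iterates (each roughly 1.5 times the previous one) can be read off from the asymptotics: for large x the exp(-x**4) terms are negligible and exp(-2/x) is close to 1, so

gd(x)  ≈ -2/x**2
gdd(x) ≈  4/x**3

and the full Newton update becomes x - gd(x)/gdd(x) ≈ x + x/2 = 1.5*x, so the iteration runs off to infinity instead of converging; some form of step-size control, such as the halving described in the question, is needed to keep it near the critical point.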
