Python 3.x: Scipy minimize gives me "Desired error not necessarily achieved due to precision loss." but my code seems correct

Basically what the title says. I'm doing some initial practice with scipy minimize, but I can't figure out why it won't converge.

My prediction model is as follows:

import numpy as np
import scipy.optimize as so

def predict(X, betas):
    y_hat = np.dot(X, betas)
    return y_hat

X = np.array([[1,0],[1,-1],[1,2]])
betas = np.array([0.1,0.3])
y_hat = predict(X, betas)
print(y_hat)
This works as expected.

Then, my loss/gradient function is as follows:

def lossRSS(betas, X, y):
    y_hat = predict(X, betas)
    res = y_hat - y
    rss = np.sum(res * res)
    gradient = -2 * np.transpose(X).dot(res)
    return (rss, gradient)

X = np.array([[1,0],[1,-1],[1,2]])
betas = np.array([0.1,0.3])
y = np.array([0,0.4,2])
lossRSS(betas, X, y)
This also works as expected.

Finally, I implemented the minimization function as follows:

def minimization(X, y, lossfuncn):
    betas = np.array([0.1,0.3])
    result = so.minimize(lossfuncn, betas, args=(X, y), jac=True)
    print(result)

X = np.array([[1,0],[1,-1],[1,2]])
y = np.array([0,0.4,2])
minimization(X, y, lossRSS)
But I get the following output:

      fun: 2.06
 hess_inv: array([[1, 0],
       [0, 1]])
      jac: array([3.6, 4. ])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 53
      nit: 0
     njev: 41
   status: 2
  success: False
        x: array([0.1, 0.3])
And I can't figure out why. Is there a parameter of the optimize function that I'm misusing? I'm not too sharp on the theory behind minimization methods, but from what I know of minimization and optimization routines, it should work.


Any insight would be greatly appreciated.
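One way to debug a "precision loss" failure like this is to check the analytic gradient against a finite-difference estimate with `scipy.optimize.check_grad`. The sketch below (a hypothetical diagnostic, not part of the original question) rewrites the loss and gradient as separate functions so `check_grad` can compare them; a result far from zero flags an inconsistent gradient.

```python
import numpy as np
from scipy.optimize import check_grad

def rss(betas, X, y):
    res = np.dot(X, betas) - y          # res = y_hat - y, as in the question
    return np.sum(res * res)

def rss_grad(betas, X, y):
    res = np.dot(X, betas) - y
    return -2 * X.T.dot(res)            # the sign the question's code uses

X = np.array([[1, 0], [1, -1], [1, 2]])
y = np.array([0, 0.4, 2])
betas = np.array([0.1, 0.3])

# Norm of the difference between analytic and numeric gradients.
# A value far from 0 means the analytic gradient is wrong.
err = check_grad(rss, rss_grad, betas, X, y)
print(err)
```

With a correct gradient this value is tiny (on the order of the finite-difference step); a large value points straight at the gradient code.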

My problem was that I had

res = y_hat - y
instead of

res = y - y_hat
A basic mistake, which is probably exactly why I overlooked it. I decided to answer this rather than delete it, as a reminder that sometimes the error is something super silly that you think you're above.
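Putting the fix in place, here is a minimal sketch of the corrected loss function: with `res = y - y_hat`, the gradient `-2 * X.T.dot(res)` is now consistent with the loss, and `minimize` converges.

```python
import numpy as np
import scipy.optimize as so

def lossRSS(betas, X, y):
    y_hat = np.dot(X, betas)
    res = y - y_hat                      # the fix: y - y_hat, not y_hat - y
    rss = np.sum(res * res)
    gradient = -2 * np.transpose(X).dot(res)
    return (rss, gradient)

X = np.array([[1, 0], [1, -1], [1, 2]])
y = np.array([0, 0.4, 2])
result = so.minimize(lossRSS, np.array([0.1, 0.3]), args=(X, y), jac=True)
print(result.success, result.x)          # converges to the least-squares fit
```

For this small problem the least-squares solution can be checked by hand: solving the normal equations (X.T @ X) b = X.T @ y gives b = [0.6, 0.6].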