Python scipy.optimize.fmin_l_bfgs_b error

My code implements an active learning algorithm, using L-BFGS optimization. I want to optimize four parameters: alpha, beta, w and gamma.

However, when I run the code below, I get this error:

optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0 = x0, args = (X,Y,Z), fprime = func_grad)
  File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 188, in fmin_l_bfgs_b
    **opts)
  File "C:\Python27\lib\site-packages\scipy\optimize\lbfgsb.py", line 311, in _minimize_lbfgsb
    isave, dsave)
_lbfgsb.error: failed in converting 7th argument `g' of _lbfgsb.setulb to C/Fortran array
0-th dimension must be fixed to 22 but got 4
My code is:

# -*- coding: utf-8 -*-
import numpy as np
import scipy as sp
import scipy.stats as sps

num_labeler = 3
num_instance = 5

X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
Z = np.array([1,0,1,0,1])
Y = np.array([[1,0,1],[0,1,0],[0,0,0],[1,1,1],[1,0,0]])

W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
gamma = np.array([1,1,1,1,1])
alpha = np.array([1,1,1,1])
beta = 1
para = np.array([1,1,1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,1,1,1,1,1])

def get_params(para):
    # extract parameters from 1D parameter vector
    assert len(para) == 22
    alpha = para[0:4]
    beta = para[4]
    W = para[5:17].reshape(3, 4)
    gamma = para[17:]
    return alpha, beta, gamma, W

def log_p_y_xz(yit,zi,sigmati): #log P(y_it|x_i,z_i)
    return np.log(sps.norm(zi,sigmati).pdf(yit))#tested

def log_p_z_x(alpha,beta,xi): #log P(z_i=1|x_i)
    return -np.log(1+np.exp(-np.dot(alpha,xi)-beta))#tested

def sigma_eta_ti(xi, w_t, gamma_t): # 1+exp(-w_t x_i -gamma_t)^-1
    return 1/(1+np.exp(-np.dot(xi,w_t)-gamma_t)) #tested

def df_alpha(X,Y,Z,W,alpha,beta,gamma):#df/dalpha
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)*X[i]/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance))
    #tested
def df_beta(X,Y,Z,W,alpha,beta,gamma):#df/dbelta
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance))

def df_w(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dw
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]for t in range(num_labeler)) for i in range (num_instance))

def df_gamma(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dgamma
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))for t in range(num_labeler)) for i in range (num_instance))

def func(para, *args):
    alpha, beta, gamma, W = get_params(para)
    #args
    X = args[0]
    Y = args[1]
    Z = args[2]        
    return  np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i],W[t],gamma[t]))+log_p_z_x(alpha, beta, X[i]) for t in range(num_labeler)) for i in range (num_instance))
    #tested

def func_grad(para, *args):
    alpha, beta, gamma, W = get_params(para)
    #args
    X = args[0]
    Y = args[1]
    Z = args[2]
    #gradiants
    d_f_a = df_alpha(X,Y,Z,W,alpha,beta,gamma)
    d_f_b = df_beta(X,Y,Z,W,alpha,beta,gamma)
    d_f_w = df_w(X,Y,Z,W,alpha,beta,gamma)
    d_f_g = df_gamma(X,Y,Z,W,alpha,beta,gamma)
    return np.array([d_f_a, d_f_b,d_f_w,d_f_g])

x0 = np.concatenate([np.ravel(alpha), np.ravel(beta), np.ravel(W), np.ravel(gamma)])

optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0 = x0, args = (X,Y,Z), fprime = func_grad)  

I'm not sure what the problem is. Could func_grad be causing it? Could someone take a look? Thanks.

You need to take the derivative of func with respect to every element in the concatenated array of alpha, beta, w, gamma parameters, so func_grad should return a single 1D array of the same length as x0 (i.e. 22) - hence the complaint that the 0-th dimension "must be fixed to 22 but got 4". Instead, it returns a jumbled mix of two arrays and two scalar floats nested inside an np.object array:

In [1]: func_grad(x0, X, Y, Z)
Out[1]: 
array([array([ 0.00681272,  0.00681272,  0.00681272,  0.00681272]),
       0.006684719133999417,
       array([-0.01351227, -0.01351227, -0.01351227, -0.01351227]),
       -0.013639910534587798], dtype=object)
Part of the problem is that np.array([d_f_a, d_f_b, d_f_w, d_f_g]) does not concatenate these objects into a single 1D array, since some of them are numpy arrays and some are Python floats. That part is easily solved by using np.hstack([d_f_a, d_f_b, d_f_w, d_f_g]) instead.
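For example, a minimal illustration with dummy stand-ins for the four gradient pieces (the values are placeholders, only the shapes matter):

import numpy as np

d_f_a = np.zeros(4)   # (4,) array, like df_alpha's output
d_f_b = 0.5           # Python float, like df_beta's output
d_f_w = np.zeros(4)   # (4,) array
d_f_g = -0.2          # Python float

# np.array leaves these as a length-4 object array (newer NumPy requires
# dtype=object to be spelled out), which the Fortran routine cannot use:
obj = np.array([d_f_a, d_f_b, d_f_w, d_f_g], dtype=object)
print(obj.shape)   # (4,)

# np.hstack flattens everything into one 1D float array:
flat = np.hstack([d_f_a, d_f_b, d_f_w, d_f_g])
print(flat.shape)  # (10,) - still short of the 22 entries needed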


However, the combined size of these objects is still only 10, whereas the output of func_grad needs to be a 22-long vector. You will need to look at your df_* functions again. In particular, W is a (3, 4) array, but df_w returns only a (4,) vector, and gamma is a (5,) vector, yet df_gamma returns only a scalar.
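To make the required shapes concrete, here is a structural sketch of how func_grad would assemble the pieces once the df_* functions are reworked. The derivative formulas themselves are not fixed here; the shapes in the comments are assumptions read off get_params and x0:

def func_grad(para, *args):
    alpha, beta, gamma, W = get_params(para)
    X, Y, Z = args
    d_f_a = df_alpha(X, Y, Z, W, alpha, beta, gamma)  # needs shape (4,), like alpha
    d_f_b = df_beta(X, Y, Z, W, alpha, beta, gamma)   # a scalar, like beta
    d_f_w = df_w(X, Y, Z, W, alpha, beta, gamma)      # needs shape (3, 4), like W
    d_f_g = df_gamma(X, Y, Z, W, alpha, beta, gamma)  # needs shape (5,), like gamma
    # flatten in the same order x0 was built, giving one (22,) vector
    return np.hstack([np.ravel(d_f_a), d_f_b, np.ravel(d_f_w), np.ravel(d_f_g)])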

Thank you very much for your answer. I checked the formulas of the algorithm in the original paper, and there, for example, df_gamma is a scalar. So I don't know how the authors of that paper managed to implement the algorithm. My other concern is that when I change the last line of my code to optimBFGS = sp.optimize.minimize(func, x0=x0, args=(X,Y,Z)) (without func_grad), I can get a result, which is a bit strange.

That is to be expected: if you don't pass a gradient function to minimize, it will try to approximate the gradient using first-order finite differences. This is usually less efficient and less numerically stable, but it can still let you find an answer. Either gamma is a (5,) vector or df_gamma is a scalar; it makes no sense for both of those things to be true.
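If you want to see what that finite-difference machinery produces, scipy.optimize.approx_fprime computes the approximate gradient directly, and scipy.optimize.check_grad compares it against an analytic fprime. A small sketch, assuming func_grad has been fixed to return a (22,) vector as discussed above:

from scipy.optimize import approx_fprime, check_grad

# finite-difference gradient of func at x0: one entry per element of x0
eps = np.sqrt(np.finfo(float).eps)
fd_grad = approx_fprime(x0, func, eps, X, Y, Z)
print(fd_grad.shape)  # (22,)

# once func_grad returns a (22,) vector, check_grad reports the norm of
# the difference between the analytic and finite-difference gradients
err = check_grad(func, func_grad, x0, X, Y, Z)
print(err)  # should be close to zero if the derivatives are correct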