Can SciPy's minimize function optimize the values of a class data member in Python?

I'm working on an artificial neural network (ANN) that uses gradient descent, and it handles simple AND/XOR problems, but I'm finding it much harder to land on good weights for more complex problems. My solution is to use SciPy's minimize function, but I've run into some trouble.

My ANN is a class; when an ANN object is created, the weights are created at initialization as the data member self.__weights. This is handy because I never have to pass a weights argument into any of the class's methods, and it worked in my earlier, non-SciPy implementation.
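
For context, here is a minimal sketch of the kind of initialization being described (the layer sizes, the flat weight vector, and the random initialization are my assumptions, not code from the question):

    import numpy as np

    class ANN:
        def __init__(self, layer_sizes):
            #One (rows, cols) shape per layer transition; +1 column for the bias.
            self.__layers = len(layer_sizes)
            self.__weight_sizes = [(layer_sizes[i+1], layer_sizes[i]+1)
                                   for i in range(self.__layers-1)]
            #All weights live in one flat vector, which is also the form
            #scipy.optimize.minimize expects for its initial guess x0.
            total = sum(rows*cols for rows, cols in self.__weight_sizes)
            self.__weights = np.random.randn(total)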

I use self.__weights as the initial guess for the minimize function and pass the input data and the correct output values it needs through the other arguments. However, the optimizer apparently insists on passing the weights as a parameter into the methods (CostFunction/Feedforward/Backprop) that don't need them, since those methods can simply read self.__weights. For example, when my Feedforward method is called, the weights are passed in as its input, and it complains that the input has the wrong size; that happens because the weight vector arrives instead of the input data, which was passed through minimize's optional 'args=' parameter.

So, with all that explained: does the value being optimized actually have to be a parameter of the function used to optimize it, rather than just a class data member that any method in the class can read? Every example I've found that uses a class passes in 'weights' (or whatever value is being optimized) as an argument, rather than using a data member of the class.
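
For reference, scipy.optimize.minimize always calls the objective as fun(x, *args): the current trial vector goes in the first slot, and whatever was given via args= is appended after it. Here is a standalone sketch of that convention (the least-squares objective is just an illustration):

    import numpy as np
    from scipy.optimize import minimize

    def cost(x, data, target):
        #x is always the vector being optimized; data and target arrive via args=.
        return np.sum((data.dot(x) - target)**2)

    data = np.array([[1.0, 2.0], [3.0, 4.0]])
    target = np.array([1.0, 0.0])
    result = minimize(cost, x0=np.zeros(2), args=(data, target))
    print(result.x)  #the optimized vector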

Edit:

Below is the code from my Train method. Its parameters are the input data and the targets.

self.CostFunction and self.Weight_Grad both take the same two arguments. Both of them use the Feedforward method to get the network's output and then use that information to perform their respective tasks. Whenever I call minimize, it seems to pass self.__weights into self.CostFunction as the input argument, which then gets passed into the Feedforward method as its input, and I get an error. If I print the input that arrives in Feedforward, it is the value of self.__weights. That's how I know self.__weights is being passed in as a parameter when it shouldn't be.

So I figured I'd create a dummy parameter on all of the methods (self.CostFunction, self.Weight_Grad, Feedforward) to receive the self.__weights value, but there was no change in the weights or the output. What do I need to do to get self.__weights updated?
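
One pattern that fits what is being attempted here, sketched on a deliberately tiny model (the names Model, predict, cost, and train are mine, not from the question): give the cost function a leading weights parameter, write each trial vector back into the private member, and let every other method keep reading that member, exactly as Feedforward does.

    import numpy as np
    from scipy.optimize import minimize

    class Model:
        def __init__(self):
            self.__weights = np.zeros(2)

        def predict(self, x):
            #Reads the data member, the way Feedforward reads self.__weights.
            return self.__weights[0] + self.__weights[1]*x

        def cost(self, weights, x, y):
            #minimize puts the trial vector in the first slot; syncing it into
            #the data member lets every other method see the current weights.
            self.__weights = weights
            return np.sum((self.predict(x) - y)**2)

        def train(self, x, y):
            result = minimize(self.cost, x0=self.__weights, args=(x, y))
            self.__weights = result.x  #keep the optimized values
            return result

    m = Model()
    x_data = np.array([0.0, 1.0, 2.0, 3.0])
    y_data = np.array([1.0, 3.0, 5.0, 7.0])
    print(m.train(x_data, y_data).x)  #converges near [1.0, 2.0]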

If it helps, here are the methods:

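    #Assumes "import numpy as np" and "from scipy.optimize import minimize" at module level.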
    def Feedforward(self,input):

        #Code to take self.__weights and convert to list of matrices. 
        weight_matrix_list = []
        prev_val = 0


        for i in range(len(self.__weight_sizes)):
            curr_w_size = self.__weight_sizes[i]
            weight_count = curr_w_size[0]*curr_w_size[1]
            matrix_elements = self.__weights[prev_val:prev_val+weight_count]
            weight_matrix_list.append(matrix_elements.reshape(curr_w_size))
            #Advance the offset so each matrix is built from its own slice.
            prev_val += weight_count


        self.__input_cases = np.shape(input)[0]

        #Empty list to hold the output of every layer.
        output_list = []
        #Appends the input as the output of the 1st (input) layer.
        output_list.append(input)

        for i in range(self.__layers-1):
            if i == 0:

                print(self.__input_cases)
                print(input)
                X = np.concatenate((np.ones((self.__input_cases,1)),input),1)

                output = self.sigmoid(np.dot(X,weight_matrix_list[0].T))
                output_list.append(output)
            else:
                output = self.sigmoid(np.dot(np.concatenate((np.ones((self.__input_cases,1)),output),1),weight_matrix_list[i].T))                 
                output_list.append(output)


        return output_list

    def CostFunction(self,input_data,target,error_func=1):
        """Gives the cost of using a particular weight matrix 
        based off of the input and targeted output"""
        print("Cost")
        #Run the network to get output using current theta matrices.
        output = self.Feedforward(input_data)[-1]


        #Determines number of input/training examples
        m = np.shape(input_data)[0]

        #####Allows user to choose Cost Functions.##### 

        #
        #Log Based Error Function
        #
        if error_func == 0:
            error = np.multiply(-target,np.log(output))-np.multiply((1-target),np.log(1-output))
            total_error = np.sum(np.sum(error))
        #    
        #Squared Error Cost Function
        #
        elif error_func == 1:
            error = (target - output)**2
            total_error = (1/(2*m)) * np.sum(np.sum(error))

        return total_error

    def Weight_Grad(self,input_data,target):
        print('Grad')
        weight_matrix_list = []
        prev_val = 0

        for i in range(len(self.__weight_sizes)):
            curr_w_size = self.__weight_sizes[i]
            weight_count = curr_w_size[0]*curr_w_size[1]
            matrix_elements = self.__weights[prev_val:prev_val+weight_count]
            weight_matrix_list.append(matrix_elements.reshape(curr_w_size))
            #Advance the offset so each matrix is built from its own slice.
            prev_val += weight_count

        output_list = self.Feedforward(input_data)

        #Finds the Deltas for Each Layer
        # 
        deltas = []
        for i in range(self.__layers - 1):
            #Finds Error Delta for the last layer
            if i == 0:

                error = (target-output_list[-1])

                error_delta = -1*np.multiply(error,np.multiply(output_list[-1],(1-output_list[-1])))
                deltas.append(error_delta)
            #Finds Error Delta for the hidden layers   
            else:
                #Weight matrices have bias values removed
                error_delta = np.multiply(np.dot(deltas[-1],weight_matrix_list[-i][:,1:]),output_list[-i-1]*(1-output_list[-i-1]))
                deltas.append(error_delta)

        #
        #Finds the Deltas for each Weight Matrix
        #
        Weight_Delta_List = []
        deltas.reverse()
        for i in range(len(weight_matrix_list)):

            current_weight_delta = (1/self.__input_cases) * np.dot(deltas[i].T,np.concatenate((np.ones((self.__input_cases,1)),output_list[i]),1))
            Weight_Delta_List.append(current_weight_delta)




        #
        #Converts Weight Delta List to a single vector
        #        
        Weight_Delta_Vector = np.array([])
        for i in range(len(Weight_Delta_List)):
            Weight_Delta_Vector = np.concatenate((Weight_Delta_Vector,Weight_Delta_List[i].flatten()))
        print("WDV Shape:",np.shape(Weight_Delta_Vector))
        return Weight_Delta_Vector

    def Train(self,input_data,target):           

        opt_theta = minimize(self.CostFunction,x0=self.__weights,args = (input_data,target),method='Newton-CG',jac= self.Weight_Grad)       
        print(opt_theta)
        self.__weights = opt_theta.x

        print("Done")

Show us some minimal code that demonstrates the problem. — I've added some code and gone into more detail.