Weights in a Python/NumPy neural network are not updating, and the error is static

Tags: python, numpy, machine-learning, backpropagation

I'm trying to build a neural network on the MNIST dataset for a homework assignment. I'm not asking anyone to do the assignment for me; I'm just having a hard time figuring out why the training accuracy and test accuracy seem to be static for every epoch.

It seems like my method of updating the weights isn't working.

Epoch: 0, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 1, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 2, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
Epoch: 3, Train Accuracy: 10.22%, Train Cost: 3.86, Test Accuracy: 10.1%
.
.
.
However, when I run the actual forward and backward lines in a loop, without any of the class/method "fluff", the cost goes down. I just can't seem to get it to work with the current class setup.
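
To show what I mean by running it without the class, here is roughly the stripped-down loop, reusing the same helpers (sigmoid, softmax, d_sigmoid), the W1/b1/W2/b2 initialization, and the MNIST arrays that appear in the full code further down. This is only a sketch of what I mean, not code I'm claiming is correct:

lr = 0.1
for i in range(len(x_train)):
    # normalized input and one-hot target, same as in the full code below
    X = x_train[i].reshape(1, 784) / 255.
    Y = np.zeros(10); Y[y_train[i]] = 1

    # forward pass
    Z1 = np.matmul(X, W1) + b1
    X2 = sigmoid(Z1)
    Z2 = np.matmul(X2, W2) + b2
    Y_hat = softmax(Z2)

    # backward pass, same gradient lines as in the backward method below
    dJ_dZ2 = Y_hat - Y
    dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
    dJ_db2 = Y_hat - Y
    inner_mat = np.matmul(Y - Y_hat, np.transpose(W2))
    dJ_dW1 = np.matmul(np.transpose(X), inner_mat) * d_sigmoid(Z1)
    dJ_db1 = np.matmul(Y - Y_hat, np.transpose(W2)) * d_sigmoid(Z1)

    # update the weights and biases directly
    W2 = W2 - lr * dJ_dW2
    b2 = b2 - lr * dJ_db2
    W1 = W1 - lr * dJ_dW1
    b1 = b1 - lr * dJ_db1

    cost = -1 * np.sum(Y * np.log(Y_hat))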

I've also tried building my own methods that explicitly pass the weights and biases between the backprop and feedforward methods (a simplified sketch of that attempt is shown after the backward method below), but those changes did not solve the gradient descent problem.

I'm fairly sure it has to do with how the backprop method is defined in the NeuralNetwork class below. I've been struggling to find a way to update the weights by accessing the weight and bias variables in the main training loop.

def backward(self, Y_hat, Y):
        '''
        Backward pass through network. Update parameters 

        INPUT
            Y_hat: Network predicted 
                shape: (?, 10)

            Y: Correct target
                shape: (?, 10)

        RETURN 
            cost: calculate J for errors 
                type: (float)

        '''

        #Naked Backprop
        dJ_dZ2 = Y_hat - Y
        dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
        dJ_db2 = Y_hat - Y
        dJ_dX2 =  np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
        dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
        inner_mat = np.matmul(Y-Y_hat,np.transpose(NeuralNetwork.W2))
        dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)
        dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

        lr = 0.1

        # weight updates here
        #just line 'em up and do lr * the dJ_.. vars you found above
        NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
        NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
        NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
        NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

        # calculate the cost
        cost = -1 * np.sum(Y * np.log(Y_hat))

        # calc gradients

        # weight updates

        return cost#, W1, W2, b1, b2
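
As mentioned above, this is roughly the kind of refactor I also tried, where the parameters are passed in and returned explicitly instead of living on the class. The names forward_explicit / backward_explicit are just placeholders for this sketch, and the gradient lines are the same ones as in backward above; it did not fix the static accuracy either:

def forward_explicit(X, W1, b1, W2, b2):
    # same forward lines, but taking the parameters as arguments
    Z1 = np.matmul(X, W1) + b1
    X2 = sigmoid(Z1)
    Z2 = np.matmul(X2, W2) + b2
    Y_hat = softmax(Z2)
    # also return the intermediates the backward pass needs
    return Y_hat, X2, Z1

def backward_explicit(X, X2, Z1, Y_hat, Y, W1, b1, W2, b2, lr=0.1):
    # same gradient lines as in backward() above
    dJ_dZ2 = Y_hat - Y
    dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
    dJ_db2 = Y_hat - Y
    inner_mat = np.matmul(Y - Y_hat, np.transpose(W2))
    dJ_dW1 = np.matmul(np.transpose(X), inner_mat) * d_sigmoid(Z1)
    dJ_db1 = np.matmul(Y - Y_hat, np.transpose(W2)) * d_sigmoid(Z1)

    # apply the updates and hand the new parameters back to the caller
    W2 = W2 - lr * dJ_dW2
    b2 = b2 - lr * dJ_db2
    W1 = W1 - lr * dJ_dW1
    b1 = b1 - lr * dJ_db1

    cost = -1 * np.sum(Y * np.log(Y_hat))
    return cost, W1, b1, W2, b2

# in the training loop, the returned parameters get fed back in on the next step:
# Y_hat, X2, Z1 = forward_explicit(X, W1, b1, W2, b2)
# cost, W1, b1, W2, b2 = backward_explicit(X, X2, Z1, Y_hat, Y, W1, b1, W2, b2)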
I'm really at a loss here, so any help is greatly appreciated.

The full code is shown here:

import keras
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist

np.random.seed(0)

"""### Load MNIST Dataset"""

(x_train, y_train), (x_test, y_test) = mnist.load_data()

X = x_train[0].reshape(1,-1)/255.; Y = y_train[0]
zeros = np.zeros(10); zeros[Y] = 1
Y = zeros

#Here we implement the forward pass for the network using the single example, $X$, from above

### Initialize weights and Biases

num_hidden_nodes = 200 
num_classes = 10

# init weights
#first set of weights (these are what the input matrix is multiplied by)
W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
#this is the first bias layer and i think it's a 200 dimensional vector of the biases that go into each neuron before the sigmoid function.
b1 = np.zeros((1,num_hidden_nodes))

#again this are the weights for the 2nd layer that are multiplied by the activation output of the 1st layer
W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
#these are the biases that are added to each neuron before the final softmax activation.
b2 = np.zeros((1,num_classes))


# multiply input with weights
Z1 = np.add(np.matmul(X,W1), b1)

def sigmoid(z):
    return 1 / (1 + np.exp(- z))

def d_sigmoid(g):
    return sigmoid(g) * (1. - sigmoid(g))

# activation function of Z1
X2 = sigmoid(Z1)


Z2 = np.add(np.matmul(X2,W2), b2)

# softmax
def softmax(z):
    # subtracting the max adds numerical stability
    shiftx = z - np.max(z)
    exps = np.exp(shiftx)
    return exps / np.sum(exps)

def d_softmax(Y_hat, Y):
    return Y_hat - Y

# the hypothesis, 
Y_hat = softmax(Z2)

"""Initially the network guesses all categories equally. As we perform backprop the network will get better at discerning images and their categories."""


"""### Calculate Cost"""

cost = -1 * np.sum(Y * np.log(Y_hat))


#so i think the main thing here is like a nested chain rule thing, where we find the change in the cost with respect to each 
# set of matrix weights and biases?

#here is probably the order of how we do things based on whats in math below...
'''
1. find the partial deriv of the cost function with respect to the output of the second layer, without the softmax it looks like for some reason?
2. find the partial deriv of the cost function with respect to the weights of the second layer, which is dope cause we can re-use the partial deriv from step 1
3. this one I know intuitively we're looking for the partial deriv of cost with respect to the bias term of the second layer, but how TF does that math translate into 
numpy? is that the same y_hat - Y from the first step? where is there another Y_hat - y?
4. This is also confusing cause I know where to get the weights for layer 2 from and how to transpose them, but again, where is the Y_hat - Y?
5. Here we take the missing partial deriv from step 4 and multiply it by the d_sigmoid function of the first layer outputs before activations.
6. In this step we multiply the first layer weights (transposed) by the var from 5
7. And this is weird too, this just seems like the same step as number 5 repeated for some reason but with y-y_hat instead of y_hat-y
'''
#look at tutorials like this https://www.youtube.com/watch?v=7qYtIveJ6hU
#I think the most backprop layer steps are fine without biases but how do we find the bias derivatives
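# For reference, I *think* the gradients for this setup (sigmoid hidden layer,
# softmax output, cross-entropy cost) should work out to the following (please
# correct me if this is wrong; it's just my working assumption for the steps below):
#   dJ/dZ2 = Y_hat - Y
#   dJ/dW2 = X2.T @ (Y_hat - Y)
#   dJ/db2 = Y_hat - Y
#   dJ/dX2 = (Y_hat - Y) @ W2.T
#   dJ/dZ1 = dJ/dX2 * d_sigmoid(Z1)
#   dJ/dW1 = X.T @ dJ/dZ1
#   dJ/db1 = dJ/dZ1
# i.e. each bias derivative is just the corresponding dJ/dZ term.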

#maybe just the hypothesis matrix minus the actual y matrix?
dJ_dZ2 = Y_hat - Y


#find partial deriv of cost w respect to 2nd layer weights
dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)


#finding the partial deriv of cost with respect to the 2nd layer biases
#I'm still not 100% sure why this is here and why it works out to Y_hat - Y
dJ_db2 = Y_hat - Y


#finding the partial deriv of cost with respect to 2nd layer inputs
dJ_dX2 =  np.matmul(dJ_db2, np.transpose(W2))



#finding the partial deriv of cost with respect to Activation of layer 1
dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)



#y-yhat matmul 2nd layer weights
#I added the transpose to the W2 var because the matrices were not compatible sizes without it
inner_mat = np.matmul(Y-Y_hat,np.transpose(W2))
dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)


class NeuralNetwork:
    # set learning rate
    lr = 0.01

    # init weights
    W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
    b1 = np.zeros((1,num_hidden_nodes))

    W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
    b2 = np.zeros((1,num_classes))


    def __init__(self, num_hidden_nodes, num_classes, lr=0.01):
        '''
        # set learning rate
        lr = lr

        # init weights
        W1 = np.random.uniform(-1e-3,1e-3,size=(784,num_hidden_nodes))
        b1 = np.zeros((1,num_hidden_nodes))

        W2 = np.random.uniform(-1e-3,1e-3,size=(num_hidden_nodes,num_classes))
        b2 = np.zeros((1,num_classes))
    '''
    def forward(self, X1):
        '''
        Forward pass through the network

        INPUT
            X: input to network
                shape: (?, 784)

        RETURN
            Y_hat: prediction from output of network 
                shape: (?, 10)
        '''
        Z1 = np.add(np.matmul(X,W1), b1)
        X2 =  sigmoid(Z1)# activation function of Z1
        Z2 = np.add(np.matmul(X2,W2), b2)
        Y_hat =  softmax(Z2)

        #return the hypothesis
        return Y_hat

        # store input for backward pass

        # you can basically copy and past what you did in the forward pass above here

        # think about what you need to store for the backward pass

        return 

    def backward(self, Y_hat, Y):
        '''
        Backward pass through network. Update parameters 

        INPUT
            Y_hat: Network predicted 
                shape: (?, 10)

            Y: Correct target
                shape: (?, 10)

        RETURN 
            cost: calculate J for errors 
                type: (float)

        '''

        #Naked Backprop
        dJ_dZ2 = Y_hat - Y
        dJ_dW2 = np.matmul(np.transpose(X2), dJ_dZ2)
        dJ_db2 = Y_hat - Y
        dJ_dX2 =  np.matmul(dJ_db2, np.transpose(NeuralNetwork.W2))
        dJ_dZ1 = dJ_dX2 * d_sigmoid(Z1)
        inner_mat = np.matmul(Y-Y_hat,np.transpose(NeuralNetwork.W2))
        dJ_dW1 = np.matmul(np.transpose(X),inner_mat) * d_sigmoid(Z1)
        dJ_db1 = np.matmul(Y - Y_hat, np.transpose(NeuralNetwork.W2)) * d_sigmoid(Z1)

        lr = 0.1

        # weight updates here
        #just line 'em up and do lr * the dJ_.. vars you found above
        NeuralNetwork.W2 = NeuralNetwork.W2 - lr * dJ_dW2
        NeuralNetwork.b2 = NeuralNetwork.b2 - lr * dJ_db2
        NeuralNetwork.W1 = NeuralNetwork.W1 - lr * dJ_dW1
        NeuralNetwork.b1 = NeuralNetwork.b1 - lr * dJ_db1

        # calculate the cost
        cost = -1 * np.sum(Y * np.log(Y_hat))

        # calc gradients

        # weight updates

        return cost#, W1, W2, b1, b2

nn = NeuralNetwork(200,10,lr=.01)
num_train = float(len(x_train)) 
num_test = float(len(x_test))

for epoch in range(10):
    train_correct = 0; train_cost = 0
    # training loop
    for i in range(len(x_train)):
        x = x_train[i]; y = y_train[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass through network
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was accurate
        if pred_num == y:
            train_correct += 1

        # make a one hot categorical vector; same as keras.utils.to_categorical()
        zeros = np.zeros(10); zeros[y] = 1
        Y = zeros

        # compute gradients and update weights
        train_cost += nn.backward(Y_hat, Y)

    test_correct = 0
    # validation loop
    for i in range(len(x_test)):
        x = x_test[i]; y = y_test[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was correct
        if pred_num == y:
            test_correct += 1

        # no backward pass here!

    # compute average metrics for train and test
    train_correct = round(100*(train_correct/num_train), 2)
    test_correct = round(100*(test_correct/num_test ), 2)
    train_cost = round( train_cost/num_train, 2)

    # print status message every epoch
    log_message = 'Epoch: {epoch}, Train Accuracy: {train_acc}%, Train Cost: {train_cost}, Test Accuracy: {test_acc}%'.format(
        epoch=epoch, 
        train_acc=train_correct, 
        train_cost=train_cost, 
        test_acc=test_correct
    )
    print (log_message)




Also, the project is in a Colab / ipynb notebook.

I believe this is pretty clear in this part of your loop:

for epoch in range(10):
    train_correct = 0; train_cost = 0
    # training loop
    for i in range(len(x_train)):
        x = x_train[i]; y = y_train[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass through network
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was accurate
        if pred_num == y:
            train_correct += 1

        # make a one hot categorical vector; same as keras.utils.to_categorical()
        zeros = np.zeros(10); zeros[y] = 1
        Y = zeros

        # compute gradients and update weights
        train_cost += nn.backward(Y_hat, Y)

    test_correct = 0
    # validation loop
    for i in range(len(x_test)):
        x = x_test[i]; y = y_test[i]
        # standardizing input to range 0 to 1
        X = x.reshape(1,784) /255.

        # forward pass
        Y_hat = nn.forward(X)

        # get pred number
        pred_num = np.argmax(Y_hat)

        # check if prediction was correct
        if pred_num == y:
            test_correct += 1

        # no backward pass here!

    # compute average metrics for train and test
    train_correct = round(100*(train_correct/num_train), 2)
    test_correct = round(100*(test_correct/num_test ), 2)
    train_cost = round( train_cost/num_train, 2)

    # print status message every epoch
    log_message = 'Epoch: {epoch}, Train Accuracy: {train_acc}%, Train Cost: {train_cost}, Test Accuracy: {test_acc}%'.format(
        epoch=epoch, 
        train_acc=train_correct, 
        train_cost=train_cost, 
        test_acc=test_correct
    )
    print (log_message)

For each of the 10 epochs in your loop, you set your train_correct and train_cost to 0, so there is no update after each epoch.

Hey Celius, I tried moving "train_correct" and "test_cost" outside the for loop, but I'm still getting static accuracy.

You don't need to move them out; you may just not be having them update inside your loop. Make sure they do, and then your accuracy will probably update as well.

I am updating the weights in the loop: in the backprop method I subtract the gradient * learning rate from the previous weights/biases.