Homemade backpropagation in Python


I'm learning about neural networks. I've implemented the forward pass, but I'm running into trouble with backpropagation. I find it hard to understand the shapes of the matrices that store the weight and bias derivatives, and how to compute those derivatives on the backward pass. This is the network I want to implement:
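One rule of thumb that may help with the shape question: the gradient of the loss with respect to each parameter array always has exactly the same shape as that array, because gradient descent updates parameters elementwise. A quick sketch (the names `dW` and `db` are illustrative, not from the code below):

```python
import numpy as np

# For a layer with n_in inputs and n_out neurons, mirroring the init below:
n_in, n_out = 4, 2
W = np.random.rand(n_in, n_out) * 2 - 1   # weights, shape (n_in, n_out)
b = np.random.rand(1, n_out) * 2 - 1      # bias, shape (1, n_out)

# The gradients dL/dW and dL/db must match those shapes,
# because the update is elementwise: W -= lr * dW, b -= lr * db.
dW = np.zeros_like(W)
db = np.zeros_like(b)

assert dW.shape == W.shape == (4, 2)
assert db.shape == b.shape == (1, 2)
```

So backpropagation never needs any shapes beyond those already present in the forward pass.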

I have the following code:

import numpy as np

class Layer:
    def __init__(self, inputs, neurons, activation_func):
        self.activation_func = activation_func
        # weights and bias drawn uniformly from [-1, 1)
        self.weights = np.random.rand(inputs, neurons) * 2 - 1
        self.bias = np.random.rand(1, neurons) * 2 - 1

class Net:
    def __init__(self, inputs, topology, activation_func):
        # inputs: number of inputs per neuron in the input layer
        # topology: each element is the number of neurons in that layer;
        # len(topology) is the number of layers
        self.activation_func = activation_func
        self.output = []
        self.adds = []
        self.inputs = []
        layers = []
        for i in range(len(topology)):
            if i == 0:
                layers.append(Layer(inputs, topology[i], self.activation_func))
            else:
                layers.append(Layer(topology[i - 1], topology[i], self.activation_func))
        self.layers = layers
    
    def forward(self, inputs):
        output = []
        adds = []  # pre-activation sums z = W@x + b, kept for backprop
        self.inputs = inputs
        for i, l in enumerate(self.layers):
            if i == 0:
                # first layer: each input array goes through its own weight row
                output_aux = [[]]
                adds_aux = [[]]
                for j, x in enumerate(self.inputs):
                    z = l.weights[j]@x.T + l.bias[0][j]
                    adds_aux[0].append(z[0])
                    act = self.activation_func(z[0])
                    output_aux[0].append(act)
                output.append(np.array(output_aux))
                adds.append(np.array(adds_aux))
            else:
                # later layers: plain matrix product over the previous activations
                z = output[i - 1]@l.weights.T + l.bias
                adds.append(z)
                act = self.activation_func(z)
                output.append(act)
        self.output = output
        self.adds = adds
        return output[-1]

sigm = lambda x: 1/(1 + np.exp(-x))

topology = [2, 2, 2, 2] 
inputs_net = 4
inputs_test = [np.array([[0.56, 0.75]]), np.array([[0.23, 0.41]])]

net = Net(inputs_net, topology, sigm)
result = net.forward(inputs_test)
The idea is that the network takes the fixed input values and returns 1 and 0 in the respective output neurons.

How can I build the backpropagation algorithm for this example?
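The general recipe: for each layer, compute an error term delta, take dW = (previous activations)^T @ delta and db = column-sum of delta, then push delta back through that layer's weights. Below is a minimal self-contained sketch, not a drop-in fix for the `Net` class above: it assumes mean-squared-error loss, sigmoid activations everywhere, and the two samples stacked into one 2-D batch array instead of the list-of-arrays layout; the names `init_layers`, `forward`, `backward` and `dsigm` are illustrative.

```python
import numpy as np

sigm = lambda x: 1 / (1 + np.exp(-x))
dsigm = lambda a: a * (1 - a)  # sigmoid derivative, written in terms of a = sigm(z)

def init_layers(n_inputs, topology, rng):
    # one (W, b) pair per layer; same uniform [-1, 1) init as the question
    sizes = [n_inputs] + topology
    return [(rng.uniform(-1, 1, (sizes[i], sizes[i + 1])),
             rng.uniform(-1, 1, (1, sizes[i + 1])))
            for i in range(len(topology))]

def forward(x, layers):
    activations = [x]  # activations[0] is the input batch
    for W, b in layers:
        activations.append(sigm(activations[-1] @ W + b))
    return activations

def backward(activations, layers, y, lr=0.5):
    # output-layer error for MSE loss: dL/dz = (a - y) * sigm'(z)
    delta = (activations[-1] - y) * dsigm(activations[-1])
    for i in reversed(range(len(layers))):
        W, b = layers[i]
        dW = activations[i].T @ delta           # same shape as W
        db = delta.sum(axis=0, keepdims=True)   # same shape as b
        # error for the previous layer, computed with the *old* weights
        delta = (delta @ W.T) * dsigm(activations[i])
        layers[i] = (W - lr * dW, b - lr * db)

rng = np.random.default_rng(0)
layers = init_layers(2, [2, 2, 2], rng)
x = np.array([[0.56, 0.75], [0.23, 0.41]])  # two samples, two features each
y = np.array([[1.0, 0.0], [0.0, 1.0]])      # desired output per sample

loss_before = ((forward(x, layers)[-1] - y) ** 2).mean()
for _ in range(5000):
    backward(forward(x, layers), layers, y)
loss_after = ((forward(x, layers)[-1] - y) ** 2).mean()
```

With repeated gradient-descent steps the loss should fall, driving the two outputs toward [1, 0] and [0, 1] respectively. The key shape facts are in the comments: every gradient has the same shape as the parameter it updates, and delta for layer i-1 is obtained by multiplying by W.T, which undoes the shape change the forward pass made.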