Python 3.x: Why doesn't this simple neural network converge on XOR?

Tags: python-3.x, neural-network, xor, backpropagation, convergence

The network code below works, but it is far too slow. By that I mean: the network should reach 99% accuracy after 100 epochs with a learning rate of 0.2, yet mine never gets above 97%, even after 1900 epochs:

Epoch 0, Inputs [0 0], Outputs [-0.83054376], Targets [0]
Epoch 100, Inputs [0 1], Outputs [ 0.72563824], Targets [1]
Epoch 200, Inputs [1 0], Outputs [ 0.87570863], Targets [1]
Epoch 300, Inputs [0 1], Outputs [ 0.90996706], Targets [1]
Epoch 400, Inputs [1 1], Outputs [ 0.00204791], Targets [0]
Epoch 500, Inputs [0 1], Outputs [ 0.93396672], Targets [1]
Epoch 600, Inputs [0 0], Outputs [ 0.00006375], Targets [0]
Epoch 700, Inputs [0 1], Outputs [ 0.94778227], Targets [1]
Epoch 800, Inputs [1 1], Outputs [-0.00149935], Targets [0]
Epoch 900, Inputs [0 0], Outputs [-0.00122716], Targets [0]
Epoch 1000, Inputs [0 0], Outputs [ 0.00457281], Targets [0]
Epoch 1100, Inputs [0 1], Outputs [ 0.95921556], Targets [1]
Epoch 1200, Inputs [0 1], Outputs [ 0.96001748], Targets [1]
Epoch 1300, Inputs [1 0], Outputs [ 0.96071742], Targets [1]
Epoch 1400, Inputs [1 1], Outputs [ 0.00110912], Targets [0]
Epoch 1500, Inputs [0 0], Outputs [-0.00012382], Targets [0]
Epoch 1600, Inputs [1 0], Outputs [ 0.9640324], Targets [1]
Epoch 1700, Inputs [1 0], Outputs [ 0.96431516], Targets [1]
Epoch 1800, Inputs [0 1], Outputs [ 0.97004973], Targets [1]
Epoch 1900, Inputs [1 0], Outputs [ 0.96616225], Targets [1]
The dataset I am using is:

0 0 0
1 0 1
0 1 1
1 1 1
The training set is read with a function from a helper file, but that function is not relevant to the network itself.
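The helper module itself isn't shown; judging only from how it is called, a minimal hypothetical sketch of helper.py (the exact implementation is an assumption) could be:

import numpy as np

# hypothetical helper.py, reconstructed from the way the script below uses it
def tanh(x):
    return np.tanh(x)

def dtanh(y):
    # derivative of tanh expressed via the already-activated output y,
    # matching how the network calls dactiv on layer outputs
    return 1.0 - y ** 2

def readInput(file_name, input_size, output_size):
    # assumes one whitespace-separated row per sample: inputs first, then targets
    data = np.loadtxt(file_name, dtype=int)
    return data[:, :input_size], data[:, input_size:input_size + output_size]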

import numpy as np
import helper

FILE_NAME = 'data.txt'
EPOCHS = 2000
TESTING_FREQ = 5
LEARNING_RATE = 0.2

INPUT_SIZE = 2
HIDDEN_LAYERS = [5]
OUTPUT_SIZE = 1


class Classifier:
    def __init__(self, layer_sizes):
        np.set_printoptions(suppress=True)

        self.activ = helper.tanh
        self.dactiv = helper.dtanh

        network = list()
        for i in range(1, len(layer_sizes)):
            layer = dict()
            layer['weights'] = np.random.randn(layer_sizes[i], layer_sizes[i-1])
            layer['biases'] = np.random.randn(layer_sizes[i])
            network.append(layer)

        self.network = network

    def forward_propagate(self, x):
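        # propagate x through every layer; each hidden layer's tanh output becomes
        # the input to the next layer, and the final layer goes through tanh as well,
        # so predictions lie in (-1, 1)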
        for i in range(0, len(self.network)):
            self.network[i]['outputs'] = self.network[i]['weights'].dot(x) + self.network[i]['biases']
            if i != len(self.network)-1:
                self.network[i]['outputs'] = x = self.activ(self.network[i]['outputs'])
            else:
                self.network[i]['outputs'] = self.activ(self.network[i]['outputs'])
        return self.network[-1]['outputs']

    def backpropagate_error(self, x, targets):
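        # note: this starts with a fresh forward pass before computing any deltas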
        self.forward_propagate(x)
        self.network[-1]['deltas'] = (self.network[-1]['outputs'] - targets) * self.dactiv(self.network[-1]['outputs'])
        for i in reversed(range(len(self.network)-1)):
            self.network[i]['deltas'] = self.network[i+1]['deltas'].dot(self.network[i+1]['weights'] * self.dactiv(self.network[i]['outputs']))

    def adjust_weights(self, inputs, learning_rate):
        self.network[0]['weights'] -= learning_rate * np.atleast_2d(self.network[0]['deltas']).T.dot(np.atleast_2d(inputs))
        self.network[0]['biases'] -= learning_rate * self.network[0]['deltas']
        for i in range(1, len(self.network)):
            self.network[i]['weights'] -= learning_rate * np.atleast_2d(self.network[i]['deltas']).T.dot(np.atleast_2d(self.network[i-1]['outputs']))
            self.network[i]['biases'] -= learning_rate * self.network[i]['deltas']

    def train(self, inputs, targets, epochs, testfreq, lrate):
        for epoch in range(epochs):
            i = np.random.randint(0, len(inputs))
            if epoch % testfreq == 0:
                predictions = self.forward_propagate(inputs[i])
                print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i]))
            self.backpropagate_error(inputs[i], targets[i])
            self.adjust_weights(inputs[i], lrate)


inputs, outputs = helper.readInput(FILE_NAME, INPUT_SIZE, OUTPUT_SIZE)
print('Input data: {0}'.format(inputs))
print('Output targets: {0}\n'.format(outputs))
np.random.seed(1)

nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE])

nn.train(inputs, outputs, EPOCHS, TESTING_FREQ, LEARNING_RATE)

The main mistake is that you only run the forward pass 20% of the time, i.e. when epoch % testfreq == 0:

for epoch in range(epochs):
  i = np.random.randint(0, len(inputs))
  if epoch % testfreq == 0:
    predictions = self.forward_propagate(inputs[i])
    print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i]))
  self.backpropagate_error(inputs[i], targets[i])
  self.adjust_weights(inputs[i], lrate)
When I pull predictions = self.forward_propagate(inputs[i]) out of the if, I get better results much faster:

Epoch 100, Inputs [0 1], Outputs [ 0.80317447], Targets 1
Epoch 105, Inputs [1 1], Outputs [ 0.96340466], Targets 1
Epoch 110, Inputs [1 1], Outputs [ 0.96057278], Targets 1
Epoch 115, Inputs [1 0], Outputs [ 0.87960599], Targets 1
Epoch 120, Inputs [1 1], Outputs [ 0.97725825], Targets 1
Epoch 125, Inputs [1 0], Outputs [ 0.89433666], Targets 1
Epoch 130, Inputs [0 0], Outputs [ 0.03539024], Targets 0
Epoch 135, Inputs [0 1], Outputs [ 0.92888141], Targets 1
Also note that the term epoch usually means a single pass over all of the training data; since there are 4 training cases here, you are effectively doing 4x fewer epochs.
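For illustration, here is a sketch of a train() variant (my suggestion, not what the OP ran) in which one epoch is a full pass over all training examples:

    def train(self, inputs, targets, epochs, testfreq, lrate):
        for epoch in range(epochs):
            # one epoch = one sweep over every training example
            for i in range(len(inputs)):
                self.backpropagate_error(inputs[i], targets[i])
                self.adjust_weights(inputs[i], lrate)
            if epoch % testfreq == 0:
                i = np.random.randint(0, len(inputs))
                predictions = self.forward_propagate(inputs[i])
                print('Epoch %s, Inputs %s, Outputs %s, Targets %s' % (epoch, inputs[i], predictions, targets[i]))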

Update

I wasn't paying attention to the details and missed a couple of subtle yet important notes:

  • The training data in the question represents OR, not XOR, so my results above are for learning the OR operation
  • The backward pass also performs a forward pass (so this is not a bug, rather a surprising implementation detail)
Knowing this, I updated the data and checked the script once again. Running the training for 10000 iterations gives a mean error of about 0.001, so the model is learning; it is simply not as fast as it could be.

A simple neural network (with no built-in normalization mechanism) is quite sensitive to particular hyperparameters, such as the initialization and the learning rate. I tried various values manually, and here is what worked:

# slightly bigger learning rate
LEARNING_RATE = 0.3
...
# slightly bigger init variation of weights
layer['weights'] = np.random.randn(layer_sizes[i], layer_sizes[i-1]) * 2.0
This gives the following performance:

...
Epoch 960, Inputs [1 1], Outputs [ 0.01392014], Targets 0
Epoch 970, Inputs [0 0], Outputs [ 0.04342895], Targets 0
Epoch 980, Inputs [1 0], Outputs [ 0.96471654], Targets 1
Epoch 990, Inputs [1 1], Outputs [ 0.00084511], Targets 0
Epoch 1000, Inputs [0 0], Outputs [ 0.01585915], Targets 0
Epoch 1010, Inputs [1 1], Outputs [-0.004097], Targets 0
Epoch 1020, Inputs [1 1], Outputs [ 0.01898956], Targets 0
Epoch 1030, Inputs [0 0], Outputs [ 0.01254217], Targets 0
Epoch 1040, Inputs [1 1], Outputs [ 0.01429213], Targets 0
Epoch 1050, Inputs [0 1], Outputs [ 0.98293925], Targets 1
...
Epoch 1920, Inputs [1 1], Outputs [-0.00043072], Targets 0
Epoch 1930, Inputs [0 1], Outputs [ 0.98544288], Targets 1
Epoch 1940, Inputs [1 0], Outputs [ 0.97682002], Targets 1
Epoch 1950, Inputs [1 0], Outputs [ 0.97684186], Targets 1
Epoch 1960, Inputs [0 0], Outputs [-0.00141565], Targets 0
Epoch 1970, Inputs [0 0], Outputs [-0.00097559], Targets 0
Epoch 1980, Inputs [0 1], Outputs [ 0.98548381], Targets 1
Epoch 1990, Inputs [1 0], Outputs [ 0.97721286], Targets 1

The average accuracy is close to 98.5% after 1000 iterations and 99.1% after 2000 iterations. That is a bit slower than promised, but good enough. I'm sure it could be tuned further, but that is not the goal of this toy exercise. After all, tanh is far from the best activation function, and classification problems are better solved with cross-entropy loss (rather than an L2 loss). So I wouldn't worry too much about the performance of this particular network and would move on to logistic regression; that will definitely be better in terms of learning speed.
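To make the last point concrete, here is a minimal, self-contained sketch (my own illustration, not the OP's code) of the same 2-5-1 architecture on XOR, but with a sigmoid output and binary cross-entropy loss, for which the output-layer error term reduces to the plain difference p - y:

import numpy as np

np.random.seed(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = np.random.randn(2, 5), np.zeros(5)        # hidden layer (tanh)
W2, b2 = np.random.randn(5, 1), np.zeros(1)        # output layer (sigmoid)
lr = 0.5

for epoch in range(2000):
    # forward pass over the full batch: one epoch = one pass over all data
    h = np.tanh(X.dot(W1) + b1)
    p = sigmoid(h.dot(W2) + b2)

    # backward pass: cross-entropy + sigmoid yields the simple (p - y) error
    d_out = (p - y) / len(X)
    d_hid = d_out.dot(W2.T) * (1.0 - h ** 2)       # tanh derivative

    W2 -= lr * h.T.dot(d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T.dot(d_hid)
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(p, 3))   # predicted probabilities for the four XOR patterns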

Have you tried other learning rates? 0.2 may be too low, and it seems a bit unstable.
@eventHandler I've updated the post. Based on the benchmark, it isn't converging fast enough or accurately enough.
I think backpropagate() runs the forward pass, no?
Are you sure you are training XOR (Inputs [1] ... Targets 1)? Admittedly, the OP describes OR logic in their dataset. Still, looking at the OP's output (Inputs [1] ... Targets [0]), they are training XOR, as they say in the title.