
Python Backprop Implementation Problem


Here's what I need to do. I have a black-and-white image (100x100 px):

I'm supposed to train a neural network on this image. The inputs are the x and y coordinates of a pixel (from 0 to 99), and the output is 1 (white) or 0 (black).

Once the network has learned, I want it to reproduce the image from its weights and get an image as close to the original as possible.

Here is my backprop implementation:

import os
import math
from PIL import Image
import random

#------------------------------ class definitions

class Weight:
    def __init__(self, fromNeuron, toNeuron):
        self.value = random.uniform(-0.5, 0.5)
        self.fromNeuron = fromNeuron
        self.toNeuron = toNeuron
        fromNeuron.outputWeights.append(self)
        toNeuron.inputWeights.append(self)
        self.delta = 0.0 # accumulated delta; used to adjust the weight value after each training cycle

    def calculateDelta(self, network):
        self.delta += self.fromNeuron.value * self.toNeuron.error

class Neuron:
    def __init__(self):
        self.value = 0.0        # the output
        self.idealValue = 0.0   # the ideal output
        self.error = 0.0        # error between output and ideal output
        self.inputWeights = []
        self.outputWeights = []

    def activate(self, network):
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.fromNeuron.value
        # sigmoid function
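        # clamp extreme sums so math.exp below cannot overflow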
        if x < -320:
            self.value = 0
        elif x > 320:
            self.value = 1
        else:
            self.value = 1 / (1 + math.exp(-x))

class Layer:
    def __init__(self, neurons):
        self.neurons = neurons

    def activate(self, network):
        for neuron in self.neurons:
            neuron.activate(network)

class Network:
    def __init__(self, layers, learningRate):
        self.layers = layers
        self.learningRate = learningRate # the rate at which the network learns
        self.weights = []
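        # fully connect the input layer to the hidden layer and the hidden layer to the output layer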
        for hiddenNeuron in self.layers[1].neurons:
            for inputNeuron in self.layers[0].neurons:
                self.weights.append(Weight(inputNeuron, hiddenNeuron))
            for outputNeuron in self.layers[2].neurons:
                self.weights.append(Weight(hiddenNeuron, outputNeuron))

    def setInputs(self, inputs):
        self.layers[0].neurons[0].value = float(inputs[0])
        self.layers[0].neurons[1].value = float(inputs[1])

    def setExpectedOutputs(self, expectedOutputs):
        self.layers[2].neurons[0].idealValue = expectedOutputs[0]

    def calculateOutputs(self, expectedOutputs):
        self.setExpectedOutputs(expectedOutputs)
        self.layers[1].activate(self) # activation function for hidden layer
        self.layers[2].activate(self) # activation function for output layer        

    def calculateOutputErrors(self):
        for neuron in self.layers[2].neurons:
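            # (ideal - actual) scaled by the sigmoid derivative value * (1 - value)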
            neuron.error = (neuron.idealValue - neuron.value) * neuron.value * (1 - neuron.value)

    def calculateHiddenErrors(self):
        for neuron in self.layers[1].neurons:
            error = 0.0
            for weight in neuron.outputWeights:
                error += weight.toNeuron.error * weight.value
            neuron.error = error * neuron.value * (1 - neuron.value)

    def calculateDeltas(self):
        for weight in self.weights:
            weight.calculateDelta(self)

    def train(self, inputs, expectedOutputs):
        self.setInputs(inputs)
        self.calculateOutputs(expectedOutputs)
        self.calculateOutputErrors()
        self.calculateHiddenErrors()
        self.calculateDeltas()

    def learn(self):
        for weight in self.weights:
            weight.value += self.learningRate * weight.delta

    def calculateSingleOutput(self, inputs):
        self.setInputs(inputs)
        self.layers[1].activate(self)
        self.layers[2].activate(self)
        #return round(self.layers[2].neurons[0].value, 0)
        return self.layers[2].neurons[0].value


#------------------------------ initialize objects etc

inputLayer = Layer([Neuron() for n in range(2)])
hiddenLayer = Layer([Neuron() for n in range(10)])
outputLayer = Layer([Neuron() for n in range(1)])

learningRate = 0.4

network = Network([inputLayer, hiddenLayer, outputLayer], learningRate)


# let's get the training set
os.chdir("D:/stuff")
image = Image.open("backprop-input.gif")
pixels = image.load()
bbox = image.getbbox()
width = 5   # hardcoded for testing; bbox[2] is the full image width
height = 5  # hardcoded for testing; bbox[3] is the full image height

trainingInputs = []
trainingOutputs = []
b = w = 0
for x in range(0, width):
    for y in range(0, height):
        if (0, 0, 0, 255) == pixels[x, y]:
            color = 0
            b += 1
        elif (255, 255, 255, 255) == pixels[x, y]:
            color = 1
            w += 1
        trainingInputs.append([float(x), float(y)])
        trainingOutputs.append([float(color)])

print "\nOriginal image ... Black:"+str(b)+" White:"+str(w)+"\n"

#------------------------------ let's train

for i in range(500):
    for j in range(len(trainingOutputs)):
        network.train(trainingInputs[j], trainingOutputs[j])
        network.learn()
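    # reset the accumulated deltas once per epoch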
    for weight in network.weights:
        weight.delta = 0.0

#------------------------------ let's check

b = w = 0
for x in range(0, width):
    for y in range(0, height):
        out = network.calculateSingleOutput([float(x), float(y)])
        if 0.0 == round(out):
            color = (0, 0, 0, 255)
            b += 1
        elif 1.0 == round(out):
            color = (255, 255, 255, 255)
            w += 1
        pixels[x, y] = color
        #print out

print "\nAfter learning the network thinks ... Black:"+str(b)+" White:"+str(w)+"\n"
For example, the output is:

0.0330125791296   # this should be 0, OK
0.953539182136    # this should be 1, OK
0.971854575477    # this should be 1, OK
0.00046146137467  # this should be 0, OK
0.896699762781    # this should be 1, OK
0.112909223162    # this should be 0, OK
0.00034058462280  # this should be 0, OK
0.0929886299643   # this should be 0, OK
0.940489647869    # this should be 1, OK
In other words, the network guesses all pixels correctly (both black and white). Why, then, when I use the actual pixels from the image instead of a hard-coded training set like the one above, does it claim that all pixels should be black?

I've tried changing the number of neurons in the hidden layer (up to 100 neurons), without success.

This is a homework assignment.


This is also a follow-up to my earlier question about backprop.

It's been a while, but I did get a degree in this area, so hopefully some of it has stuck.

From what I can tell, you're overloading your middle-layer neurons with inputs. That is, your input set consists of 10,000 discrete values (100 px x 100 px), and you're trying to encode those 10,000 values into 10 neurons. That level of encoding is hard (I suspect it's possible, but certainly hard); at the very least, you'd need a LOT of training (more than 500 runs) to get it to reproduce the image reasonably. Even with 100 neurons in the middle layer, you're looking at a relatively dense level of compression (100 pixels to 1 neuron).
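
To put a rough number on that compression, here is a back-of-the-envelope sketch (assuming the 2-10-1 topology from the code above):

# rough capacity check for the 2-10-1 network in the question
inputs, hidden, outputs = 2, 10, 1
weights = inputs * hidden + hidden * outputs  # 30 trainable weights
samples = 100 * 100                           # 10,000 pixel samples
print "weights:", weights, "samples:", samples
print "samples per weight:", samples / weights  # ~333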

As for how to fix this; well, that's tricky. You could increase the number of middle-layer neurons dramatically and get a reasonable effect, but of course training would then take a long time. However, I think a different solution may be possible: if you can, consider using polar coordinates instead of Cartesian coordinates for the input. A quick eyeballing of the input pattern suggests a high degree of symmetry; effectively, you'd be looking at a linear pattern with a repeated, predictable deformation along the angular coordinate, which seems like it would encode nicely in a small number of middle-layer neurons.
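
If you want to try the polar idea, a minimal sketch of the preprocessing step might look like this (the centre point (49.5, 49.5) is my assumption for a 100x100 image, not something given in the question):

import math

def to_polar(x, y, cx=49.5, cy=49.5):
    # shift the origin to the (assumed) image centre
    dx, dy = x - cx, y - cy
    r = math.sqrt(dx * dx + dy * dy)  # radial coordinate
    theta = math.atan2(dy, dx)        # angular coordinate, in radians
    return [r, theta]

# e.g. instead of trainingInputs.append([float(x), float(y)]):
# trainingInputs.append(to_polar(float(x), float(y)))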

This stuff is tricky; going for a general solution to pattern encoding (as your original approach does) gets very complex and usually (even with large numbers of middle-layer neurons) requires a lot of training passes. On the other hand, some advance heuristic task breakdown and a little problem redefinition (i.e., an upfront conversion from Cartesian to polar coordinates) can yield very good solutions for well-defined problem sets. Therein, of course, lies the perennial rub: general solutions are hard to come by, but slightly more specific solutions can be quite nice indeed.


Interesting stuff, in any event.

Why did you tag this MATLAB? It looks like you're only using Python.

Well, I thought MATLAB is often used to program neural networks and other AI things, so I figured some MATLAB programmers might be able to spot the mistake in my algorithm, even though it's written in Python.

@Amro: thx, the symmetry shows up very clearly in polar coordinates.

@McWafflestix: when solving a machine learning problem, the most important thing is to have useful features (the preprocessing step); algorithmic considerations come second (you can usually find the best parameters for the model with some kind of cross-validation; see the sketch below).

Thanks for the suggestion. I'll give it a try, but I won't be able to start until the weekend. I'm really busy.

@RichardKnop: no problem, glad to help. Do let us know how it goes!

@Amro: exactly. The OP stated the inputs are Cartesian coordinates; that's why I was a bit hesitant about polar. If the problem's constraint were that the inputs must be a coordinate pair (i.e., Cartesian or polar), polar would be a slam dunk, since a linear transformation makes the most effective use of the given inputs' features.
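
As a rough illustration of the cross-validation idea from that comment (a sketch only; the fold count, epoch count, and candidate sizes below are arbitrary choices of mine, not from the thread), you could hold out part of the pixel set and compare hidden-layer sizes by held-out error:

import random

def build_network(hiddenSize, learningRate=0.4):
    # rebuild the question's 2-N-1 topology with fresh random weights
    return Network([Layer([Neuron() for n in range(2)]),
                    Layer([Neuron() for n in range(hiddenSize)]),
                    Layer([Neuron() for n in range(1)])], learningRate)

def crossValidate(hiddenSize, inputs, outputs, epochs=100, folds=5):
    indices = range(len(inputs))
    random.shuffle(indices)
    foldSize = len(indices) / folds
    errors = 0
    for f in range(folds):
        test = set(indices[f * foldSize:(f + 1) * foldSize])
        train = [i for i in indices if i not in test]
        net = build_network(hiddenSize)
        for e in range(epochs):
            for i in train:
                net.train(inputs[i], outputs[i])
                net.learn()
            for weight in net.weights:
                weight.delta = 0.0
        for i in test:
            if round(net.calculateSingleOutput(inputs[i])) != outputs[i][0]:
                errors += 1
    return errors  # total misclassified held-out pixels across folds

# e.g. for size in (10, 25, 50):
#     print size, crossValidate(size, trainingInputs, trainingOutputs)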