Simple neural network in Python that gives the sum of the input variables as output?

python, neural-network

Could someone please build a simple neural network that gives the sum of the input variables as its output? For example: if the input variables are X1, X2, X3, the output should be Y = X1 + X2 + X3.

A simple Python program using matrix multiplication would be very helpful.

Thank you very much.
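For reference, because Y = X1 + X2 + X3 is a linear function of the inputs, even a single linear layer (one matrix multiplication, no activation function) can learn it with plain gradient descent. The following is only an illustrative sketch; the data range, learning rate and iteration count are arbitrary choices:

import numpy as np

np.random.seed(0)
X = np.random.uniform(1, 3, size=(100, 3))   # 100 samples of X1, X2, X3
y = np.sum(X, axis=1, keepdims=True)         # target: Y = X1 + X2 + X3

W = np.random.randn(3, 1) * 0.1              # single weight matrix, shape (3, 1)
lr = 0.05                                    # learning rate

for i in range(2000):
    pred = X.dot(W)                          # forward pass: one matrix multiplication
    grad = X.T.dot(pred - y) / len(X)        # gradient of the mean squared error
    W -= lr * grad                           # gradient descent step

print(W.ravel())                             # converges to roughly [1, 1, 1]
print(np.array([[3, 1, 0]]).dot(W))          # roughly 4

Because the target is exactly linear, no hidden layer or sigmoid is needed here, so there is nothing for the output to saturate against.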

This is the code I tried to apply. It is just a modified version of the "iamtrask" code, but it does not give me the correct answer, and when I increase the number of test cases (the set size) it saturates at [1.].
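For reference, the sigmoid used in this code can never exceed 1, and it is effectively pinned at 1 for even moderately large pre-activations, which matches the saturation described above. A quick check:

import numpy as np
print(1 / (1 + np.exp(-np.array([0.0, 2.0, 5.0, 10.0]))))
# roughly [0.5, 0.881, 0.993, 0.99995]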

"A simple neural network implementation that describes the inner workings of backpropagation", in 11 lines of code:

import numpy as np

X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
syn0 = 2*np.random.random((3,4)) - 1               # weights: input -> hidden
syn1 = 2*np.random.random((4,1)) - 1               # weights: hidden -> output
for j in range(60000):
    l1 = 1/(1+np.exp(-(np.dot(X,syn0))))           # hidden layer (sigmoid)
    l2 = 1/(1+np.exp(-(np.dot(l1,syn1))))          # output layer (sigmoid)
    l2_delta = (y - l2)*(l2*(1-l2))                # output error * sigmoid slope
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1-l1))  # backpropagated hidden error
    syn1 += l1.T.dot(l2_delta)                     # weight updates
    syn0 += X.T.dot(l1_delta)

I modified Trask's code in the following way:

import numpy as np

# sigmoid function
def nonlin(x,deriv=False):
    if(deriv==True):
        return x*(1-x)
    return 1/(1+np.exp(-x))

# input dataset: 100 rows of X1, X2, X3
# (note: randint's upper bound is exclusive, so randint(1,3) only draws 1 or 2)
X = np.random.randint(1,3,size=(100,3)).astype(int)
# rescale / normalize by (max element 3) * (number of inputs 3) = 9
X = X/(3*3)
# output dataset: the sum of the (already rescaled) inputs
y = np.sum(X, axis = 1, keepdims=True)
# normalizing again -- note that X was already divided by 9 above,
# so y ends up as the sum of the original inputs divided by 81
y = y/(3*3)

# seed the RNG so the weight initialization below is reproducible
np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

#Training
for iter in range(30000):

    # forward propagation
    l0 = X
    l1 = nonlin(np.dot(l0,syn0))
    l2 = nonlin(np.dot(l1,syn1))

    # how much did we miss?
    l2_error = y-l2

    #if (iter% 100) == 0:
    #    print ("Error:" + str(np.mean(np.abs(l2_error))))

    l2_delta = l2_error*nonlin(l2, deriv=True)

    l1_error = l2_delta.dot(syn1.T)

    # multiply how much we missed by the 
    # slope of the sigmoid at the values in l1
    l1_delta = l1_error * nonlin(l1,True)

    # update weights
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

#Predict
l0 = [[3,1,0]]    # should give 4 as the answer
                  # (note: unlike the training data X, this query is not divided by 9)
l1 = nonlin(np.dot(l0,syn0))
l2 = nonlin(np.dot(l1,syn1))
print(l2*(3*3))
# multiplying by (3*3) undoes the output normalization
The result is 4.157 (fairly accurate).
I think the problem lies in the normalization.
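If normalization is indeed the issue, the scaling has to be consistent: in the training code above, X is divided by 9 and y is divided by 9 again after summing (so y is effectively the original sum divided by 81), while the query [[3,1,0]] is not scaled at all. A minimal sketch of a prediction step that matches that training setup, assuming syn0, syn1 and nonlin from above are left unchanged:

l0 = np.array([[3, 1, 0]]) / (3 * 3)       # scale the query exactly like X
l1 = nonlin(np.dot(l0, syn0))
l2 = nonlin(np.dot(l1, syn1))
print(l2 * (3 * 3) * (3 * 3))              # undo both normalizations; ideally ~4

Keep in mind that randint(1,3) only ever draws 1s and 2s, so a query containing 3 and 0 lies outside the range the network was trained on, and the prediction may still be off.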

What have you done so far? Stack Overflow is not a site where we write your code for you; we can help you debug your own code when you run into problems. You can do this!

OK, I have added the code that I tried to modify and run to the question above.