Problem teaching Keras a basic multiply-by-two function

I just wanted to play around with Keras, but I'm having some trouble teaching it a basic function (multiply by two). My setup is below. Since I'm new to this, I added comments describing what I think happens at each step.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)
y_train = x_train * 2
model = Sequential()
model.add(Dense(32, input_dim=1, activation='sigmoid'))  # add a 32-node layer
model.add(Dense(32, activation='sigmoid'))               # add a second 32-node layer
model.add(Dense(1, activation='sigmoid'))                # add a final output layer
model.compile(loss='mse', optimizer='rmsprop')           # compile it with mean squared error loss

model.fit(x_train, y_train, epochs=10, batch_size=100)   # train
score = model.evaluate(x_train, y_train, batch_size=100)
print(score)
I get the following output:

1000/1000 [==============================] - 0s 355us/step - loss: 1334274.0375
Epoch 2/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333999.8250
Epoch 3/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333813.4062
Epoch 4/10
1000/1000 [==============================] - 0s 28us/step - loss: 1333679.2625
Epoch 5/10
1000/1000 [==============================] - 0s 27us/step - loss: 1333591.6750
Epoch 6/10
1000/1000 [==============================] - 0s 51us/step - loss: 1333522.0000
Epoch 7/10
1000/1000 [==============================] - 0s 23us/step - loss: 1333473.7000
Epoch 8/10
1000/1000 [==============================] - 0s 24us/step - loss: 1333440.6000
Epoch 9/10
1000/1000 [==============================] - 0s 29us/step - loss: 1333412.0250
Epoch 10/10
1000/1000 [==============================] - 0s 21us/step - loss: 1333390.5000
1000/1000 [==============================] - 0s 66us/step
['loss']
1333383.1143554687
The loss seems enormous for such a basic function, and I'm confused about why it can't learn it. Am I misunderstanding something, or doing something wrong?

  • Using sigmoid activation restricts the output to the range [0, 1], but your target outputs are in the range [0, 2000], so your network cannot learn. Try relu activation instead.
  • Try adam instead of rmsprop while debugging; it almost always works better.
  • Train for longer (a sketch combining all three changes follows this list).
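The revised code isn't shown in the post, so here is a minimal sketch of what those three changes might look like, assuming relu on the hidden layers, a default (linear) output layer, the adam optimizer, and 1000 epochs; the answer only mentions relu, and relu on the output would also work here since the targets are non-negative:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.linspace(1, 1000, 1000)
y_train = x_train * 2

model = Sequential()
model.add(Dense(32, input_dim=1, activation='relu'))  # relu instead of sigmoid
model.add(Dense(32, activation='relu'))               # relu instead of sigmoid
model.add(Dense(1))                                   # linear output, so predictions are not squashed into [0, 1]
model.compile(loss='mse', optimizer='adam')           # adam instead of rmsprop

model.fit(x_train, y_train, epochs=1000, batch_size=100)  # train for far more than 10 epochs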
Putting it all together, I get the following output:

Epoch 860/1000
1000/1000 [==============================] - 0s 29us/step - loss: 5.1868e-08
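A quick spot check (not shown in the post; it reuses the model variable from the sketch above) would confirm that the network has actually learned to double its input:

test_inputs = np.array([[1.0], [50.0], [123.0], [999.0]])
predictions = model.predict(test_inputs)
for x, y in zip(test_inputs.ravel(), predictions.ravel()):
    print(f"{x:7.1f} -> {y:9.3f}  (expected {2 * x:9.3f})")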