
Python 2.7 Keras: Chance-level performance

Tags: python-2.7, keras, mnist

I have been trying to train a large neural network, but the loss was not decreasing. So I wanted to try a small ConvNet on the MNIST dataset, since it ships with Keras and some neural networks classify it extremely well.

Here is the program I wrote for the job:

import keras
import numpy as np
from keras.models import Sequential
from keras.datasets import mnist
from keras.layers import Dense, Input, Flatten, Activation
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.core import Dropout
from keras.optimizers import RMSprop
from keras import losses
from keras import optimizers
from keras.utils import plot_model

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = np.expand_dims(x_train, axis=3)
x_test = np.expand_dims(x_test, axis=3)
y_train = keras.utils.to_categorical(y_train, num_classes=10)
y_test = keras.utils.to_categorical(y_test, num_classes=10)
model = Sequential()

model.add(Conv2D(64,(2,2),padding='same',activation='relu',input_shape=(28,28,1)))
model.add(Conv2D(64,(2,2),padding='same',activation='relu'))
model.add(MaxPooling2D((2,2),strides=(2,2)))
model.add(Conv2D(128,(2,2),padding='same',activation='relu'))
model.add(Conv2D(128,(2,2),padding='same',activation='relu'))
model.add(MaxPooling2D((2,2),strides=(2,2)))
model.add(Conv2D(256,(2,2),padding='same',activation='relu'))
model.add(Conv2D(256,(2,2),padding='same',activation='relu'))
model.add(Flatten())
model.add(Dense(2048, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2048, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()

print('Creating model graph!\n')
plot_model(model, to_file='mnist-convnet.pdf', show_shapes=True)
print('Created model graph!\n')

model.fit(x_train,y_train,epochs=50,batch_size=500,shuffle=True,validation_split=0.2,verbose=1)

model.evaluate(x_test,y_test,batch_size=1000,verbose=1)
I added a fourth dimension to the x_train and x_test data so that they could be used with the Conv2D layers; the shapes the network then sees are (60000, 28, 28, 1) for x_train and (10000, 28, 28, 1) for x_test.
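
As a sanity check on that step, here is a minimal sketch (assuming the standard mnist.load_data() arrays) showing that np.expand_dims adds the trailing channel axis and gives the same result as an explicit reshape:

import numpy as np
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)                                 # (60000, 28, 28)

with_channel = np.expand_dims(x_train, axis=3)       # (60000, 28, 28, 1)
reshaped = x_train.reshape(x_train.shape + (1,))     # (60000, 28, 28, 1)
print(np.array_equal(with_channel, reshaped))        # True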

The problem is that the loss does not go down and the accuracy stays at chance level. I stopped the run during the seventh epoch because the loss was not decreasing at all. Here is the output of the seventh epoch:

Epoch 7/50

  500/48000 [..............................] - ETA: 2706s - loss: 14.2484 - acc: 0.1160
 1000/48000 [..............................] - ETA: 2474s - loss: 14.3612 - acc: 0.1090
 1500/48000 [..............................] - ETA: 2341s - loss: 14.4418 - acc: 0.1040
 2000/48000 [>.............................] - ETA: 2391s - loss: 14.5385 - acc: 0.0980
 2500/48000 [>.............................] - ETA: 2357s - loss: 14.5643 - acc: 0.0964
 3000/48000 [>.............................] - ETA: 2350s - loss: 14.5492 - acc: 0.0973
 3500/48000 [=>............................] - ETA: 2275s - loss: 14.5938 - acc: 0.0946
 4000/48000 [=>............................] - ETA: 2259s - loss: 14.5828 - acc: 0.0952
 4500/48000 [=>............................] - ETA: 2222s - loss: 14.5457 - acc: 0.0976
 5000/48000 [==>...........................] - ETA: 2189s - loss: 14.5482 - acc: 0.0974
 5500/48000 [==>...........................] - ETA: 2225s - loss: 14.5414 - acc: 0.0978
 6000/48000 [==>...........................] - ETA: 2303s - loss: 14.5385 - acc: 0.0980
 6500/48000 [===>..........................] - ETA: 2263s - loss: 14.5261 - acc: 0.0988
 7000/48000 [===>..........................] - ETA: 2229s - loss: 14.5224 - acc: 0.0990
 7500/48000 [===>..........................] - ETA: 2231s - loss: 14.5342 - acc: 0.0983
 8000/48000 [====>.........................] - ETA: 2212s - loss: 14.5163 - acc: 0.0994
 8500/48000 [====>.........................] - ETA: 2196s - loss: 14.5195 - acc: 0.0992
 9000/48000 [====>.........................] - ETA: 2146s - loss: 14.5457 - acc: 0.0976
 9500/48000 [====>.........................] - ETA: 2106s - loss: 14.5555 - acc: 0.0969
10000/48000 [=====>........................] - ETA: 2051s - loss: 14.5417 - acc: 0.0978
10500/48000 [=====>........................] - ETA: 2031s - loss: 14.5539 - acc: 0.0970
11000/48000 [=====>........................] - ETA: 1987s - loss: 14.5561 - acc: 0.0969
11500/48000 [======>.......................] - ETA: 1949s - loss: 14.5497 - acc: 0.0973
12000/48000 [======>.......................] - ETA: 1927s - loss: 14.5479 - acc: 0.0974
12500/48000 [======>.......................] - ETA: 1892s - loss: 14.5450 - acc: 0.0976
13000/48000 [=======>......................] - ETA: 1851s - loss: 14.5373 - acc: 0.0981
13500/48000 [=======>......................] - ETA: 1817s - loss: 14.5349 - acc: 0.0982
14000/48000 [=======>......................] - ETA: 1800s - loss: 14.5431 - acc: 0.0977
14500/48000 [========>.....................] - ETA: 1771s - loss: 14.5363 - acc: 0.0981
15000/48000 [========>.....................] - ETA: 1757s - loss: 14.5224 - acc: 0.0990
15500/48000 [========>.....................] - ETA: 1728s - loss: 14.5343 - acc: 0.0983
16000/48000 [=========>....................] - ETA: 1704s - loss: 14.5274 - acc: 0.0987
16500/48000 [=========>....................] - ETA: 1666s - loss: 14.5278 - acc: 0.0987
17000/48000 [=========>....................] - ETA: 1603s - loss: 14.5243 - acc: 0.0989
17500/48000 [=========>....................] - ETA: 1543s - loss: 14.5219 - acc: 0.0990
18000/48000 [==========>...................] - ETA: 1486s - loss: 14.5215 - acc: 0.0991
18500/48000 [==========>...................] - ETA: 1431s - loss: 14.5115 - acc: 0.0997
19000/48000 [==========>...................] - ETA: 1379s - loss: 14.4986 - acc: 0.1005
19500/48000 [===========>..................] - ETA: 1330s - loss: 14.4848 - acc: 0.1013
20000/48000 [===========>..................] - ETA: 1283s - loss: 14.4845 - acc: 0.1013
20500/48000 [===========>..................] - ETA: 1238s - loss: 14.4803 - acc: 0.1016
21000/48000 [============>.................] - ETA: 1194s - loss: 14.4694 - acc: 0.1023
21500/48000 [============>.................] - ETA: 1152s - loss: 14.4725 - acc: 0.1021
22000/48000 [============>.................] - ETA: 1112s - loss: 14.4814 - acc: 0.1015
22500/48000 [=============>................] - ETA: 1074s - loss: 14.4798 - acc: 0.1016
23000/48000 [=============>................] - ETA: 1037s - loss: 14.4684 - acc: 0.1023
23500/48000 [=============>................] - ETA: 1000s - loss: 14.4706 - acc: 0.1022
24000/48000 [==============>...............] - ETA: 966s - loss: 14.4720 - acc: 0.1021 
24500/48000 [==============>...............] - ETA: 933s - loss: 14.4727 - acc: 0.1021
25000/48000 [==============>...............] - ETA: 927s - loss: 14.4695 - acc: 0.1023
25500/48000 [==============>...............] - ETA: 918s - loss: 14.4702 - acc: 0.1022
26000/48000 [===============>..............] - ETA: 902s - loss: 14.4678 - acc: 0.1024
26500/48000 [===============>..............] - ETA: 885s - loss: 14.4655 - acc: 0.1025
27000/48000 [===============>..............] - ETA: 875s - loss: 14.4609 - acc: 0.1028
27500/48000 [================>.............] - ETA: 843s - loss: 14.4541 - acc: 0.1032
28000/48000 [================>.............] - ETA: 812s - loss: 14.4631 - acc: 0.1027
28500/48000 [================>.............] - ETA: 782s - loss: 14.4655 - acc: 0.1025
29000/48000 [=================>............] - ETA: 753s - loss: 14.4674 - acc: 0.1024
29500/48000 [=================>............] - ETA: 725s - loss: 14.4686 - acc: 0.1023
30000/48000 [=================>............] - ETA: 697s - loss: 14.4697 - acc: 0.1023
30500/48000 [==================>...........] - ETA: 670s - loss: 14.4698 - acc: 0.1023
31000/48000 [==================>...........] - ETA: 644s - loss: 14.4720 - acc: 0.1021
31500/48000 [==================>...........] - ETA: 619s - loss: 14.4751 - acc: 0.1019
32000/48000 [===================>..........] - ETA: 594s - loss: 14.4791 - acc: 0.1017
32500/48000 [===================>..........] - ETA: 569s - loss: 14.4745 - acc: 0.1020
33000/48000 [===================>..........] - ETA: 545s - loss: 14.4755 - acc: 0.1019
33500/48000 [===================>..........] - ETA: 522s - loss: 14.4779 - acc: 0.1018
34000/48000 [====================>.........] - ETA: 499s - loss: 14.4764 - acc: 0.1019
34500/48000 [====================>.........] - ETA: 477s - loss: 14.4792 - acc: 0.1017
35000/48000 [====================>.........] - ETA: 455s - loss: 14.4805 - acc: 0.1016
35500/48000 [=====================>........] - ETA: 433s - loss: 14.4808 - acc: 0.1016
36000/48000 [=====================>........] - ETA: 412s - loss: 14.4727 - acc: 0.1021
36500/48000 [=====================>........] - ETA: 391s - loss: 14.4736 - acc: 0.1020
37000/48000 [======================>.......] - ETA: 371s - loss: 14.4697 - acc: 0.1023
37500/48000 [======================>.......] - ETA: 351s - loss: 14.4672 - acc: 0.1024
38000/48000 [======================>.......] - ETA: 332s - loss: 14.4651 - acc: 0.1026
38500/48000 [=======================>......] - ETA: 312s - loss: 14.4678 - acc: 0.1024
39000/48000 [=======================>......] - ETA: 294s - loss: 14.4707 - acc: 0.1022
39500/48000 [=======================>......] - ETA: 275s - loss: 14.4720 - acc: 0.1021
40000/48000 [========================>.....] - ETA: 257s - loss: 14.4720 - acc: 0.1021
40500/48000 [========================>.....] - ETA: 239s - loss: 14.4756 - acc: 0.1019
41000/48000 [========================>.....] - ETA: 221s - loss: 14.4689 - acc: 0.1023
41500/48000 [========================>.....] - ETA: 204s - loss: 14.4674 - acc: 0.1024
42000/48000 [=========================>....] - ETA: 187s - loss: 14.4644 - acc: 0.1026
42500/48000 [=========================>....] - ETA: 170s - loss: 14.4642 - acc: 0.1026
43000/48000 [=========================>....] - ETA: 153s - loss: 14.4609 - acc: 0.1028
43500/48000 [==========================>...] - ETA: 137s - loss: 14.4600 - acc: 0.1029
44000/48000 [==========================>...] - ETA: 121s - loss: 14.4605 - acc: 0.1028
44500/48000 [==========================>...] - ETA: 105s - loss: 14.4588 - acc: 0.1029
45000/48000 [===========================>..] - ETA: 89s - loss: 14.4590 - acc: 0.1029 
45500/48000 [===========================>..] - ETA: 74s - loss: 14.4592 - acc: 0.1029
46000/48000 [===========================>..] - ETA: 59s - loss: 14.4537 - acc: 0.1033
46500/48000 [============================>.] - ETA: 44s - loss: 14.4539 - acc: 0.1032
47000/48000 [============================>.] - ETA: 29s - loss: 14.4514 - acc: 0.1034
47500/48000 [============================>.] - ETA: 14s - loss: 14.4516 - acc: 0.1034
48000/48000 [==============================] - 1424s - loss: 14.4499 - acc: 0.1035 - val_loss: 14.3760 - val_acc: 0.1081

I have no idea why this is happening! What should I do to get the loss to decrease?

Turn the batch size down; 500 is very large. Maybe also follow the same data handling as the example.

I tried reducing the batch size to 50, changing the Conv2D kernels from (2,2) to (3,3), and, in a second version of the same code, reducing the LR from 0.001 to 0.00005. None of these seemed to work. Unlike the example, I used expand_dims rather than reshape; I don't know whether that affects how the program runs, since the data is in np arrays and expand_dims is an np function. I also don't know why they divide the array values by 255.

^ Changing the learning rate from 0.001 to 0.00005 actually did work. I had made a mistake in how I called the modified RMSprop. I kept the data handling I was using.
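
For reference, a minimal sketch of the two changes discussed in these comments, assuming the Sequential model defined in the question is kept as-is: scale the pixel values by 255 (as the example does) and pass an RMSprop instance with the lower learning rate instead of the 'rmsprop' string, so the reduced rate actually takes effect.

import numpy as np
from keras.datasets import mnist
from keras.optimizers import RMSprop

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Add the channel axis and scale pixel values from [0, 255] into [0, 1]
x_train = np.expand_dims(x_train, axis=3).astype('float32') / 255
x_test = np.expand_dims(x_test, axis=3).astype('float32') / 255

# 'model' is the Sequential model from the question; compile it with an
# explicit RMSprop instance so the lower learning rate is actually applied
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(lr=0.00005),
              metrics=['accuracy'])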