Python: feed-forward ANN stuck at 42% test accuracy on MNIST images

I'm training a plain feed-forward neural network on the MNIST dataset, but my model's accuracy is stuck at 42% on the validation data.

The data is a CSV with 60,000 rows (for the training data) and 785 columns, where the first column is the label.

Here is the code that splits the CSV data and converts it into 28x28 images:
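(A minimal sketch of what that step might look like, assuming pandas/NumPy; the names sep, labels_array, and their _t test counterparts are taken from the fit() call below, everything else is an assumption:)

import numpy as np
import pandas as pd

# Hypothetical loader: the CSV has 785 columns, the first being the
# label and the remaining 784 the pixel values of a 28x28 image.
train = pd.read_csv("mnist_train.csv", header=None).to_numpy()
labels = train[:, 0].astype(np.int32)              # integer labels, shape (60000,)
sep = (train[:, 1:] / 255.0).reshape(-1, 28, 28)   # images scaled to [0, 1]

# One-hot encode the labels to match the categorical_crossentropy loss below.
labels_array = np.eye(10)[labels]                  # shape (60000, 10)

# The test split (sep_t, labels_array_t) would be built the same way
# from the 10,000-row test CSV.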

And here is the network:

Dense = tf.keras.layers.Dense
fc_model = tf.keras.Sequential(
    [
      tf.keras.Input(shape=(28, 28)),
      tf.keras.layers.Flatten(),          # 28x28 -> 784
      Dense(128, activation='relu'),
      Dense(32, activation='relu'),
      Dense(10, activation='softmax')])   # one probability per digit class
fc_model.compile(optimizer="Adam", loss="categorical_crossentropy", metrics=["accuracy"])
history = fc_model.fit(sep, labels_array, batch_size=128,
                       validation_data=(sep_t, labels_array_t), epochs=35)
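(As an aside, the final held-out accuracy can also be read off directly with Keras evaluate(); a sketch reusing the validation arrays above:)

# Report accuracy on the held-out split used for validation above.
test_loss, test_acc = fc_model.evaluate(sep_t, labels_array_t, verbose=0)
print(f"Test accuracy: {test_acc:.2%}")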
And here are the results I get:

Train on 60000 samples, validate on 10000 samples
Epoch 1/35
60000/60000 [==============================] - 2s 31us/sample - loss: 1.8819 - accuracy: 0.3539 - val_loss: 1.6867 - val_accuracy: 0.4068
Epoch 2/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.6392 - accuracy: 0.4126 - val_loss: 1.6407 - val_accuracy: 0.4098
Epoch 3/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.5969 - accuracy: 0.4224 - val_loss: 1.6202 - val_accuracy: 0.4196
Epoch 4/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.5735 - accuracy: 0.4291 - val_loss: 1.6158 - val_accuracy: 0.4220
Epoch 5/35
60000/60000 [==============================] - 1s 25us/sample - loss: 1.5561 - accuracy: 0.4324 - val_loss: 1.6089 - val_accuracy: 0.4229
Epoch 6/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.5423 - accuracy: 0.4377 - val_loss: 1.6074 - val_accuracy: 0.4181
Epoch 7/35
60000/60000 [==============================] - 2s 25us/sample - loss: 1.5309 - accuracy: 0.4416 - val_loss: 1.6053 - val_accuracy: 0.4226
Epoch 8/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.5207 - accuracy: 0.4435 - val_loss: 1.6019 - val_accuracy: 0.4252
Epoch 9/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.5111 - accuracy: 0.4480 - val_loss: 1.6015 - val_accuracy: 0.4233
Epoch 10/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.5020 - accuracy: 0.4517 - val_loss: 1.6038 - val_accuracy: 0.4186
Epoch 11/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4954 - accuracy: 0.4530 - val_loss: 1.6096 - val_accuracy: 0.4209
Epoch 12/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4885 - accuracy: 0.4554 - val_loss: 1.6003 - val_accuracy: 0.4278
Epoch 13/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4813 - accuracy: 0.4573 - val_loss: 1.6072 - val_accuracy: 0.4221
Epoch 14/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4749 - accuracy: 0.4598 - val_loss: 1.6105 - val_accuracy: 0.4242
Epoch 15/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4693 - accuracy: 0.4616 - val_loss: 1.6160 - val_accuracy: 0.4213
Epoch 16/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4632 - accuracy: 0.4626 - val_loss: 1.6149 - val_accuracy: 0.4266
Epoch 17/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4580 - accuracy: 0.4642 - val_loss: 1.6145 - val_accuracy: 0.4267
Epoch 18/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4532 - accuracy: 0.4656 - val_loss: 1.6169 - val_accuracy: 0.4330
Epoch 19/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4479 - accuracy: 0.4683 - val_loss: 1.6198 - val_accuracy: 0.4236
Epoch 20/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4436 - accuracy: 0.4693 - val_loss: 1.6246 - val_accuracy: 0.4264
Epoch 21/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4389 - accuracy: 0.4713 - val_loss: 1.6300 - val_accuracy: 0.4254
Epoch 22/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4350 - accuracy: 0.4730 - val_loss: 1.6296 - val_accuracy: 0.4258
Epoch 23/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4328 - accuracy: 0.4727 - val_loss: 1.6279 - val_accuracy: 0.4257
Epoch 24/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4282 - accuracy: 0.4742 - val_loss: 1.6327 - val_accuracy: 0.4209
Epoch 25/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4242 - accuracy: 0.4745 - val_loss: 1.6387 - val_accuracy: 0.4256
Epoch 26/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4210 - accuracy: 0.4765 - val_loss: 1.6418 - val_accuracy: 0.4240
Epoch 27/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4189 - accuracy: 0.4773 - val_loss: 1.6438 - val_accuracy: 0.4237
Epoch 28/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4151 - accuracy: 0.4781 - val_loss: 1.6526 - val_accuracy: 0.4184
Epoch 29/35
60000/60000 [==============================] - 1s 25us/sample - loss: 1.4129 - accuracy: 0.4788 - val_loss: 1.6572 - val_accuracy: 0.4190
Epoch 30/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4097 - accuracy: 0.4801 - val_loss: 1.6535 - val_accuracy: 0.4225
Epoch 31/35
60000/60000 [==============================] - 1s 24us/sample - loss: 1.4070 - accuracy: 0.4795 - val_loss: 1.6689 - val_accuracy: 0.4188
Epoch 32/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4053 - accuracy: 0.4809 - val_loss: 1.6663 - val_accuracy: 0.4194
Epoch 33/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4029 - accuracy: 0.4831 - val_loss: 1.6618 - val_accuracy: 0.4220
Epoch 34/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.4000 - accuracy: 0.4832 - val_loss: 1.6603 - val_accuracy: 0.4270
Epoch 35/35
60000/60000 [==============================] - 1s 23us/sample - loss: 1.3979 - accuracy: 0.4845 - val_loss: 1.6741 - val_accuracy: 0.4195

Is this just down to the optimizer? I tried SGD as well, but it didn't help.

TL;DR: change the loss to categorical cross-entropy.


The optimizer is not the problem here.

The immediate problem I can see is that you are using mse as the loss for a multi-class classification problem. Change it to categorical_crossentropy; that will get you much better numbers. Also, don't forget to remove mse from the metrics:

fc_model.compile(optimizer="Adam", loss="categorical_crossentropy", metrics=["accuracy"])
For future reference, you can use the table below as a guide to best practices. Better still, take the time to study why each of these activations and loss functions suits a particular problem mathematically.
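(The original table image was not preserved; a commonly cited version of such a table, e.g. the one in Chollet's Deep Learning with Python, pairs each problem type with a last-layer activation and a loss:)

Problem type                        Last-layer activation   Loss function
Binary classification               sigmoid                 binary_crossentropy
Multi-class, single-label           softmax                 categorical_crossentropy
Multi-class, multi-label            sigmoid                 binary_crossentropy
Regression to arbitrary values      none                    mse
Regression to values in [0, 1]      sigmoid                 mse or binary_crossentropy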


Note: as a separate side point, even though it doesn't affect performance at all, you don't need to convert the labels to one-hot vectors:

# YOU CAN SKIP THIS COMPLETELY
for i in label_t:
    if i==0:
        labels_array_t.append([1,0,0,0,0,0,0,0,0,0])
    if i==1:
        labels_array_t.append([0,1,0,0,0,0,0,0,0,0])
    if i==2:
        labels_array_t.append([0,0,1,0,0,0,0,0,0,0])
    if i==3:
        labels_array_t.append([0,0,0,1,0,0,0,0,0,0])
    if i==4:
        labels_array_t.append([0,0,0,0,1,0,0,0,0,0])
    .....
Instead, you can use the raw integer labels directly as your y_train, and rather than the loss categorical_crossentropy you can change it to sparse_categorical_crossentropy.
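(A minimal sketch of that variant, assuming raw integer label arrays, called labels and label_t here to match the loop above:)

# Integer labels go in directly; the sparse loss handles the class lookup.
fc_model.compile(optimizer="Adam",
                 loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
history = fc_model.fit(sep, labels, batch_size=128,
                       validation_data=(sep_t, label_t), epochs=35)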


Edit:

Based on your comments, and on a test I did with another MNIST dataset, try the following:

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10)   # raw logits; no softmax on the last layer
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),  # expects logits
    metrics=['accuracy'],
)

model.fit(
    ds_train,
    epochs=6,
    validation_data=ds_test,
)
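(Note that ds_train and ds_test are not defined in the snippet; a minimal sketch of how they might be built with tf.data from the arrays prepared earlier, using one-hot labels since the loss above is the non-sparse CategoricalCrossentropy:)

import tensorflow as tf

# Hypothetical tf.data pipelines feeding the model above.
ds_train = (tf.data.Dataset.from_tensor_slices((sep, labels_array))
            .shuffle(60000)
            .batch(128))
ds_test = tf.data.Dataset.from_tensor_slices((sep_t, labels_array_t)).batch(128)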


Comments on this answer:

Could you also update the question with the latest epoch results after changing the loss and re-running the code? Also, try changing tf.nn.swish to relu.

Thanks for the thorough answer. I changed the loss function to categorical cross-entropy (as shown in the question), but I still get the same results; the accuracy stays around 40%. Because of how the CSV stores the data, the images come out upside down if you plot them, and I think that may be causing the problem. I also changed the new model's loss to sparse cross-entropy without changing the labels, as you suggested.

That would be fine if the model is also trained on the reversed images. But if you only pass them in at prediction time, no, that will cause problems. You can solve it with image augmentation: during training, make two versions of each image (normal and reversed) and duplicate the same label for both. You will have to train on more images, but the model will learn to predict the correct label from both the plain and the augmented images, even rotated/flipped ones.
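(A minimal sketch of the augmentation idea from that exchange, duplicating each image with a flipped copy under the same label; the flip axis is an assumption:)

import numpy as np

# Train on both orientations: append a vertically flipped copy of every
# image and repeat the labels accordingly.
sep_aug = np.concatenate([sep, sep[:, ::-1, :]], axis=0)
labels_aug = np.concatenate([labels, labels], axis=0)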