
Tensorflow model accuracy does not increase on MNIST data


I am currently working through a machine-learning book. I want to build a simple neural network on the MNIST handwritten-digit data, as described in chapter 10 of the book. But my model is stuck and the accuracy does not improve at all. Here is my code:

import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np

# The first column of each CSV is the digit label; the remaining
# 784 columns are raw pixel intensities in [0, 255].
data = pd.read_csv('sample_data/mnist_train_small.csv', header=None)
test = pd.read_csv('sample_data/mnist_test.csv', header=None)
labels = data[0]
data = data.drop(0, axis=1)
test_labels = test[0]
test = test.drop(0, axis=1)

model = keras.models.Sequential([
            keras.layers.Dense(300, activation='relu', input_shape=(784,)),
            keras.layers.Dense(100, activation='relu'),
            keras.layers.Dense(10, activation='softmax'),
])

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

keras.utils.plot_model(model, show_shapes=True)

hist = model.fit(data.to_numpy(), labels.to_numpy(), epochs=20, validation_data=(test.to_numpy(), test_labels.to_numpy()))
The first few epochs of output are:

Epoch 1/20
625/625 [==============================] - 2s 3ms/step - loss: 2055059923226079526912.0000 - accuracy: 0.1115 - val_loss: 2.4539 - val_accuracy: 0.1134
Epoch 2/20
625/625 [==============================] - 2s 3ms/step - loss: 2.4160 - accuracy: 0.1085 - val_loss: 2.2979 - val_accuracy: 0.1008
Epoch 3/20
625/625 [==============================] - 2s 2ms/step - loss: 2.3006 - accuracy: 0.1110 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 4/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3009 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 5/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3009 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 6/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 7/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 8/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 9/20
625/625 [==============================] - 2s 2ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 10/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 11/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136
Epoch 12/20
625/625 [==============================] - 2s 3ms/step - loss: 2.3008 - accuracy: 0.1121 - val_loss: 2.3014 - val_accuracy: 0.1136

One note on the loss: `sparse_categorical_crossentropy` is in fact the right choice for integer labels like yours; `categorical_crossentropy` would require one-hot encoded labels ("sparse" here refers to the label encoding, not to sparse matrices). You could also use `data.iloc[]` instead of `data[]` for positional indexing. For this problem, the `adam` optimizer will work better.
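A detail not raised in the answer above, offered here as an assumption: the enormous first-epoch loss (~2e21) is characteristic of feeding raw [0, 255] pixel values into a fixed-learning-rate optimizer, and a commonly suggested remedy is to scale the inputs to [0, 1] before training. A minimal sketch of that preprocessing step, using random stand-in data rather than the actual CSV:

```python
import numpy as np

# Hypothetical stand-in for the CSV pixel columns: integers in [0, 255],
# shaped like a small batch of flattened 28x28 MNIST images.
rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(4, 784)).astype(np.float64)

# Scale to [0, 1]; this shrinks the first layer's pre-activations (and
# hence the backpropagated gradients) by a factor of 255, keeping
# fixed-step SGD updates at a sane size.
scaled = raw / 255.0

print(scaled.min() >= 0.0 and scaled.max() <= 1.0)  # True
```

In the original script this would correspond to dividing `data` and `test` by 255.0 before calling `model.fit`, though the thread itself resolves the issue by switching optimizers instead.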

I used the 'adam' optimizer and it worked! Can you tell me why 'sgd' fails on the handwritten-digit dataset while 'adam' works, even though 'sgd' does work on the Fashion dataset?

Here is an article on deep-learning optimizers where you can read about their characteristics. In general, Adam tends to do better than the other algorithms, which is why I recommended it. Here is the link:
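The SGD-versus-Adam behavior discussed above can be illustrated with a toy calculation (a sketch with made-up numbers, not taken from the thread): on a one-dimensional quadratic loss whose input is badly scaled, like raw 0-255 pixels, fixed-step SGD overshoots and diverges, while Adam's per-parameter rescaling by the running RMS of the gradient keeps its steps bounded.

```python
import numpy as np

# Toy loss f(w) = 0.5 * (x*w - 1)^2 with a badly scaled input x = 255,
# mimicking unscaled pixel values. Its gradient is x * (x*w - 1), so a
# fixed SGD step of lr * x**2 = 650 times the error wildly overshoots.
x = 255.0
def grad(w):
    return x * (x * w - 1.0)

# Plain SGD with lr = 0.01: each step multiplies the error by ~649,
# so w explodes, just like the ~2e21 loss in the question's first epoch.
w_sgd = 0.0
for _ in range(10):
    w_sgd -= 0.01 * grad(w_sgd)

# Adam divides each step by a running RMS of recent gradients, so the
# effective step size stays near lr no matter how large x makes grad().
w_adam, m, v = 0.0, 0.0, 0.0
beta1, beta2, lr, eps = 0.9, 0.999, 0.01, 1e-8
for t in range(1, 201):
    g = grad(w_adam)
    m = beta1 * m + (1 - beta1) * g          # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    w_adam -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(abs(w_sgd) > 1e6)   # True: SGD diverged
print(abs(w_adam) < 1.0)  # True: Adam stayed bounded near the minimum
```

This also suggests why SGD can still work on the Fashion-MNIST tutorial code: those examples typically scale pixels to [0, 1] first, which shrinks the gradients enough for a fixed learning rate to behave.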