
Python: Using a custom dataset instead of MNIST for face recognition

I want to use a custom dataset containing face images of different people. I plan to classify my images with a CNN and stacked autoencoders.

Should I change (x_train, _), (x_test, _) = mnist.load_data(),

or change the input? I think the problem is with the input data, but I don't know where I should modify it.

I'm lost and I need help.

from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K

input_img = Input(shape=(28, 28, 1))  # adapt this if using `channels_first` image data format

x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# at this point the representation is (4, 4, 8) i.e. 128-dimensional

x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)  # no padding here, so the decoder output recovers 28 x 28
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

from keras.datasets import mnist
import numpy as np

(x_train, _), (x_test, _) = mnist.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format


from keras.callbacks import TensorBoard

autoencoder.fit(x_train, x_train,
               epochs=50,
               batch_size=128,
               shuffle=True,
               validation_data=(x_test, x_test),
               callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])

decoded_imgs = autoencoder.predict(x_test)

n = 10
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

You need to load your dataset and split it into two subsets: x_train and x_test.
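For example, a minimal sketch of that split, assuming your face images have already been loaded into a NumPy array (the file name faces.npy is hypothetical):

import numpy as np
from sklearn.model_selection import train_test_split

faces = np.load('faces.npy')                 # hypothetical array of shape (num_images, 28, 28)
faces = faces.astype('float32') / 255.       # scale pixels to [0, 1]

x_train, x_test = train_test_split(faces, test_size=0.2, random_state=42)

# add the channel axis that Conv2D expects (channels_last)
x_train = x_train[..., np.newaxis]
x_test = x_test[..., np.newaxis]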


What format is your data stored in?

You need to replace (x_train, _), (x_test, _) = mnist.load_data() with your own data loader. You can use the Keras ImageDataGenerator class for this, or build your own. If your images are much larger than 28 x 28, you may also need to change the model architecture, because simply reshaping them down to 28 x 28 will not give good results.
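A minimal sketch of the ImageDataGenerator route, assuming a hypothetical directory layout faces/train and faces/test where each directory contains at least one sub-folder of jpg images (class_mode='input' yields (image, image) pairs, which is what an autoencoder trains on):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = datagen.flow_from_directory(
    'faces/train',
    target_size=(28, 28),        # enlarge this (and the model) for bigger face images
    color_mode='grayscale',
    class_mode='input',
    batch_size=128)

test_generator = datagen.flow_from_directory(
    'faces/test',
    target_size=(28, 28),
    color_mode='grayscale',
    class_mode='input',
    batch_size=128)

autoencoder.fit_generator(
    train_generator,
    epochs=50,
    validation_data=test_generator)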

Thanks for your answer. I want to work with my own jpg images, converting them into matrices and then loading them as the dataset.
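One way to do that conversion, sketched with PIL and NumPy (the faces/ folder and the 28 x 28 target size are assumptions; adjust them to your data):

import glob
import numpy as np
from PIL import Image

images = []
for path in sorted(glob.glob('faces/*.jpg')):
    img = Image.open(path).convert('L')      # convert to grayscale
    img = img.resize((28, 28))               # match the model's input size
    images.append(np.asarray(img, dtype='float32') / 255.)

data = np.stack(images)                      # shape: (num_images, 28, 28)
data = data[..., np.newaxis]                 # add channel axis -> (num_images, 28, 28, 1)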