TensorFlow "NaN" results when running multi-class classification

When I run these lines of code for binary classification, it works fine without any problem and gets a good result, but when I write the code for more classes (for example, 3 classes), it gives "NaN" in the prediction result.
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 3, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('data/train',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('data/test',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'categorical')
classifier.fit_generator(training_set,
                         steps_per_epoch = 240,
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 30)
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('2.jpeg', target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
I tried these lines of code with the binary cross-entropy loss and two classes. It ran without any problem and got a good result that helped my work, with an accuracy of about 93%. But my project is based on multi-class classification, so I tried changing the loss function to `categorical_crossentropy` and the `class_mode` to `'categorical'` to make it multi-class. The accuracy started at 60%, increased to 99%, and then suddenly dropped to 33%.

Expected result: the label of the class.

Actual result: "NaN".
Thanks in advance.

For multi-class classification, you usually apply `softmax` on the last Dense layer, not `sigmoid`. Change it to `softmax` to see if the problem persists.

Thank you for helping me with this difficult problem.
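To illustrate why the answer above suggests `softmax`, here is a minimal NumPy sketch (the `softmax` and `sigmoid` helpers are illustrative, not part of the original model). For mutually exclusive classes, `softmax` turns the logits into a probability distribution that sums to 1, whereas three independent sigmoids do not, which is what `categorical_crossentropy` expects.

```python
import numpy as np

def softmax(z):
    # Subtract the max logit for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    # Element-wise sigmoid: each output is independent of the others
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([2.0, 1.0, -1.0])  # hypothetical outputs of the last Dense layer

p_soft = softmax(logits)
p_sig = sigmoid(logits)

print(p_soft.sum())            # ≈ 1.0: a valid probability distribution
print(p_sig.sum())             # > 1.0: not a distribution over the 3 classes
print(int(np.argmax(p_soft)))  # → 0: index of the predicted class
```

Applied to the model in the question, the suggested change would be `classifier.add(Dense(units = 3, activation = 'softmax'))`, keeping the `categorical_crossentropy` loss already used; `np.argmax` over `classifier.predict(...)` then maps the output back to a class index via `training_set.class_indices`.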