
Python: validation accuracy won't improve

Tags: python, tensorflow, keras, conv-neural-network

I am working on a neural network project and trying to write the Python code with the Keras and TensorFlow packages. Right now I have hit a problem: the validation accuracy will not improve at all. I have a training set of 9815 images and a test set of 200 images. I am really stuck here, please help.

Right now, the validation accuracy sits at exactly 0.5000 for almost all 100 epochs and never goes up.

# Imports needed by the snippet below (Keras / TensorFlow 2.x)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

# Image Processing Stage
train_data = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)

test_data = ImageDataGenerator(rescale = 1./255)

training_set = train_data.flow_from_directory('dataset/train_data', target_size = (128, 128),  batch_size = 42,  class_mode = 'binary')

test_set = test_data.flow_from_directory('dataset/test_data', target_size = (128, 128), batch_size = 42, class_mode = 'binary')




# Starting Convolutional Neural Network
# Note: the model loaded here is discarded immediately, because start_cnn
# is rebound to a fresh Sequential model on the next line.
start_cnn = load_model('CNN.h5')
start_cnn.get_weights()
start_cnn = Sequential()


start_cnn.add(Conv2D(32, (3, 3), input_shape = (128, 128, 3), activation = 'relu', padding='same'))                 #3*3*3*32+32
start_cnn.add(Conv2D(32, (3, 3), activation = 'relu'))
start_cnn.add(MaxPooling2D(pool_size = (2, 2)))

for i in range(0,2):
    start_cnn.add(Conv2D(128, (3, 3), activation = 'relu', padding='same'))

start_cnn.add(MaxPooling2D(pool_size = (2, 2)))

for i in range(0,2):
    start_cnn.add(Conv2D(128, (3, 3), activation = 'relu', padding='same'))

start_cnn.add(MaxPooling2D(pool_size = (2, 2)))

# Flattening
start_cnn.add(Flatten())

# Step 4 - Full connection
start_cnn.add(Dense(activation="relu", units=128))
start_cnn.add(Dense(activation="relu", units=64))
start_cnn.add(Dense(activation="relu", units=32))
start_cnn.add(Dense(activation="softmax", units=1))

start_cnn.summary()


# Compiling the CNN

start_cnn.compile(Adam(learning_rate=0.001), loss = 'binary_crossentropy', metrics = ['accuracy'])

start_cnn.fit(training_set, steps_per_epoch=234, epochs = 100, validation_data = test_set)  


start_cnn.save('CNN.h5')
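A side note on the fit call above: with 9815 training images and a batch size of 42, steps_per_epoch=234 appears to be the ceiling of 9815/42, i.e. the number of batches needed to cover the whole training set once per epoch. A quick arithmetic check:

```python
import math

# Batches per epoch = ceil(training images / batch size)
train_images = 9815
batch_size = 42
steps = math.ceil(train_images / batch_size)
print(steps)  # 234
```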


You cannot use a softmax activation on a single neuron, as you do here:

start_cnn.add(Dense(activation="softmax", units=1))

For binary classification with one neuron, you must use a sigmoid activation:

start_cnn.add(Dense(activation="sigmoid", units=1))
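The point is easiest to see numerically: softmax exponentiates each logit and divides by the sum over all units, so with a single unit the output is e^z / e^z = 1.0 no matter what z is, while sigmoid actually varies with the logit. A minimal sketch in plain Python (no Keras required; the logit values are arbitrary examples):

```python
import math

def softmax(logits):
    # Standard softmax: exponentiate each logit, normalize by the sum.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# With a single unit, softmax always outputs exactly 1.0 ...
for z in (-5.0, 0.0, 3.2):
    print(softmax([z])[0])  # 1.0 every time

# ... while sigmoid gives a genuine probability that depends on z.
for z in (-5.0, 0.0, 3.2):
    print(round(sigmoid(z), 4))
```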

Comments:

- Does this answer your question? Is your training accuracy going up? – Cutter
- @Cutter It stays between 0.89 and 0.90. Thanks, I will try changing the activation.
- To clarify, my CNN has two possible outputs, so for my last Dense layer should I set units to 1 or 2?
- @WinRummaneethorn The answer is not to change the number of neurons, only the activation.
- Sorry, I am new to neural networks. Can you explain why softmax cannot be used with units=1?
- @WinRummaneethorn Sure: softmax normalizes by the sum of exponentials over all units, so with a single unit the only possible value is the constant 1.0. You may have noticed that neither the training nor the validation loss/accuracy changed at all; that is why.
- After switching to sigmoid as you suggested, my val_loss dropped sharply, from 8-10 down to about 1. My val_accuracy, however, is still exactly 0.5000.
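On the units=1 vs units=2 question raised in the comments: for two classes, the two heads are mathematically interchangeable. A Dense(1, sigmoid) head with binary_crossentropy encodes the same model as a Dense(2, softmax) head with sparse_categorical_crossentropy, because sigmoid(z) equals the second component of softmax([0, z]). A small check of that identity (the logit values are arbitrary examples):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# sigmoid(z) == softmax([0, z])[1]: a 1-unit sigmoid head carries the same
# information as a 2-unit softmax head whose first logit is pinned at 0.
for z in (-2.0, 0.0, 1.5):
    p_sigmoid = sigmoid(z)
    p_softmax = softmax([0.0, z])[1]
    assert abs(p_sigmoid - p_softmax) < 1e-12
    print(round(p_sigmoid, 4))
```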