
Python: In a Siamese network with a contrastive loss function, the loss is decreasing but accuracy is not improving


My accuracy is decreasing even though the loss value is decreasing. Here is my code:

import tensorflow_addons as tfa
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Dropout,
                                     Flatten, Input, Lambda, MaxPooling2D)
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam

# img_size and lr are defined elsewhere in the original code

# We have 2 inputs, 1 for each picture
left_input = Input(img_size)
right_input = Input(img_size)

# We will use 2 instances of 1 network for this task
convnet = Sequential([
 Conv2D(4,3, input_shape=img_size, padding='same'),
 Activation('relu'),
 MaxPooling2D(),
 Conv2D(4,3, padding='same'),
 Activation('relu'),
 MaxPooling2D(),
 Conv2D(4,3, padding='same'),
 Activation('relu'),
 MaxPooling2D(),
 Conv2D(4,3, padding='same'),
 Activation('relu'),
 Flatten(),
 Dropout(0.3),
 Dense(18),
 Activation('sigmoid')
])
# Connect each 'leg' of the network to each input
# Remember, they have the same weights
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)

# Getting the L1 Distance between the 2 encodings
L1_layer = Lambda(lambda tensor: K.abs(tensor[0] - tensor[1]))

# Add the distance function to the network
L1_distance = L1_layer([encoded_l, encoded_r])

prediction = Dense(1, activation='sigmoid')(L1_distance)
siamese_net = Model(inputs=[left_input, right_input], outputs=prediction)

optimizer = Adam(lr, decay=2.5e-4)

#//TODO: get layerwise learning rates and momentum annealing scheme described in paper working
siamese_net.compile(loss=tfa.losses.contrastive.contrastive_loss,
                    optimizer=optimizer, metrics=['acc'])

siamese_net.summary()


Can anyone tell me why my accuracy is decreasing even though the loss function is decreasing?
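
For reference, TensorFlow Addons' contrastive loss interprets y_pred as a distance between the two embeddings, not as a probability. A minimal sketch of what it computes, following the formula in the TFA documentation (margin defaults to 1.0):

import tensorflow as tf

def contrastive_loss_sketch(y_true, y_pred, margin=1.0):
    # y_pred is a distance: similar pairs (y_true = 1) are pushed toward 0,
    # dissimilar pairs (y_true = 0) toward at least `margin`
    y_true = tf.cast(y_true, y_pred.dtype)
    return (y_true * tf.square(y_pred) +
            (1.0 - y_true) * tf.square(tf.maximum(margin - y_pred, 0.0)))

Under this convention the network learns to output values near 0 for similar pairs, which is the opposite of what Keras's built-in binary accuracy assumes, so 'acc' can fall even while the loss improves.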

I used this accuracy function instead of the Keras built-in accuracy function:

def acc(y_true, y_pred):
    ones = K.ones_like(y_pred)
    return K.mean(K.equal(y_true, ones - K.clip(K.round(y_pred), 0, 1)), axis=-1)

Your second epoch hasn't even finished yet. Can you wait for it to finish, then come back if you still observe the same thing?

Actually, I tried another model with fewer parameters and I got the same error.

I'll ask the same thing again: wait for 3-4 epochs to complete and post the full training log; only then does it make sense to comment on whether the loss is really going up or down. It is hard to give advice based on the middle of a training epoch.

Training on 6368 samples, validating on 1592 samples. These are the results after 3 epochs:
Epoch 1/10 6368/6368 [==============================] - 647s 102ms/step - loss: 0.1762 - acc: 0.2134 - val_loss: 0.1590 - val_acc: 0.1784 
Epoch 2/10 6368/6368 [==============================] - 1094s 172ms/step - loss: 0.1649 - acc: 0.1881 - val_loss: 0.1548 - val_acc: 0.1784 
Epoch 3/10 6368/6368 [==============================] - 1440s 226ms/step - loss: 0.1573 - acc: 0.1779 - val_loss: 0.1515 - val_acc: 0.1790
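
Given that convention, a metric that thresholds the distance directly is more consistent with the contrastive loss than flipping the rounded output. A minimal sketch; the name contrastive_acc and the 0.5 cutoff (half the default margin) are assumptions, not part of the original code:

from tensorflow.keras import backend as K

def contrastive_acc(y_true, y_pred):
    # y_pred is a distance under contrastive loss: predict "similar" (1)
    # when the distance falls below 0.5, half the default margin of 1.0
    pred_similar = K.cast(K.less(y_pred, 0.5), K.floatx())
    matches = K.equal(K.cast(y_true, K.floatx()), pred_similar)
    return K.mean(K.cast(matches, K.floatx()), axis=-1)

It would be passed to compile() as metrics=[contrastive_acc] in place of 'acc'.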