TensorFlow does not train on all inputs

I want to train a TF model on 5774 samples, but it gets stuck at 96 examples, just skips to the next epoch, and ignores most of the data. Why does TF behave this way, and how can I fix it?

model.compile(
    optimizer='rmsprop',
    loss='categorical_crossentropy',
    metrics=['acc']
)
callback = tf.keras.callbacks.EarlyStopping(monitor='acc', patience=50)
history = model.fit(
    x=[train_id, train_mask, train_seg],
    y=train_y,
    batch_size=32,
    epochs=10000,
    verbose=1,
    callbacks=[callback]
)
Output:

Train on 5774 samples
Epoch 1/10000
  96/5774 [..............................] - ETA: 15:33 - loss: 1.9542 - acc: 0.2917
Epoch 2/10000
  96/5774 [..............................] - ETA: 3:26 - loss: 1.6615 - acc: 0.5417
Epoch 3/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 4.9110 - acc: 0.2917
Epoch 4/10000
  96/5774 [..............................] - ETA: 3:26 - loss: 1.8811 - acc: 0.2500
Epoch 5/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 2.0512 - acc: 0.3229
Epoch 6/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.3690 - acc: 0.4167
Epoch 7/10000
  96/5774 [..............................] - ETA: 3:28 - loss: 1.4500 - acc: 0.3854
Epoch 8/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.2867 - acc: 0.3958
Epoch 9/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.3947 - acc: 0.3333
Epoch 10/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.6012 - acc: 0.1979
Epoch 11/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.4505 - acc: 0.4271
Epoch 12/10000
  96/5774 [..............................] - ETA: 3:26 - loss: 1.5062 - acc: 0.2500
Epoch 13/10000
  96/5774 [..............................] - ETA: 3:27 - loss: 1.4980 - acc: 0.3333
Epoch 14/10000

In my case, train_id, train_mask, and train_seg were Python lists of n np.arrays of shape (96,). After forcing the fit with steps_per_epoch=5774//32, it finally showed a proper error message: the input only has 96 samples, even though the log claims 5774.
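The diagnostic step described above can be sketched as follows, reusing the same variable names and the original fit arguments (depending on the TF version, batch_size may have to be dropped when steps_per_epoch is given):

# Forcing steps_per_epoch makes Keras check the number of samples it actually
# sees, which surfaces the "only 96 samples" error instead of silently
# training on 96 samples per epoch.
history = model.fit(
    x=[train_id, train_mask, train_seg],
    y=train_y,
    batch_size=32,
    epochs=10000,
    steps_per_epoch=5774 // 32,
    verbose=1,
    callbacks=[callback]
)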

Casting the lists to np.array did the trick, although I think there is a bug in the TensorFlow logging.
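A minimal sketch of that fix, assuming each list holds one 1-D array per sample so that stacking yields arrays of shape (num_samples, 96) (the exact shapes are an assumption here):

import numpy as np

# Stack the Python lists of per-sample arrays into single np.arrays so that
# Keras can infer the sample count from the first dimension.
train_id = np.array(train_id)
train_mask = np.array(train_mask)
train_seg = np.array(train_seg)

history = model.fit(
    x=[train_id, train_mask, train_seg],
    y=train_y,
    batch_size=32,
    epochs=10000,
    verbose=1,
    callbacks=[callback]
)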