Python Keras + Elephas - model trains for more than nb_epoch


I am running deep-learning Elephas code on a cluster with 3 workers. For example, if I set nb_epoch to 30, training does not stop there; it runs through the 30 epochs another 3 or 4 times. Can anyone help with this?

How is this possible? Execution should stop at 30/30:

2101/2101 [==============================] - 10s 5ms/step - loss: 0.6103 - acc: 0.7444 - val_loss: 1.1255 - val_acc: 0.5427
Epoch 30/30

 128/2101 [>.............................] - ETA: 8s - loss: 0.4757 - acc: 0.8281
 256/2101 [==>...........................] - ETA: 8s - loss: 0.5443 - acc: 0.7891
 384/2101 [====>.........................] - ETA: 7s - loss: 0.5503 - acc: 0.7812
 512/2101 [======>.......................] - ETA: 7s - loss: 0.5372 - acc: 0.7793
 640/2101 [========>.....................] - ETA: 6s - loss: 0.5590 - acc: 0.7609
 768/2101 [=========>....................] - ETA: 5s - loss: 0.5685 - acc: 0.7630
 896/2101 [===========>..................] - ETA: 5s - loss: 0.5730 - acc: 0.7634
1024/2101 [=============>................] - ETA: 4s - loss: 0.5728 - acc: 0.7705
1152/2101 [===============>..............] - ETA: 4s - loss: 0.5794 - acc: 0.7622
1280/2101 [=================>............] - ETA: 3s - loss: 0.5891 - acc: 0.7578
1408/2101 [===================>..........] - ETA: 3s - loss: 0.5923 - acc: 0.7550
1536/2101 [====================>.........] - ETA: 2s - loss: 0.5942 - acc: 0.7513
1664/2101 [======================>.......] - ETA: 1s - loss: 0.5953 - acc: 0.7524
1792/2101 [========================>.....] - ETA: 1s - loss: 0.5938 - acc: 0.7500
1920/2101 [==========================>...] - ETA: 0s - loss: 0.5868 - acc: 0.7552
2048/2101 [============================>.] - ETA: 0s - loss: 0.5930 - acc: 0.7524
2101/2101 [==============================] - 10s 5ms/step - loss: 0.5914 - acc: 0.7544 - val_loss: 1.2075 - val_acc: 0.5128
Train on 2101 samples, validate on 234 samples
Epoch 1/30

It looks like you are training multiple models. Once the first one finishes, the next one starts training. You can combine the trained models into an ensemble, which usually gives better results.
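As a minimal sketch of that ensembling idea (the prediction arrays below are made-up placeholders, not output from the original models): average the class probabilities produced by each worker's model and take the argmax.

```python
import numpy as np

# Hypothetical per-model class probabilities from three independently
# trained models (shape: samples x classes). Real values would come
# from each model's predict() call.
preds = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),
    np.array([[0.7, 0.3], [0.4, 0.6]]),
    np.array([[0.8, 0.2], [0.3, 0.7]]),
]

# A simple averaging ensemble: mean of the per-model probabilities,
# then the most likely class per sample.
ensemble = np.mean(preds, axis=0)
labels = ensemble.argmax(axis=1)
print(labels)  # one predicted class index per sample
```

Averaging is the simplest combination rule; weighted averaging or majority voting are common alternatives when the models differ in quality.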

The worker's train method is used as an RDD mapper function, which means every worker calls train with the supplied training configuration (epochs, batch size, etc.). In your case, 3 workers × 30 epochs = 90 epochs in total.
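A minimal simulation of why the driver log repeats the 1/30 … 30/30 progression several times: the training closure (here the hypothetical `train_partition`, standing in for the real per-worker Keras fit call) is mapped over the data partitions, so each worker runs the full epoch loop independently.

```python
nb_epoch = 30
num_workers = 3

def train_partition(worker_id):
    # Stand-in for the Keras fit loop each worker executes; it emits
    # one progress line per epoch, just like the log above.
    return [f"worker {worker_id}: epoch {e}/{nb_epoch}"
            for e in range(1, nb_epoch + 1)]

# Mapping the closure over all workers yields num_workers * nb_epoch
# epoch lines in total, which is why the log restarts at Epoch 1/30.
log_lines = [line for w in range(num_workers) for line in train_partition(w)]
print(len(log_lines))  # 3 workers x 30 epochs = 90 lines
```

So the log is not one run overshooting 30 epochs; it is the same 30-epoch schedule executed once per worker.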