Python: Can't train a model in Keras

Tags: python, python-2.7, keras, conv-neural-network

I am trying to train a multi-task convolutional model for face analysis in Keras, which is about 19.2 GB in size. It prints the model summary successfully, but it fails to train the model.

I have a computer with about 4 GB of RAM. Here is the output:

Loading pickle files
Loaded train, test and validation dataset
Loading test images
Loading validation images
dataset/adience.py:100: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
  self.test_detection = self.test_dataset["is_face"].as_matrix()
Loaded all dataset and images
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 227, 227, 1)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 55, 55, 96)   11712       input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 55, 55, 96)   384         conv2d_1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 27, 27, 96)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 27, 27, 256)  614656      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 27, 27, 256)  1024        conv2d_2[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 13, 13, 256)  0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 13, 13, 384)  885120      max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 13, 13, 384)  1327488     conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 13, 13, 512)  1769984     conv2d_4[0][0]                   
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 6, 6, 256)    393472      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 6, 6, 256)    393472      conv2d_3[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 6, 6, 512)    0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 6, 6, 1024)   0           conv2d_8[0][0]                   
                                                                 conv2d_9[0][0]                   
                                                                 max_pooling2d_4[0][0]            
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 6, 6, 256)    262400      concatenate_1[0][0]              
__________________________________________________________________________________________________
flatten_2 (Flatten)             (None, 9216)         0           conv2d_10[0][0]                  
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 2048)         18876416    flatten_2[0][0]                  
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 2048)         0           dense_3[0][0]                    
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 512)          1049088     dropout_3[0][0]                  
__________________________________________________________________________________________________
dropout_10 (Dropout)            (None, 512)          0           dense_11[0][0]                   
__________________________________________________________________________________________________
detection_probablity (Dense)    (None, 2)            1026        dropout_10[0][0]                 
==================================================================================================
Total params: 25,586,242
Trainable params: 25,585,538
Non-trainable params: 704
__________________________________________________________________________________________________
Epoch 1/10

It prints Epoch 1/10, but then it just stops. Is this a computational limitation of my computer?

If it gets as far as starting to run, it probably has enough RAM to run properly. You can check the resource monitor to see how much memory is available, and also whether there is any CPU usage. If the CPU is busy, it may just be training very slowly.

This is a fairly large model, so training it on a small CPU could take a very long time.

Make sure the Keras verbosity is set to 1 so that it prints information for every batch. That is the default, though, so it should already be set that way unless you changed it:

model.fit(verbose=1)
Also try turning the batch size down to 1 and see whether you get any output (smaller batches finish faster); see the sketch below.
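A minimal sketch of such a call, where x_train and y_train are hypothetical placeholders for your actual training arrays:

# x_train/y_train are hypothetical stand-ins for your own data arrays.
model.fit(x_train, y_train,
          epochs=10,
          batch_size=1,  # tiny batches finish quickly, so you see output sooner
          verbose=1)     # prints per-batch progress (the default)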


If it runs correctly but slowly, the best option is to run it on a GPU. If you can't do that, you can try compiling TensorFlow from source to make sure you have all of your CPU's instruction sets and the MKL library (if you need it), which can speed it up.
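If you do get access to a GPU, one quick way to confirm that TensorFlow can actually see it (this uses an internal TF 1.x utility that was commonly used for this purpose, matching the TensorFlow version your logs suggest):

from tensorflow.python.client import device_lib

# Prints the CPU and any GPU devices TensorFlow detected;
# if no GPU entry appears, training will run on the CPU.
print(device_lib.list_local_devices())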

When I set the batch size to 1, it starts training the network, but I get the following messages:

Allocation of 75497472 exceeds 10% of system memory.
2018-07-18 09:47:05.318941: W tensorflow/core/framework/allocator.cc:101] Allocation of 75497472 exceeds 10% of system memory.
2018-07-18 09:47:05.679869: W tensorflow/core/framework/allocator.cc:101] Allocation of 75497472 exceeds 10% of system memory.
2018-07-18 09:47:05.793544: W tensorflow/core/framework/allocator.cc:101]
Does this mean I have to use a GPU server for training?

Now that you know it works, you probably want to bring the batch size back up to at least 32 or so, if possible. If that message is just a warning, you can ignore it; I believe you can disable TensorFlow's warnings if they really bother you. Ultimately, your computer simply has very little RAM for this kind of work, so you can either optimize your model very aggressively or get a better computer with a GPU.

Also try checking the dtype of your data; you may be able to reduce memory usage by lowering the precision, as in the sketch below.
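A minimal sketch of both ideas, where images is a hypothetical placeholder for your input array (TF_CPP_MIN_LOG_LEVEL is TensorFlow's standard log-level switch and must be set before TensorFlow is imported):

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # hide TensorFlow INFO and WARNING messages

import numpy as np

# 'images' is a hypothetical stand-in for your input data.
# NumPy defaults to float64; casting to float32 halves the memory footprint.
images = np.random.rand(16, 227, 227, 1)  # stand-in data, float64 by default
images = images.astype(np.float32)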