
MemoryError with Python np.concatenate


When I run the following code in an IPython notebook:

_x = np.concatenate([_batches.next() for i in range(_batches.samples)])
I get this error message:

---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-14-313ecf2ea184> in <module>()
----> 1 _x = np.concatenate([_batches.next() for i in 
range(_batches.samples)])

MemoryError:
With verbose=1 I can see the progress indicator run to completion, but then the following error appears:

2300/2300 [==============================] - 177s 77ms/step
---------------------------------------------------------------
MemoryError                   Traceback (most recent call last)
<ipython-input-19-d0e463f64f5a> in <module>()
----> 1 bottleneck_features_train = 
bottleneck_model.predict_generator(batches, len(batches), verbose=1)

~/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in 
wrapper(*args, **kwargs)
     85                 warnings.warn('Update your `' + object_name +
     86                               '` call to the Keras 2 API: ' + 
signature, stacklevel=2)
---> 87             return func(*args, **kwargs)
     88         wrapper._original_function = func
     89         return wrapper

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in 
predict_generator(self, generator, steps, max_queue_size, workers, 
use_multiprocessing, verbose)
   2345                 return all_outs[0][0]
   2346             else:
-> 2347                 return np.concatenate(all_outs[0])
   2348         if steps_done == 1:
   2349             return [out for out in all_outs]

MemoryError: 

Could you suggest a solution for this memory problem? Thank you!

For the first error, the data is simply too large. Assuming a dtype of int64 or float64 (8 bytes per element), the total is 9200 * 400 * 400 * 3 * 8 bytes, about 35 GB. All of that data is first collected in chunks and then copied into one big array by the concatenation, so the peak memory use is even higher than the final array.
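A quick sanity check of that figure (the shape values come from the question; the variable names here are just for illustration):

```python
import numpy as np

# Rough size of the concatenated array: 9200 samples of
# 400x400 RGB images stored as float64 (8 bytes per element).
n_samples, height, width, channels = 9200, 400, 400, 3
bytes_per_element = np.dtype(np.float64).itemsize  # 8

total_bytes = n_samples * height * width * channels * bytes_per_element
print(f"{total_bytes / 1e9:.1f} GB")  # → 35.3 GB
```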

You could preallocate the array instead; that avoids the extra copy made by np.concatenate, so it may work:

x_ = np.empty((9200, 400, 400, 3))  # still ~35 GB as float64
for i in range(9200):
    x_[i] = batches.next()  # assumes each next() yields one sample

Yes, you're right, and the second error has the same cause. The sizes differ, but the data is still too large. Thanks for your answer! The Keras object is called a "generator", which suggests it is designed for better memory utilization, but beyond that I know nothing about it. I'll post this as a real answer so you can close the question.
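If even a preallocated array of this size does not fit in RAM, one option not discussed in the original answer is a disk-backed `np.memmap`, which has the same indexing interface as an in-memory array. A minimal sketch, where `fetch_batch` is a hypothetical stand-in for `batches.next()`:

```python
import numpy as np

def fetch_batch(i):
    # Hypothetical stand-in for batches.next(): one 400x400 RGB sample.
    return np.zeros((400, 400, 3), dtype=np.float32)

n_samples = 10  # would be 9200 in the question's setting

# Disk-backed array: data lives in the file, not in RAM.
x_ = np.memmap("features.dat", dtype=np.float32, mode="w+",
               shape=(n_samples, 400, 400, 3))
for i in range(n_samples):
    x_[i] = fetch_batch(i)
x_.flush()  # make sure the data is written out to disk
```

Using float32 instead of float64 also halves the footprint, which is usually safe for image features.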