
Python: TensorFlow .batch() does not split the tensor correctly


I have an array of shape (1, 6354944).

I convert it into tensor slices:

stream = tf.data.Dataset.from_tensor_slices(reshaped_data)
But when I batch them,

seqs = stream.batch(1000, drop_remainder=True)
it returns:

<BatchDataset shapes: (1000, 6354944), types: tf.float64>
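
(A minimal sketch of what is going on, using a small hypothetical stand-in array: from_tensor_slices slices along the first axis, so a (1, N) array yields a single element of shape (N,), and batch() stacks whole elements rather than splitting one element apart.)

import numpy as np
import tensorflow as tf

# Hypothetical small stand-in for the real (1, 6354944) array.
a = np.zeros((1, 12), dtype=np.float64)

# from_tensor_slices slices along the FIRST axis, so this dataset
# holds exactly one element of shape (12,).
stream = tf.data.Dataset.from_tensor_slices(a)
print(stream.cardinality().numpy())  # 1

# batch() stacks whole elements; with only 1 element available and
# drop_remainder=True, the lone incomplete batch is dropped entirely.
seqs = stream.batch(1000, drop_remainder=True)
print(list(seqs))  # [] -- the dataset yields no batches at all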
You should set

drop_remainder=False

which, as described in the documentation, produces a smaller final batch:

batch:

batch( batch_size, drop_remainder=False, num_parallel_calls=None, deterministic=None )

The components of the resulting element will have an additional outer dimension, which will be batch_size (or N % batch_size for the last element if batch_size does not divide the number of input elements N evenly and drop_remainder is False). If your program depends on the batches having the same outer dimension, you should set the drop_remainder argument to True to prevent the smaller batch from being produced.
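
A quick sketch of that behavior on toy data (tf.data.Dataset.range here is just for illustration, not the asker's data):

import tensorflow as tf

ds = tf.data.Dataset.range(10)  # 10 elements: 0..9

# Default drop_remainder=False: the final batch is smaller (10 % 4 = 2).
print([b.numpy().tolist() for b in ds.batch(4)])
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]

# drop_remainder=True: the incomplete final batch is discarded.
print([b.numpy().tolist() for b in ds.batch(4, drop_remainder=True)])
# [[0, 1, 2, 3], [4, 5, 6, 7]]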

You can reshape your data before creating the dataset:

r = tf.reshape(a[:, :6354000], (1000, 6354))    # trim to 1000 * 6354 values, then reshape
stream = tf.data.Dataset.from_tensor_slices(r)  # 1000 elements, each of shape (6354,)
seqs = stream.batch(1000)                       # one batch of shape (1000, 6354)
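
As a sanity check, the same steps can be run with tf.zeros as a stand-in for the real data (an assumption; the asker's actual array is not shown):

import tensorflow as tf

a = tf.zeros((1, 6354944), dtype=tf.float64)  # stand-in for the real data

r = tf.reshape(a[:, :6354000], (1000, 6354))
stream = tf.data.Dataset.from_tensor_slices(r)
seqs = stream.batch(1000)

print(seqs.element_spec)           # TensorSpec(shape=(None, 6354), dtype=tf.float64, ...)
print(seqs.cardinality().numpy())  # 1 -- a single (1000, 6354) batch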

A batch of 1000 is way too big! Do you want your whole dataset to be one big batch of 1000 elements? Wow, thanks! This works for me.