
Python: NumPy slicing by batch size


I have a numpy array A of shape (550, 10). My batch size is 100, i.e. the number of rows I want to pull from A on each iteration. But when I reach the last 50 rows, I want to take the last 50 rows of A together with the first 50 rows.

I have a function like this:

def train(index, batch_size):
    if (index + batch_size < A.shape[0]):
        data_end_index = index + batch_size
        batch_data = A[index:data_end_index, :]
    else:
        data_end_index = index + batch_size - A.shape[0]  # e.g. 500 + 100 - 550 = 50
        batch_data = A[500 to 549 and 0 to 49]  # How to slice here?
How do I do that last slicing step?
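For reference, one way to express that wrap-around batch (a sketch that is not part of the original question; A, index and batch_size follow the setup above) is to concatenate the tail of A with its head:

import numpy as np

A = np.random.rand(550, 10)                   # stand-in for the array in the question
index, batch_size = 500, 100

overflow = index + batch_size - A.shape[0]    # 500 + 100 - 550 = 50 rows past the end
batch_data = np.concatenate((A[index:], A[:overflow]), axis=0)
print(batch_data.shape)                       # (100, 10): rows 500..549 followed by 0..49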

You can try:

import numpy as np

data = np.random.rand(550, 10)
batch_size = 100

for index in range(0, data.shape[0], batch_size):
    batch = data[index:min(index + batch_size, data.shape[0]), :]
    print(batch.shape)
Output:

(100, 10)
(100, 10)
(100, 10)
(100, 10)
(100, 10)
(50, 10)
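Note that the last batch above has only 50 rows. If, as the question asks, it should instead wrap around and reuse the first 50 rows, one possible variant (a sketch, not part of the original answer) builds the row indices explicitly and lets numpy.take wrap them:

import numpy as np

data = np.random.rand(550, 10)
batch_size = 100

for index in range(0, data.shape[0], batch_size):
    idx = np.arange(index, index + batch_size)
    # mode='wrap' maps indices past the end back to the start of the array
    batch = np.take(data, idx, axis=0, mode='wrap')
    print(batch.shape)   # always (100, 10); the final batch is rows 500..549 plus 0..49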

Using numpy.split, stealing riccardo's example data:

import numpy as np

data = np.random.rand(550, 10)
batch_size = 100

q = data.shape[0] // batch_size   # number of full batches
block_end = q * batch_size        # index where the full batches end

batch = np.split(data[:block_end], q) + [data[block_end:]]

[*map(np.shape, batch)]
Out[89]: [(100, 10), (100, 10), (100, 10), (100, 10), (100, 10), (50, 10)]
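A closely related option (not shown in the original answer) is to pass explicit cut points instead of computing q by hand; numpy.array_split accepts the same index list if you prefer it:

import numpy as np

data = np.random.rand(550, 10)
batch_size = 100

# split before every multiple of batch_size; the final chunk keeps the leftover 50 rows
cuts = list(range(batch_size, data.shape[0], batch_size))
batches = np.split(data, cuts)

print([b.shape for b in batches])
# [(100, 10), (100, 10), (100, 10), (100, 10), (100, 10), (50, 10)]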


Possible duplicate.

To avoid overlap between batches, I prefer index in range(0, data.shape[0], batch_size + 1).