How do you split a list into evenly sized chunks in Python?


I have a list of arbitrary length, and I need to split it up into chunks of equal size and operate on it. There are some obvious ways to do this, like keeping a counter and two lists, and when the second list fills up, adding it to the first list and emptying the second list for the next round of data, but this is potentially extremely expensive.

I was wondering if anyone had a good solution to this for lists of any length, e.g. using generators.

I was looking for something useful in itertools but couldn't find anything obviously useful. Might've missed it, though.


Related question:

Here's a generator that yields the chunks you want:

def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
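
For example (Python 3):

>>> import pprint
>>> pprint.pprint(list(chunks(list(range(10, 75)), 10)))
[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
 [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
 [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
 [70, 71, 72, 73, 74]]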


If you're using Python 2, you should use xrange() instead of range().


Also, you can simply use a list comprehension instead of writing a function, though it's a good idea to encapsulate operations like this in named functions so that your code is easier to understand. Python 3:

[lst[i:i + n] for i in range(0, len(lst), n)]
Python 2 version:

[lst[i:i + n] for i in xrange(0, len(lst), n)]
If you know the list size:

def SplitList(mylist, chunk_size):
    return [mylist[offs:offs+chunk_size] for offs in range(0, len(mylist), chunk_size)]
If you don't (an iterator):
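
A minimal sketch for that case, assuming we simply buffer items until each chunk fills up (the helper name iter_chunks is mine, not from the answer):

def iter_chunks(sequence, chunk_size):
    # hypothetical helper: works on any iterable, no len() needed
    chunk = []
    for item in sequence:
        chunk.append(item)
        if len(chunk) >= chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # the last, possibly incomplete, chunk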


In the latter case, it can be rephrased in a more elegant way if you can be sure that the sequence always contains a whole number of chunks of the given size (i.e. there is no incomplete last chunk).
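
One sketch of such a nicer form, assuming the length really is an exact multiple of the chunk size, is to zip several references to the same iterator (the same trick several later answers use; the name exact_chunks is illustrative only):

def exact_chunks(seq, chunk_size):
    # zip() stops at the shortest input, so any incomplete tail would be silently dropped
    return list(zip(*[iter(seq)] * chunk_size))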

Here's a generator that works on arbitrary iterables:

import itertools

def split_seq(iterable, size):
    it = iter(iterable)
    item = list(itertools.islice(it, size))
    while item:
        yield item
        item = list(itertools.islice(it, size))
For example:

>>> import pprint
>>> pprint.pprint(list(split_seq(xrange(75), 10)))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
 [30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
 [40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
 [50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
 [60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
 [70, 71, 72, 73, 74]]
Heh, a one-line version:

In [48]: chunk = lambda ulist, step:  map(lambda i: ulist[i:i+step],  xrange(0, len(ulist), step))

In [49]: chunk(range(1,100), 10)
Out[49]: 
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
 [11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
 [21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
 [31, 32, 33, 34, 35, 36, 37, 38, 39, 40],
 [41, 42, 43, 44, 45, 46, 47, 48, 49, 50],
 [51, 52, 53, 54, 55, 56, 57, 58, 59, 60],
 [61, 62, 63, 64, 65, 66, 67, 68, 69, 70],
 [71, 72, 73, 74, 75, 76, 77, 78, 79, 80],
 [81, 82, 83, 84, 85, 86, 87, 88, 89, 90],
 [91, 92, 93, 94, 95, 96, 97, 98, 99]]
Directly from the (old) Python documentation (recipes for itertools):

The current version, as suggested by J.F.Sebastian:

#from itertools import izip_longest as zip_longest # for Python 2.x
from itertools import zip_longest # for Python 3.x
#from six.moves import zip_longest # for both (uses the six compat library)

def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)
I guess Guido's time machine works, worked, will work, will have worked, was working again.

These solutions work because [iter(iterable)]*n (or the equivalent in the earlier version) creates one iterator, repeated n times in the list. izip_longest then effectively performs a round-robin over "each" iterator; but because it's the same iterator, it is advanced by each such call, so each round of the zip produces one tuple of n items.
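
To see the mechanism in isolation (Python 3 shown), zipping n references to one and the same iterator walks through it in groups of n:

>>> it = iter(range(9))
>>> list(zip(it, it, it))
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]
>>> list(zip(*[iter(range(9))] * 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]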

def split_seq(seq, num_pieces):
    start = 0
    for i in xrange(num_pieces):
        stop = start + len(seq[i::num_pieces])
        yield seq[start:stop]
        start = stop
Usage:

seq = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for seq in split_seq(seq, 3):
    print seq

If you want something super simple:

def chunks(l, n):
    n = max(1, n)
    return (l[i:i+n] for i in range(0, len(l), n))
In the case of Python 2.x, substitute xrange() for range(). And here's a version that doesn't call len(), which is useful for large lists:

def splitter(l, n):
    i = 0
    chunk = l[:n]
    while chunk:
        yield chunk
        i += n
        chunk = l[i:i+n]
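
For example:

>>> list(splitter([1, 2, 3, 4, 5], 2))
[[1, 2], [3, 4], [5]]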
And this one is for iterables:

from itertools import islice, repeat, takewhile

def isplitter(l, n):
    l = iter(l)
    chunk = list(islice(l, n))
    while chunk:
        yield chunk
        chunk = list(islice(l, n))
A functional flavor of the above:

def isplitter2(l, n):
    return takewhile(bool,
                     (tuple(islice(start, n))
                            for start in repeat(iter(l))))
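
This works because repeat(iter(l)) yields the very same iterator object over and over, so each tuple(islice(start, n)) call consumes the next n items, and takewhile(bool, ...) stops at the first empty tuple. For example:

>>> list(isplitter2([1, 2, 3, 4, 5], 2))
[(1, 2), (3, 4), (5,)]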
Simple yet elegant:

L = range(1, 1000)
print [L[x:x+10] for x in xrange(0, len(L), 10)]
Or, if you prefer:

def chunks(L, n): return [L[x: x+n] for x in xrange(0, len(L), n)]
chunks(L, 10)

For example, if your chunk size is 3, you could do:

zip(*[iterable[i::3] for i in range(3)]) 
Source:

I would use this when my chunk size is a fixed number I can type, e.g. '3', and it will never change.
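
Keep in mind that zip() stops at the shortest of the slices, so when the length isn't a multiple of 3 the leftover elements are silently dropped:

>>> list(zip(*[list(range(10))[i::3] for i in range(3)]))
[(0, 1, 2), (3, 4, 5), (6, 7, 8)]

Here the trailing 9 is discarded.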

Consider using matplotlib.cbook pieces.

For example:

import numpy as np
import matplotlib.cbook as cbook
segments = cbook.pieces(np.arange(20), 3)
for s in segments:
     print s
def chunks(iterable, n):
    """Assumes n is an integer > 0."""
    iterable = iter(iterable)
    while True:
        result = []
        for i in range(n):
            try:
                a = next(iterable)
            except StopIteration:
                break
            else:
                result.append(a)
        if result:
            yield result
        else:
            break

g1 = (i*i for i in range(10))
g2 = chunks(g1, 3)
print g2
# prints something like <generator object chunks at 0x...>
print list(g2)
'[[0, 1, 4], [9, 16, 25], [36, 49, 64], [81]]'
See:


Python 3

I know this is kind of old, but nobody has mentioned it yet:


I'm quite fond of the Python docs version proposed by tzot and J.F.Sebastian, but it has two shortcomings:

  • it is not very explicit
  • I usually don't want a fill value in the last chunk
I use this one a lot in my code:

from itertools import islice

def chunks(n, iterable):
    iterable = iter(iterable)
    while True:
        yield tuple(islice(iterable, n)) or iterable.next()  # Python 2: when the tuple is empty, .next() raises StopIteration and ends the generator
UPDATE: And a lazy-chunks version:

from itertools import chain, islice

def chunks(n, iterable):
   iterable = iter(iterable)
   while True:
       yield chain([next(iterable)], islice(iterable, n-1))  # on Python 3.7+ (PEP 479), wrap the next() in try/except StopIteration: return
The toolz library has a partition function for this:

from toolz.itertoolz.core import partition

list(partition(2, [1, 2, 3, 4]))
[(1, 2), (3, 4)]
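
If I recall the toolz API correctly, partition drops a trailing incomplete group by default, and partition_all keeps it, e.g.:

from toolz import partition_all

list(partition_all(2, [1, 2, 3, 4, 5]))
# [(1, 2), (3, 4), (5,)]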
How do you split a list into evenly sized chunks?

"Evenly sized chunks", to me, implies that they are all the same length, or, barring that option, at minimal variance in length. For example, 5 baskets for 21 items could have the following results:

>>> import statistics
>>> statistics.variance([5,5,5,5,1]) 
3.2
>>> statistics.variance([5,4,4,4,4]) 
0.19999999999999998
A practical reason to prefer the latter result: if you were using these functions to distribute work, you've built in the prospect of one worker likely finishing well before the others, so it would sit around doing nothing while the others kept working hard.

Critique of other answers here: when I originally wrote this answer, none of the other answers produced evenly sized chunks; they all leave a runt chunk at the end, so they're not well balanced and have a greater-than-necessary variance in length.

For example, the current top answer ends with:

[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
Others, like list(grouper(3, range(7))) and chunk(range(7), 3), both return [(0, 1, 2), (3, 4, 5), (6, None, None)]. The None values are just padding, and rather inelegant in my opinion. They are not evenly chunking the iterables.

Why can't we divide these more evenly?

Cycle solution: here's a high-level balanced solution using itertools.cycle, which is how I might do it today. Here's the setup:

from itertools import cycle

items = range(10, 75)
number_of_baskets = 10
Now we need the lists into which to distribute the elements:

baskets = [[] for _ in range(number_of_baskets)]
Finally, we zip the elements we are going to allot together with a cycle of the baskets until we run out of elements, which, semantically, is exactly what we want:

for element, basket in zip(items, cycle(baskets)):
    basket.append(element)
Here's the result:

>>> from pprint import pprint
>>> pprint(baskets)
[[10, 20, 30, 40, 50, 60, 70],
 [11, 21, 31, 41, 51, 61, 71],
 [12, 22, 32, 42, 52, 62, 72],
 [13, 23, 33, 43, 53, 63, 73],
 [14, 24, 34, 44, 54, 64, 74],
 [15, 25, 35, 45, 55, 65],
 [16, 26, 36, 46, 56, 66],
 [17, 27, 37, 47, 57, 67],
 [18, 28, 38, 48, 58, 68],
 [19, 29, 39, 49, 59, 69]]
To productionize this solution, we write a function and provide type annotations:

from itertools import cycle
from typing import List, Any

def cycle_baskets(items: List[Any], maxbaskets: int) -> List[List[Any]]:
    baskets = [[] for _ in range(min(maxbaskets, len(items)))]
    for item, basket in zip(items, cycle(baskets)):
        basket.append(item)
    return baskets
In the above, we take our list of items and the maximum number of baskets. We create a list of empty lists, into which each element is appended in a round-robin fashion.

Slices: another elegant solution is to use slices, specifically the less commonly used step argument of slices. That is:

start = 0
stop = None
step = number_of_baskets

first_basket = items[start:stop:step]
This is especially elegant in that slices don't care how long the data is: the result, our first basket, is only as long as it needs to be. We only need to increment the starting point for each basket.
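
For instance, with the same items and ten baskets as above, the first two baskets are just:

>>> items = list(range(10, 75))
>>> items[0::10]
[10, 20, 30, 40, 50, 60, 70]
>>> items[1::10]
[11, 21, 31, 41, 51, 61, 71]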

In fact, this could be a one-liner, but for readability we'll spell it out as a function:
from typing import List, Any

def slice_baskets(items: List[Any], maxbaskets: int) -> List[List[Any]]:
    n_baskets = min(maxbaskets, len(items))
    return [items[i::n_baskets] for i in range(n_baskets)]
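
The same stepping can also be done lazily with islice from itertools. A minimal sketch of the yield_islice_baskets helper used in the comparison below, assuming it simply yields one islice per basket with the same start/step pattern:

from itertools import islice
from typing import Any, Generator, List

def yield_islice_baskets(items: List[Any], maxbaskets: int) -> Generator[Any, None, None]:
    # assumed reconstruction: lazily yield one islice view per basket,
    # starting at i and stepping by the basket count
    n_baskets = min(maxbaskets, len(items))
    for i in range(n_baskets):
        yield islice(items, i, None, n_baskets)

Comparing the three approaches: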
from pprint import pprint

items = list(range(10, 75))
pprint(cycle_baskets(items, 10))
pprint(slice_baskets(items, 10))
pprint([list(s) for s in yield_islice_baskets(items, 10)])
def baskets_from(items, maxbaskets=25):
    baskets = [[] for _ in range(maxbaskets)]
    for i, item in enumerate(items):
        baskets[i % maxbaskets].append(item)
    return filter(None, baskets)  # Python 2: filter() returns a list here; on Python 3, wrap in list(...)
def iter_baskets_from(items, maxbaskets=3):
    '''generates evenly balanced baskets from indexable iterable'''
    item_count = len(items)
    baskets = min(item_count, maxbaskets)
    for x_i in range(baskets):
        yield [items[y_i] for y_i in range(x_i, item_count, baskets)]
    
def iter_baskets_contiguous(items, maxbaskets=3, item_count=None):
    '''
    generates balanced baskets from iterable, contiguous contents
    provide item_count if providing a iterator that doesn't support len()
    '''
    item_count = item_count or len(items)
    baskets = min(item_count, maxbaskets)
    items = iter(items)
    floor = item_count // baskets 
    ceiling = floor + 1
    stepdown = item_count % baskets
    for x_i in range(baskets):
        length = ceiling if x_i < stepdown else floor
        yield [items.next() for _ in range(length)]  # Python 2; use next(items) on Python 3
print(baskets_from(range(6), 8))
print(list(iter_baskets_from(range(6), 8)))
print(list(iter_baskets_contiguous(range(6), 8)))
print(baskets_from(range(22), 8))
print(list(iter_baskets_from(range(22), 8)))
print(list(iter_baskets_contiguous(range(22), 8)))
print(baskets_from('ABCDEFG', 3))
print(list(iter_baskets_from('ABCDEFG', 3)))
print(list(iter_baskets_contiguous('ABCDEFG', 3)))
print(baskets_from(range(26), 5))
print(list(iter_baskets_from(range(26), 5)))
print(list(iter_baskets_contiguous(range(26), 5)))
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 19], [20, 21]]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'B', 'C'], ['D', 'E'], ['F', 'G']]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25]]
from itertools import islice

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
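
This relies on the two-argument form of iter(): iter(callable, sentinel) keeps calling the callable and stops as soon as it returns the sentinel, here the empty tuple produced once the underlying iterator is exhausted. The same form in isolation:

>>> nums = iter([1, 2, 3])
>>> list(iter(lambda: tuple(islice(nums, 2)), ()))
[(1, 2), (3,)]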
from itertools import islice, chain, repeat

def chunk_pad(it, size, padval=None):
    it = chain(iter(it), repeat(padval))
    return iter(lambda: tuple(islice(it, size)), (padval,) * size)
>>> list(chunk_pad(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk_pad(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
_no_padding = object()

def chunk(it, size, padval=_no_padding):
    if padval == _no_padding:
        it = iter(it)
        sentinel = ()
    else:
        it = chain(iter(it), repeat(padval))
        sentinel = (padval,) * size
    return iter(lambda: tuple(islice(it, size)), sentinel)
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
>>> list(chunk(range(14), 3, None))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
_no_padding = object()

def chunk(it, size, padval=_no_padding):
    # pads by checking each chunk's length rather than comparing against a sentinel tuple,
    # so it also behaves correctly when padval (or an empty tuple) appears in the data itself
    it = iter(it)
    chunker = iter(lambda: tuple(islice(it, size)), ())
    if padval == _no_padding:
        yield from chunker
    else:
        for ch in chunker:
            yield ch if len(ch) == size else ch + (padval,) * (size - len(ch))
>>> list(chunk([1, 2, (), (), 5], 2))
[(1, 2), ((), ()), (5,)]
>>> list(chunk([1, 2, None, None, 5], 2, None))
[(1, 2), (None, None), (5, None)]
def chunkList(initialList, chunkSize):
    """
    This function chunks a list into sub lists 
    that have a length equals to chunkSize.

    Example:
    lst = [3, 4, 9, 7, 1, 1, 2, 3]
    print(chunkList(lst, 3)) 
    returns
    [[3, 4, 9], [7, 1, 1], [2, 3]]
    """
    finalList = []
    for i in range(0, len(initialList), chunkSize):
        finalList.append(initialList[i:i+chunkSize])
    return finalList
from itertools import zip_longest

a = range(1, 16)
i = iter(a)
r = list(zip_longest(i, i, i))
>>> print(r)
[(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)]
With a = range(1, 15), a length that isn't a multiple of 3, the last tuple is padded with None instead:

[(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, None)]
def split_list(the_list, chunk_size):
    result_list = []
    while the_list:
        result_list.append(the_list[:chunk_size])
        the_list = the_list[chunk_size:]
    return result_list

a_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

print split_list(a_list, 3)
[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
CHUNK = 4
[a[i*CHUNK:(i+1)*CHUNK] for i in xrange((len(a) + CHUNK - 1) / CHUNK )]  # ceiling division; on Python 3 use range() and // instead
def chunks(li, n):
    if li == []:
        return
    yield li[:n]
    for e in chunks(li[n:], n):
        yield e
def chunks(li, n):
    if li == []:
        return
    yield li[:n]
    yield from chunks(li[n:], n)
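
Both versions behave the same; for example:

>>> list(chunks([1, 2, 3, 4, 5, 6, 7], 3))
[[1, 2, 3], [4, 5, 6], [7]]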
def dec(gen):
    def new_gen(li, n):
        for e in gen(li, n):
            if e == []:
                return
            yield e
    return new_gen

@dec
def chunks(li, n):
    yield li[:n]
    for e in chunks(li[n:], n):
        yield e
[AA[i:i+SS] for i in range(len(AA))[::SS]]
>>> AA=range(10,21);SS=3
>>> [AA[i:i+SS] for i in range(len(AA))[::SS]]
[[10, 11, 12], [13, 14, 15], [16, 17, 18], [19, 20]]
# or [range(10, 13), range(13, 16), range(16, 19), range(19, 21)] in py3
from boltons import iterutils

list(iterutils.chunked_iter(list(range(50)), 11))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
 [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
 [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32],
 [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43],
 [44, 45, 46, 47, 48, 49]]
>>> from utilspie import iterutils
>>> a = [1, 2, 3, 4, 5, 6, 7, 8, 9]

>>> list(iterutils.get_chunks(a, 5))
[[1, 2, 3, 4, 5], [6, 7, 8, 9]]
sudo pip install utilspie
import time
batch_size = 7
arr_len = 298937

#---------slice-------------

print("\r\nslice")
start = time.time()
arr = [i for i in range(0, arr_len)]
while True:
    if not arr:
        break

    tmp = arr[0:batch_size]
    arr = arr[batch_size:-1]  # note: the -1 also drops the last element each pass; arr[batch_size:] would keep it
print(time.time() - start)

#-----------index-----------

print("\r\nindex")
arr = [i for i in range(0, arr_len)]
start = time.time()
for i in range(0, round(len(arr) / batch_size + 1)):
    tmp = arr[batch_size * i : batch_size * (i + 1)]
print(time.time() - start)

#----------batches 1------------

def batch(iterable, n=1):
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]

print("\r\nbatches 1")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in batch(arr, batch_size):
    tmp = x
print(time.time() - start)

#----------batches 2------------

from itertools import islice, chain

def batch(iterable, size):
    sourceiter = iter(iterable)
    while True:
        batchiter = islice(sourceiter, size)
        yield chain([next(batchiter)], batchiter)  # on Python 3.7+ (PEP 479), the bare next() needs try/except StopIteration: return


print("\r\nbatches 2")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in batch(arr, batch_size):
    tmp = x
print(time.time() - start)

#---------chunks-------------
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]
print("\r\nchunks")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in chunks(arr, batch_size):
    tmp = x
print(time.time() - start)

#-----------grouper-----------

from itertools import zip_longest # for Python 3.x
#from six.moves import zip_longest # for both (uses the six compat library)

def grouper(iterable, n, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)

arr = [i for i in range(0, arr_len)]
print("\r\ngrouper")
start = time.time()
for x in grouper(arr, batch_size):
    tmp = x
print(time.time() - start)
slice
31.18285083770752

index
0.02184295654296875

batches 1
0.03503894805908203

batches 2
0.22681021690368652

chunks
0.019841909408569336

grouper
0.006506919860839844
import itertools as it
import collections as ct

import more_itertools as mit


iterable = range(11)
n = 3
list(it.zip_longest(*[iter(iterable)] * n))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]
d = {}
for i, x in enumerate(iterable):
    d.setdefault(i//n, []).append(x)

list(d.values())
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
dd = ct.defaultdict(list)
for i, x in enumerate(iterable):
    dd[i//n].append(x)

list(dd.values())
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
list(mit.chunked(iterable, n))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]

list(mit.sliced(iterable, n))
# [range(0, 3), range(3, 6), range(6, 9), range(9, 11)]

list(mit.grouper(n, iterable))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]

list(mit.windowed(iterable, len(iterable)//n, step=n))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]