
How to select the first occurrence of repeated items in a list, in order, using plain Python or pandas/numpy/scipy


Suppose a list `series` has some repeated elements at several index positions. Is there a way to find the first occurrence of each repeated run of a number?

series = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]

The return value should be something like [5, 11, 17, 21], the indices of the first occurrences of the repeated runs [16,16], [16,16,16], [17,17], and [18,18].

In [3815]: s = pd.Series(series)

In [3816]: cond = (s == s.shift(-1))

In [3817]: cond.index[cond]
Out[3817]: Int64Index([5, 11, 12, 17, 21], dtype='int64')
Alternatively, with diff:

In [3828]: cond = s.diff(-1).eq(0)

In [3829]: cond.index[cond]
Out[3829]: Int64Index([5, 11, 12, 17, 21], dtype='int64')
For list output, use tolist:

In [3833]: cond.index[cond].tolist()
Out[3833]: [5, 11, 12, 17, 21]

Details

In [3823]: s.head(10)
Out[3823]:
0     2
1     3
2     7
3    10
4    11
5    16
6    16
7     9
8    11
9    12
dtype: int64

In [3824]: cond.head(10)
Out[3824]:
0    False
1    False
2    False
3    False
4    False
5     True
6    False
7    False
8    False
9    False
dtype: bool

np.diff and np.flatnonzero

This answer uses np.diff and tests where that difference is zero; at those points we know we have duplicates. np.flatnonzero gives the positions where the difference is zero. However, we only want the first position of each run of consecutive duplicates, so we apply np.diff once more to filter down to the first of each repeated run, this time using the result as a boolean mask:

d = np.flatnonzero(np.diff(series) == 0)
w = np.append(True, np.diff(d) > 1)
d[w]

array([ 5, 11, 17, 21])

np.flatnonzero

I think this is a better answer. We build a boolean array that is True where a value equals the next value but differs from the previous one, and use np.flatnonzero to give the positions of the True values.

I also find the symmetry of the answer appealing:

s = np.array(series)

np.flatnonzero(
    np.append(s[:-1] == s[1:], True) &
    np.append(True, s[1:] != s[:-1])
)

array([ 5, 11, 17, 21])

First create unique group ids via shift and cumsum, then build a mask of the first duplicate in each group and filter with it:
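The code block for this answer did not survive the scrape; the following is a sketch reconstructed from the jezrael benchmark function that appears further down the thread:

```python
import pandas as pd

series = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]
s = pd.Series(series)

# a new group id starts wherever the value differs from the previous one
g = s.shift(1).ne(s).cumsum()

# keep the first row of each group, restricted to groups that occur more than once
m = ~g.duplicated() & g.duplicated(keep=False)

result = m.index[m].tolist()
print(result)  # [5, 11, 17, 21]
```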




You can quite simply mimic Python's itertools.groupby and group adjacent duplicates together:

>>> import pandas
>>> s = pandas.Series([2, 3, 7, 10, 11, 16, 16, 9, 11, 12, 14, 16, 16, 16, 5, 7, 9, 17, 17, 4, 8, 18, 18])
>>> for _, group in s.groupby((s != s.shift()).cumsum()):
...     if len(group) > 1:
...         print(group.index[0])
5
11
17
21
Or as a list:

>>> [g.index[0] for _, g in s.groupby((s != s.shift()).cumsum()) if len(g) > 1]
[5, 11, 17, 21]
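The same grouping can also be done in plain Python with itertools.groupby itself (a sketch, tracking the start index of each run by hand):

```python
from itertools import groupby

series = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]

out = []
i = 0  # start index of the current run
for _, run in groupby(series):
    n = len(list(run))
    if n > 1:          # a run of duplicates: record where it starts
        out.append(i)
    i += n
print(out)  # [5, 11, 17, 21]
```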

Since we seem to be competing on speed, and nobody is going to beat Divakar/piRSquared without cheating around the pandas/numpy/scipy requirement, here is my numba solution:

from numba import jit
import numpy as np

@jit
def rpt_idx(s):
    out = []
    j = True  # True while we are not inside a run of duplicates
    for i in range(len(s) - 1):  # compare each element with its successor
        if s[i] == s[i + 1]:
            if j:
                out.append(i)  # first element of a new run
                j = False
        else:
            j = True
    return out

rpt_idx(series)
Out: array([ 5, 11, 17, 21])
Reaching for jit in such a trivial case is probably complete overkill, but it does give a significant speedup:

%timeit rpt_idx(series)
The slowest run took 10.50 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 1.99 µs per loop

%timeit divakar(series)
The slowest run took 7.73 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 12.5 µs per loop

series_ = np.tile(series,10000).tolist()

%timeit divakar(series_)
100 loops, best of 3: 20.1 ms per loop

%timeit rpt_idx(series_)
100 loops, best of 3: 5.84 ms per loop

Here's one focused on performance using array slicing, similar to @piRSquared's but without any appending/concatenation -

a = np.array(series)
out = np.flatnonzero((a[2:] == a[1:-1]) & (a[1:-1] != a[:-2]))+1
Sample run -

In [28]: a = np.array(series)

In [29]: np.flatnonzero((a[2:] == a[1:-1]) & (a[1:-1] != a[:-2]))+1
Out[29]: array([ 5, 11, 17, 21])
Runtime test (for the working solutions)

Approaches -

def piRSquared1(series):
    d = np.flatnonzero(np.diff(series) == 0)
    w = np.append(True, np.diff(d) > 1)
    return d[w].tolist()

def piRSquared2(series):
    s = np.array(series)
    return np.flatnonzero(
        np.append(s[:-1] == s[1:], True) &
        np.append(True, s[1:] != s[:-1])
    ).tolist()

def Zach(series):
    s = pd.Series(series)
    i = [g.index[0] for _, g in s.groupby((s != s.shift()).cumsum()) if len(g) > 1]
    return i

def jezrael(series):
    s = pd.Series(series)
    s1 = s.shift(1).ne(s).cumsum()
    m = ~s1.duplicated() & s1.duplicated(keep=False)
    s2 = m.index[m].tolist()
    return s2    

def divakar(series):
    a = np.array(series)
    x = a[1:-1]
    return (np.flatnonzero((a[2:] == x) & (x != a[:-2]))+1).tolist()
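Before timing, a quick sanity check that the five implementations agree on the sample input (the function bodies are repeated verbatim here so the snippet runs standalone):

```python
import numpy as np
import pandas as pd

def piRSquared1(series):
    d = np.flatnonzero(np.diff(series) == 0)
    w = np.append(True, np.diff(d) > 1)
    return d[w].tolist()

def piRSquared2(series):
    s = np.array(series)
    return np.flatnonzero(
        np.append(s[:-1] == s[1:], True) &
        np.append(True, s[1:] != s[:-1])
    ).tolist()

def Zach(series):
    s = pd.Series(series)
    return [g.index[0] for _, g in s.groupby((s != s.shift()).cumsum()) if len(g) > 1]

def jezrael(series):
    s = pd.Series(series)
    s1 = s.shift(1).ne(s).cumsum()
    m = ~s1.duplicated() & s1.duplicated(keep=False)
    return m.index[m].tolist()

def divakar(series):
    a = np.array(series)
    x = a[1:-1]
    return (np.flatnonzero((a[2:] == x) & (x != a[:-2])) + 1).tolist()

series0 = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]
results = [f(series0) for f in (piRSquared1, piRSquared2, Zach, jezrael, divakar)]
assert all(r == [5, 11, 17, 21] for r in results)
print("all agree:", results[0])
```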
For the setup, we simply tile the sample input many times.

Timings -

Case #1: Large set

In [34]: series0 = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]

In [35]: series = np.tile(series0,10000).tolist()

In [36]: %timeit piRSquared1(series)
    ...: %timeit piRSquared2(series)
    ...: %timeit Zach(series)
    ...: %timeit jezrael(series)
    ...: %timeit divakar(series)
    ...: 
100 loops, best of 3: 8.06 ms per loop
100 loops, best of 3: 7.79 ms per loop
1 loop, best of 3: 3.88 s per loop
10 loops, best of 3: 24.3 ms per loop
100 loops, best of 3: 7.97 ms per loop
Case #2: Much larger set (on the first two solutions)

In [40]: series = np.tile(series0,1000000).tolist()

In [41]: %timeit piRSquared2(series)
1 loop, best of 3: 823 ms per loop

In [42]: %timeit divakar(series)
1 loop, best of 3: 823 ms per loop

Now, the two solutions differ only in that the latter avoids appending. Let's take a closer look at them on a smaller dataset -

In [43]: series = np.tile(series0,100).tolist()

In [44]: %timeit piRSquared2(series)
10000 loops, best of 3: 89.4 µs per loop

In [45]: %timeit divakar(series)
10000 loops, best of 3: 82.8 µs per loop
So, this reveals that avoiding the concatenation/appending in the latter solution helps a lot on smaller datasets, while on much bigger datasets the two become comparable.

On bigger datasets, a marginal improvement is possible with a single concatenation, so the last step could be rewritten as:

np.flatnonzero(np.concatenate(([False],(a[2:] == a[1:-1]) & (a[1:-1] != a[:-2]))))
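A quick check, on the thread's sample input, that the concatenate form gives the same indices as the original slicing form:

```python
import numpy as np

series = [2,3,7,10,11,16,16,9,11,12,14,16,16,16,5,7,9,17,17,4,8,18,18]
a = np.array(series)

# original form: slice-compare, then shift the resulting indices by 1
base = np.flatnonzero((a[2:] == a[1:-1]) & (a[1:-1] != a[:-2])) + 1

# rewritten form: prepend a single False instead, so no final +1 is needed
alt = np.flatnonzero(np.concatenate(([False], (a[2:] == a[1:-1]) & (a[1:-1] != a[:-2]))))

print(base.tolist(), alt.tolist())  # both [5, 11, 17, 21]
```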

I think the OP is asking for the indices of consecutive duplicates.

@Zero fixed. Note (: needs to be [5,11,17,21]

I had tried this approach before I realized what the problem was. Suppose the repeated series is s = pd.Series([1,2,3,3,3]). If you shift by -1, then cond = (s == s.shift(-1)) has two True values, but the return value should only be index 2, since that is the first occurrence of the repeated run.

Not anymore :) It's faster too. Upvoted for admitting the cheating part.. and for some effort too :)

Hmm, microsecond benchmarks are never good. Are we measuring on a 20-element array? :)

Oh well, we're competing on speed now? Let me drag out numba :) See my answer below.

@DanielF I thought the OP was using pandas/numpy/scipy ;)

Who says cheaters never win? Thanks for the detailed walkthrough!

Clever use of the keep parameter of duplicated. Thanks.

Thanks for your help. I don't know why, but @jezrael's answer runs 3x faster even though he uses the cumsum() function!
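The shift(-1) pitfall described in the comments can be reproduced directly; a run of length 3 produces two hits instead of one:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 3, 3])
cond = s == s.shift(-1)          # True wherever a value equals its successor
hits = cond.index[cond].tolist()
print(hits)  # [2, 3]: two hits for one run, but only index 2 starts it
```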