Python: comparing DataFrame column values using an offset taken from another column
I have a DataFrame that looks like this:
 Time  InvInstance
    5            5
    8            4
    9            3
   19            2
   20            1
    3            3
    8            2
   13            1
The Time column is sorted, and the InvInstance value gives, for each row, the number of rows remaining until the end of the current Time block. I want to create another column that shows whether a crossing condition is met in the Time column. I can do it with a for loop like this:
import pandas as pd
import numpy as np

df = pd.read_csv("test.csv")
df["10mMark"] = 0
for i in range(1, len(df)):
    r = int(df.InvInstance.iloc[i])
    rprev = int(df.InvInstance.iloc[i - 1])
    m = df['Time'].iloc[i + r - 1] - df['Time'].iloc[i]
    mprev = df['Time'].iloc[i - 1 + rprev - 1] - df['Time'].iloc[i - 1]
    df["10mMark"].iloc[i] = np.where((m < 10) & (mprev >= 10), 1, 0)
To be precise: the Time column contains 2 sorted time blocks, and for each row the InvInstance value tells us the distance (in rows) to the end of its block. The question is whether the time difference between a row and the end of its block is less than 10 minutes, while for the previous row it was 10 minutes or more. Is it possible to do this without a loop (e.g. using shift()) so that it runs faster?

I don't know of a built-in vectorized Pandas/NumPy method for shifting a series/array by a non-scalar/vector step, but we can use Numba here:
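The `dyn_shift` helper used in the timings below is never defined on this page. As an assumed stand-in (the original answer presumably jit-compiled an equivalent loop with Numba), here is a plain-NumPy sketch of what it has to compute: for each row `i`, the value of the series at position `i + step[i] - 1`, i.e. the value at the end of the current block:

```python
import numpy as np
import pandas as pd

def dyn_shift(s, step):
    """Assumed reimplementation: return s[i + step[i] - 1] for every i,
    i.e. the value of s at the end of each row's block."""
    idx = np.arange(len(s)) + np.asarray(step) - 1
    return pd.Series(np.asarray(s)[idx])

df = pd.DataFrame({'Time':        [5, 8, 9, 19, 20, 3, 8, 13],
                   'InvInstance': [5, 4, 3, 2, 1, 3, 2, 1]})

# Same two masks as in the timed answer below:
mask1 = dyn_shift(df.Time.values, df.InvInstance.values) - df.Time < 10
mask2 = (dyn_shift(df.Time.values, df.InvInstance.values) - df.Time).shift() >= 10
df['10mMark'] = np.where(mask1 & mask2, 1, 0)
# df['10mMark'] is now [0, 0, 0, 1, 0, 0, 1, 0]
```

With fancy indexing the gather is already vectorized; a Numba-jitted loop over the same indices would avoid the temporary index array.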
Timings for an 8000-row DataFrame:
In [13]: df = pd.concat([df] * 10**3, ignore_index=True)
In [14]: df.shape
Out[14]: (8000, 3)
In [15]: %%timeit
    ...: df["10mMark"] = 0
    ...: for i in range(1,len(df)):
    ...:     r = int(df.InvInstance.iloc[i])
    ...:     rprev = int(df.InvInstance.iloc[i-1])
    ...:     m = df['Time'].iloc[i+r-1] - df['Time'].iloc[i]
    ...:     mprev = df['Time'].iloc[i-1+rprev-1] - df['Time'].iloc[i-1]
    ...:     df["10mMark"].iloc[i] = np.where((m < 10) & (mprev >= 10),1,0)
    ...:
3.06 s ± 109 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [16]: %%timeit
...: mask1 = dyn_shift(df.Time.values, df.InvInstance.values) - df.Time < 10
...: mask2 = (dyn_shift(df.Time.values, df.InvInstance.values) - df.Time).shift() >= 10
...: df['10mMark'] = np.where(mask1 & mask2,1,0)
...:
1.02 ms ± 21.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [17]: 3.06 * 1000 / 1.02
Out[17]: 3000.0
Actually, your m is the time difference between a row's Time and the Time at the end of its "block", and mprev is the same quantity for the previous row (so it is really a shift of m). My idea is to create a column containing the Time at the end of each block: first identify each block, then groupby on the block and merge its last Time back in. Then compute the difference to create the column m, and finally use np.where together with shift to fill the 10mMark column:
# a column with an incremental value at each block end
df['block'] = df.InvInstance[df.InvInstance == 1].cumsum()
# back-fill so every row of a block carries the same block number
df['block'] = df['block'].bfill()
# merge to create a column Time_last with the Time at the end of the block
df = df.merge(df.groupby('block', as_index=False)['Time'].last(),
              on='block', suffixes=('', '_last'), how='left')
# create column m: the time remaining until the end of the block
df['m'] = df['Time_last'] - df['Time']
# np.where and shift on this column give the 10mMark column
df['10mMark'] = np.where((df['m'] < 10) & (df['m'].shift() >= 10), 1, 0)
# drop the helper columns
df = df.drop(['block', 'Time_last', 'm'], axis=1)
where the 10mMark column has the expected result. It is not as efficient as @MaxU's solution using Numba, but with the same 8000-row df he used, my speedup factor over the loop is still about 350.

Comments:
- Maybe it will come more easily to someone else. But could you clarify the logic in words? – AntonvBR
- @AntonvBR someone else is not me. I added some wording :) – Ben.T
- @GurselKaracor do you have a column with block IDs or something similar? Or is there always a 1 in the InvInstance column on the last row of a block? – Ben.T
- @Ben.T there is no block ID in the original data, but it could easily be added. Actually, your answer is a good general solution as well as a solution to this question, right? – GurselKaracor
Output:

   Time  InvInstance  block  Time_last   m  10mMark
0     5            5    1.0         20  15        0
1     8            4    1.0         20  12        0
2     9            3    1.0         20  11        0
3    19            2    1.0         20   1        1
4    20            1    1.0         20   0        0
5     3            3    2.0         13  10        0
6     8            2    2.0         13   5        1
7    13            1    2.0         13   0        0
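As a side note (not from either answer above), the block-labelling, bfill and merge steps can also be sketched in one pass with `groupby(...).transform('last')`, assuming the only block marker available is the 1 in InvInstance on the last row of each block:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Time':        [5, 8, 9, 19, 20, 3, 8, 13],
                   'InvInstance': [5, 4, 3, 2, 1, 3, 2, 1]})

# A 1 in InvInstance ends a block, so counting the block ends seen
# *before* each row labels the blocks; the shift keeps each end row
# inside its own block.
block = df['InvInstance'].eq(1).cumsum().shift(fill_value=0)

# Broadcast the last Time of each block back onto every row of the block.
time_last = df.groupby(block)['Time'].transform('last')

# Same final step as the merge-based answer.
m = time_last - df['Time']
df['10mMark'] = np.where((m < 10) & (m.shift() >= 10), 1, 0)
```

This avoids the merge and the helper columns, so there is nothing to drop afterwards.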