Python: aggregate and replace rows in a table


I have a dataframe with the following structure:

event_timestamp      message_number  an_robot     check
2015-04-15 12:09:39  10125            robot_7     False
2015-04-15 12:09:41  10053            robot_4     True
2015-04-15 12:09:44  10156_ad         robot_7     True
2015-04-15 12:09:47  20205            robot_108   False
2015-04-15 12:09:51  10010            robot_38    True
2015-04-15 12:09:54  10012            robot_65    True
2015-04-15 12:09:59  10011            robot_39    True
2015-04-15 12:10:01  87954            robot_2     False
......etc
The check column indicates whether rows should be combined, and the columns should be merged as follows:

event timestamp: first
 message number: combine (e.g., 10053,10156)
       an_robot: combine (e.g., robot_4, robot_7)
          check: can be removed after the operation.
So far I have managed to use groupby to get the right values for the True and False values in the check column:

df.groupby(by='check').agg({'event_timestamp':'first',
                            'message_number':lambda x: ','.join(x),
                            'an_robot':lambda x: ','.join(x)}).reset_index()
Which outputs:

     check    event_timestamp        message_number         an_robot
0    False    2015-04-15 12:09:39    10125,10053,..,87954   robot_7,robot_4, ... etc
1    True     2015-04-15 12:09:51    10010,10012            robot_38,robot_65
However, ideally the end result would combine the 10053 and 10156_ad rows into one row, and combine the 10010, 10012 and 10011 rows into one row. In the full dataframe the maximum length of a sequence is 5. I have a separate dataframe that contains these rules (such as the 10010, 10012, 10011 rule).

How can I achieve this?

--EDIT--

The dataset with the separate rules looks like this:

sequence             support
10053,10156,20205    0.94783
10010,10012          0.93322
10010,10033          0.93211
10053,10032          0.92222
etc....
The code that determines when a row in the check column is True or False:

def find_drops(seq, df):
    # Mark every row that belongs to a consecutive run matching `seq`.
    if seq:
        m = np.logical_and.reduce([df.message_number.shift(-i).eq(seq[i]) for i in range(len(seq))])
        if len(seq) == 1:
            return pd.Series(m, index=df.index)
        else:
            # forward-fill the match onto the following len(seq)-1 rows of the run
            return pd.Series(m, index=df.index).replace({False: np.NaN}).ffill(limit=len(seq)-1).fillna(False)
    else:
        return pd.Series(False, index=df.index)
If I then run df['check'] = find_drops(['10010','10012','10011'], df), I get a check column with True for those rows. It would be great if this could be run for every rule in the rules dataframe, and the matching rows were then merged with the code above.
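A minimal sketch of that idea, using the find_drops function above: run it once per rule and mark a row as True if any rule matches it. The name rules is my placeholder for the rules dataframe shown above (columns sequence and support).

import pandas as pd

# `rules` is assumed to be the rules dataframe shown above
checks = [find_drops(seq.split(','), df) for seq in rules['sequence']]
df['check'] = pd.concat(checks, axis=1).any(axis=1)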

--NEW CODE 4-17-2019--

The output is:

event_timestamp      message_number           an_robot
2015-04-15 12:09:39  10125,45689,98765,12345  robot_7,robot_23,robot_99
2015-04-15 12:09:41  10053,10156_ad,20205     robot_4,robot_7,robot_108
2015-04-15 12:09:51  10010,10012              robot_38,robot_65
2015-04-15 12:09:59  10011,87954              robot_39,robot_2
It should be:

event_timestamp      message_number        an_robot
2015-04-15 12:09:39  10125                 robot_7
2015-04-15 12:09:41  10053,10156_ad,20205  robot_4,robot_7,robot_108
2015-04-15 12:09:48  45689                 robot_23
2015-04-15 12:09:51  10010,10012           robot_38,robot_65
2015-04-15 12:09:58  98765                 robot_99
2015-04-15 12:09:59  10011,87954           robot_39,robot_2
2015-04-15 12:10:03  12345                 robot_1

You could classify the message numbers before grouping them. It is probably best to put these classification rules in a dataframe, one class per number:

class_df = pd.DataFrame(data={'message_number': ['10010','10012','10011','10053','10156_ad'],
                              'class': ['a','a','a','b','b']})
Then you can merge them:

results = pd.merge(df, class_df, on=['message_number'], how='left')
Then you can group by class and check:

results.groupby(by=['check','class']).agg({'event_timestamp':'first',
                                           'message_number': lambda x: ','.join(x),
                                           'an_robot': lambda x: ','.join(x)}).reset_index()
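For reference, a minimal end-to-end run of this classify-then-group idea on a few of the sample rows (a sketch: the data and class labels are taken from the examples above, the frame names are mine):

import pandas as pd

# a small slice of the sample data from the question
df = pd.DataFrame({
    'event_timestamp': ['2015-04-15 12:09:41', '2015-04-15 12:09:44',
                        '2015-04-15 12:09:51', '2015-04-15 12:09:54'],
    'message_number': ['10053', '10156_ad', '10010', '10012'],
    'an_robot': ['robot_4', 'robot_7', 'robot_38', 'robot_65'],
    'check': [True, True, True, True],
})

# one class per message number
class_df = pd.DataFrame({'message_number': ['10010', '10012', '10011', '10053', '10156_ad'],
                         'class': ['a', 'a', 'a', 'b', 'b']})

results = pd.merge(df, class_df, on=['message_number'], how='left')

out = (results.groupby(by=['check', 'class'])
              .agg({'event_timestamp': 'first',
                    'message_number': lambda x: ','.join(x),
                    'an_robot': lambda x: ','.join(x)})
              .reset_index())
print(out)
# class 'b' (10053,10156_ad) and class 'a' (10010,10012) each collapse to one row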

This question became more complicated, so the answer was reworked quite a bit.

The first step is preprocessing - filter the values that exist in the sequences, in order (the patterns/flatten block appears after the comments below):

The first solution was modified - a groupby was added for sequences with a length greater than 1, the function is called for every pattern, and the results are finally joined together with concat (the full first solution is shown after the comments below):

Your solution should be changed to create a helper column g, which is used for the grouping in the next step:

used_idx = []
c = ['event_timestamp','message_number','an_robot']
def find_drops(seq):
    if seq:
        # rows where the full sequence starts
        m = np.logical_and.reduce([df1.message_number.shift(-i).eq(seq[i]) for i in range(len(seq))])
        if len(seq) == 1:
            df2 = df1.loc[m, c].assign(g = df1.index[m])
            used_idx.extend(df2.index.tolist())
            return df2
        else:
            # extend each match onto the following len(seq)-1 rows
            m1 = (pd.Series(m, index=df1.index).replace({False: np.NaN})
                                               .ffill(limit=len(seq)-1)
                                               .fillna(False))
            df2 = df1.loc[m1, c]
            used_idx.extend(df2.index.tolist())
            # g marks the start row of every matched run, NaN for the continuation rows
            df2['g'] = np.where(df2.index.isin(df1.index[m]), df2.index, np.nan)
            return df2


out = (pd.concat([find_drops(x) for x in patterns])
        .assign(g = lambda x: x['g'].ffill())
        .groupby(by=['g']).agg({'event_timestamp':'first',
                                 'message_number':','.join, 
                                 'an_robot':','.join})
        .reset_index(drop=True))

print (used_idx)
Last, create a new dataframe from the rows that were not used (the False values) and concatenate it to the output:

print (out)
       event_timestamp        message_number                   an_robot
0  2015-04-15 12:09:41  10053,10156_ad,20205  robot_4,robot_7,robot_108
1  2015-04-15 12:09:51           10010,10012          robot_38,robot_65
2  2015-04-15 12:09:59           10011,87954           robot_39,robot_2

c = ['event_timestamp','message_number','an_robot']
df2 = pd.concat([out, df[~df.index.isin(used_idx)]]).sort_values('event_timestamp')
print(df2)
       event_timestamp        message_number                   an_robot
0  2015-04-15 12:09:39                 10125                    robot_7
0  2015-04-15 12:09:41  10053,10156_ad,20205  robot_4,robot_7,robot_108
4  2015-04-15 12:09:48                 45689                   robot_23
1  2015-04-15 12:09:51           10010,10012          robot_38,robot_65
7  2015-04-15 12:09:58                 98765                   robot_99
2  2015-04-15 12:09:59           10011,87954           robot_39,robot_2
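Since the question notes that check can be removed after the operation, an optional cleanup of the combined frame could look like this (my addition, not part of the original answer; it assumes the unmatched rows may still carry the check column):

# drop the leftover helper column and reset to a clean RangeIndex
df2 = df2.drop(columns='check', errors='ignore').reset_index(drop=True)
print(df2)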

Wow, this is really great! I have also added what the dataframe with the rules looks like. How do I convert those rules into the patterns variable you created? Also, does it still work for rules that share a common code (10010 is also part of rule 3 in the dataframe)? Oh wait, before you answer: I will upload the code that determines whether check is True or False, maybe we can combine those.

Both codes work great! How should the two solutions be changed into a version where the rows that are not in the patterns/sequences dataframe also end up in the final dataframe, as in my expected output? Because right now the final dataframe only contains the rows with the joined message_numbers and robots.

I tried the new code but got the error that the dataframe object has no attribute message_number. I changed the line at the start of the find_drops function to df.message_number, but that gives me the error: Wrong number of items passed 18, placement implies 4. @intStdu - edited the answer.
# Preprocessing: here df1 is first the rules dataframe (with the `sequence` column);
# the patterns are built from it and df is filtered down to the relevant rows.
patterns = df1['sequence'].str.split(',')
print (patterns)

#flatten lists to sets
flatten = set([y for x in patterns for y in x])
#print (flatten)

# from this point on df1 is the filtered data dataframe
df1 = df[df['message_number'].isin(flatten)]
#print (df1)
def rolling_window(a, window):
    # all sliding windows of length `window` over array a
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    c = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
    return c

used_idx = []

# First solution: match each pattern as a sliding window over message_number
# and aggregate every matched run of rows into one row.
def agg_pattern(seq):
    if seq:
        N = len(seq)
        arr = df1['message_number'].values
        b = np.all(rolling_window(arr, N) == seq, axis=1)
        c = np.mgrid[0:len(b)][b]

        # positions of all rows covered by a match
        d = [i  for x in c for i in range(x, x+N)]
        used_idx.extend(df1.index.values[d])
        m = np.in1d(np.arange(len(arr)), d)

        di = {'event_timestamp':'first','message_number':','.join, 'an_robot':','.join}

        if len(seq) == 1:
            return df1.loc[m, ['event_timestamp','message_number','an_robot']]
        else:
            df2 = df1[m]
            # every block of N consecutive matched rows becomes one output row
            return df2.groupby(np.arange(len(df2)) // N).agg(di)


out = pd.concat([agg_pattern(x) for x in patterns], ignore_index=True)
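The first solution fills used_idx as well, so the rows that never matched any pattern can be appended afterwards in the same way as shown for the second solution above (a sketch reusing that concat step):

# rows of the original df that were not consumed by any pattern
rest = df[~df.index.isin(used_idx)]
final = pd.concat([out, rest]).sort_values('event_timestamp').reset_index(drop=True)
print(final)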