Python pandas groupby with two keys


I have spent a whole afternoon on this and failed. I have a pandas DataFrame like this:

columns=[ka,kb_1,kb_2,timeofEvent,timeInterval]
0:'3M' '2345' '2345' '2014-10-5',3000
1:'3M' '2958' '2152' '2015-3-22',5000
2:'GE' '2183' '2183' '2012-12-31',515
3:'3M' '2958' '2958' '2015-3-10',395
4:'GE' '2183' '2285' '2015-4-19',1925
5:'GE' '2598' '2598' '2015-3-17',1915
What I want to get is a new DataFrame grouped by 'ka' and 'kb_1'.

(Definition of an error record: when kb_1 != kb_2, the corresponding record is considered an error record.)

My code is like this:

df['isError'] = (df['kb_1'] != df['kb_2']).astype('int')
grouped2 = df.groupby(['ka', 'kb_1'])

df_rst = pd.DataFrame()
df_rst['ka']  =grouped2['ka'].all()
df_rst['kb_1'] = grouped2['kb_1'].all()
df_rst['errorNum'] = grouped2['isError'].transform(sum)
df_rst['totalNum of records'] = grouped2.size()
df_rst['Soll_neq_Letzt_error_rate'] = df_rst['errorNum'].astype('float').div(df_rst['totalNum'].astype('float'), axis='index')
df_rst.to_csv('rst.csv',index=False)
But the result is not what I want.

For example, the column kb_1 becomes True/False and errorNum becomes NaN.
Can someone explain why and give a working implementation? Thanks.
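
For reference, a minimal sketch of what likely goes wrong, assuming just the key columns from the sample above: all() reduces each group of (truthy) strings to a single boolean, and transform(sum) returns a Series aligned to the original row index rather than to the group index, so it cannot be aligned with the group-indexed df_rst.

import pandas as pd

# Minimal reconstruction of the relevant columns from the sample data above.
df = pd.DataFrame({'ka':   ['3M', '3M', 'GE', '3M', 'GE', 'GE'],
                   'kb_1': ['2345', '2958', '2183', '2958', '2183', '2598'],
                   'kb_2': ['2345', '2152', '2183', '2958', '2285', '2598']})
df['isError'] = (df['kb_1'] != df['kb_2']).astype(int)
grouped2 = df.groupby(['ka', 'kb_1'])

# Non-empty strings are truthy, so all() collapses each group to True/False.
print(grouped2['kb_1'].all())

# transform(sum) keeps the original 0..5 row index, not the (ka, kb_1) group
# index, which is why assigning it into df_rst produces NaN.
print(grouped2['isError'].transform(sum).index)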

I'm not sure exactly what you did, but I think you're not far off:

df2 = df.groupby(['ka','kb_1'])['isError'].agg({ 'errorNum':  'sum',
                                                 'recordNum': 'count' })

df2['errorRate'] = df2['errorNum'] / df2['recordNum']

         recordNum  errorNum  errorRate
ka kb_1                                
3M 2345          1         0        0.0
   2958          2         1        0.5
GE 2183          2         1        0.5
   2598          1         0        0.0
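
Note that on newer pandas versions the dict-renaming form of agg used above no longer works (it raises a SpecificationError). If you are on pandas 0.25 or later, named aggregation is the equivalent; a minimal sketch assuming the same df and isError column:

# Same grouping and counts as above, written with named aggregation.
df2 = df.groupby(['ka', 'kb_1'])['isError'].agg(errorNum='sum', recordNum='count')
df2['errorRate'] = df2['errorNum'] / df2['recordNum']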

Please show sample data and the desired result. Saying the result is not what you want doesn't really tell us what you do want.
Thanks, I just added the input and the desired output.
Wow, thanks, I can't believe it only takes two lines; I was still thinking about joining tables.
Glad this helped; if you're reasonably happy with the answer, remember to click the checkmark.
Done. I ran into another difficulty implementing this, because the csv I'm trying to process is too large to fit into memory, see
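
For the out-of-memory follow-up, a minimal sketch, assuming pandas 0.25+ and a hypothetical file name and chunk size: read the csv in chunks, compute the per-group error sums and counts for each chunk, then combine the partial results.

import pandas as pd

partials = []
# 'big_input.csv' and the chunk size are placeholders for the real file.
for chunk in pd.read_csv('big_input.csv', chunksize=100_000):
    chunk['isError'] = (chunk['kb_1'] != chunk['kb_2']).astype(int)
    partials.append(
        chunk.groupby(['ka', 'kb_1'])['isError'].agg(errorNum='sum', recordNum='count')
    )

# Partial sums/counts from different chunks may share (ka, kb_1) groups,
# so re-group and sum them before computing the rate.
rst = pd.concat(partials).groupby(level=['ka', 'kb_1']).sum()
rst['errorRate'] = rst['errorNum'] / rst['recordNum']
rst.to_csv('rst.csv')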