Pandas DataFrame: filter out values above a per-group cutoff with groupby
I have a very long CSV covering several days, with a reading every 5 seconds from multiple channels (the channel list repeats in each 5-second block). The format looks like this:
IoT Channel Datetime [ Other fields ] calculated_value
Chan1 01/01/2020 01:00:00 [...] 1.50203
Chan2 01/01/2020 01:00:00 [...] 0.80203
Chan3 01/01/2020 01:00:00 [...] 4.23232
...
ChanN 01/01/2020 01:00:00 [...] 2.32123
Chan1 01/01/2020 01:00:05 [...] 1.23232
Chan2 01/01/2020 01:00:05 [...] 0.23234
Chan3 01/01/2020 01:00:05 [...] 3.12312
...
ChanN 01/01/2020 01:00:05 [...] 5.12321
Chan1 01/01/2020 01:00:10 [...] 1.12312
Chan2 01/01/2020 01:00:10 [...] 0.99232
Chan3 01/01/2020 01:00:10 [...] 5.23323
...
ChanN 01/01/2020 01:00:10 [...] 2.00012
Chan1 01/01/2020 01:00:15 [...] 1.55552
Chan2 01/01/2020 01:00:15 [...] 0.77874
Chan3 01/01/2020 01:00:15 [...] 4.23232
...
ChanN 01/01/2020 01:00:15 [...] 2.32123
The problem is that we have some spikes disrupting the analysis: they are orders of magnitude larger than the mean, and they skew both the calculations and the charts.
I would like to take each channel's mean and then filter out everything above twice that mean. That should get rid of our spikes.
However, I don't know how to do this; my pandas knowledge doesn't stretch that far. I can filter the whole DataFrame against a single value, but here I need to filter each channel's values against twice that channel's own mean. How can I do this?

You can create a new column holding each group's mean:
df = df.join(df.groupby('IoT Channel')['calculated_value'].mean(), on='IoT Channel', rsuffix='_mean')
and then filter out the spikes larger than twice the mean of calculated_value:
df_new = df.drop(df[(df['calculated_value'] > (2 * df['calculated_value_mean']))].index)
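As a variation on the same idea, groupby(...).transform('mean') broadcasts each group's mean back onto the original rows, so no join and no helper-column cleanup is needed. A minimal sketch with made-up sample values (not the asker's data):

```python
import pandas as pd

# Hypothetical sample: Chan1 has one obvious spike (100.0).
df = pd.DataFrame({
    'IoT Channel': ['Chan1'] * 4 + ['Chan2'] * 4,
    'calculated_value': [1.0, 1.1, 1.2, 100.0, 2.0, 2.1, 2.2, 2.3],
})

# transform('mean') returns a Series aligned with df, holding each
# row's own group mean, so the filter is a single boolean mask.
group_mean = df.groupby('IoT Channel')['calculated_value'].transform('mean')
filtered = df[df['calculated_value'] <= 2 * group_mean]

print(filtered)  # the 100.0 spike on Chan1 is dropped; everything else stays
```

This keeps the DataFrame shape intact (no merged columns to delete or rename afterwards).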
Here is how I would approach it:
import pandas as pd
from io import StringIO

df = pd.read_csv(StringIO("""IoT_Channel,Datetime,calculated_value
Chan1 , 01/01/2020 01:00:00 , 1.50203
Chan2 , 01/01/2020 01:00:00 , 0.80203
Chan3 , 01/01/2020 01:00:00 , 4.23232
ChanN , 01/01/2020 01:00:00 , 2.32123
Chan1 , 01/01/2020 01:00:05 , 1.23232
Chan2 , 01/01/2020 01:00:05 , 0.23234
Chan3 , 01/01/2020 01:00:05 , 3.12312
ChanN , 01/01/2020 01:00:05 , 5.12321
Chan1 , 01/01/2020 01:00:10 , 1.12312
Chan2 , 01/01/2020 01:00:10 , 0.99232
Chan3 , 01/01/2020 01:00:10 , 5.23323
ChanN , 01/01/2020 01:00:10 , 2.00012
Chan1 , 01/01/2020 01:00:15 , 1.55552
Chan2 , 01/01/2020 01:00:15 , 0.77874
Chan3 , 01/01/2020 01:00:15 , 4.23232
ChanN , 01/01/2020 01:00:15 , 2.32123"""))
df_median = df.groupby("IoT_Channel")['calculated_value'].median()
# merge median values
df = df.merge(df_median, left_on='IoT_Channel', right_index=True)
# filter
df = df[df.calculated_value_x < 2*df.calculated_value_y]
# drop the helper median column and restore the original column name
del df["calculated_value_y"]
df.rename(columns={'calculated_value_x': 'calculated_value'}, inplace=True)
This drops the values greater than twice the median. The median works better than the mean here, because the median is not pulled upward by the spikes themselves.
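To see why the median holds up better, here is a small illustrative sketch (numbers invented for the example): a spike can drag the mean up far enough that the spike itself slips under a 2x-mean cutoff, while the median stays put:

```python
import pandas as pd

# Four normal readings plus one spike at 25.0 (invented values).
vals = pd.Series([10.0, 10.0, 10.0, 10.0, 25.0])

# The spike inflates the mean to 13.0, so the 2x-mean cutoff (26.0)
# lets the spike through...
by_mean = vals[vals <= 2 * vals.mean()]

# ...while the median stays at 10.0, so the 2x-median cutoff (20.0)
# rejects the spike.
by_median = vals[vals <= 2 * vals.median()]

print(len(by_mean), len(by_median))  # prints "5 4"
```

The effect gets stronger the larger and more frequent the spikes are, since every spike pulls the mean (and thus the cutoff) further up.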