
Python: group by and get the mean, min, and max values


I have a CSV dataset that looks like this:

Class,  Code,   Vendor, State,  NumberOfDays
3,      123,    Name1,  NE,     12.58402778
1,      876,    Name2,  TX,     12.51041667
3,      123,    Name1,  NE,     2.354166667
1,      876,    Name2,  TX,     12.21111111
3,      456,    Name2,  NY,     6.346527778
2,      876,    Name1,  NY,     5.513194444
3,      123,    Name1,  NE,     5.38125
1,      876,    Name2,  TX,     5.409722222
I have the following code:

df = pd.read_csv(r'C:\Python36\Data\testing\LowHighMean.csv')
df2 = df.groupby(['Class','Code','Vendor','State'])['NumberOfDays'].mean().apply(lambda x: '{:.2f}'.format(x))
df2.to_csv(r'C:\Python36\Data\testing\output.csv')
This gets the mean "NumberOfDays" by grouping on the other fields:

1,876,Name2,TX,10.04
2,876,Name1,NY,5.51
3,123,Name1,NE,6.77
3,456,Name2,NY,6.35
I can't seem to keep the headers, but that's not a big deal; I just add the headers in another step. The problem I'm trying to solve is adding columns that will give the lowest
min()
and highest
max()
values. I want to create the following:

Class,  Code,   Vendor, State,  AverageDays, LowestNumberOfDays, HighestNumberOfDays
1,      876,    Name2,  TX,     10.04,       5.41,               12.51
2,      876,    Name1,  NY,     5.51,        5.51,               5.51
3,      123,    Name1,  NE,     6.77,        2.35,               12.58
3,      456,    Name2,  NY,     6.35,        6.35,               6.35
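On the header point: in older pandas versions, Series.to_csv defaulted to header=False, which would explain the dropped headers; passing header=True keeps them. A minimal sketch using a few made-up rows mirroring the sample data:

```python
import pandas as pd

# A few rows mirroring the sample data above
df = pd.DataFrame({
    'Class': [3, 1, 3],
    'Code': [123, 876, 123],
    'Vendor': ['Name1', 'Name2', 'Name1'],
    'State': ['NE', 'TX', 'NE'],
    'NumberOfDays': [12.58402778, 12.51041667, 2.354166667],
})

df2 = df.groupby(['Class', 'Code', 'Vendor', 'State'])['NumberOfDays'].mean()

# header=True writes the index names and the series name as a header row;
# older pandas dropped them by default when calling to_csv on a Series.
csv_text = df2.to_csv(header=True)
print(csv_text.splitlines()[0])
```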
The starting data file is over 3 GB, with more than 30 million records. After the conversion, the file becomes much smaller. Because of the starting file's size, I am trying to figure out a way to avoid doing this in four separate steps: three separate steps/runs to get
mean()
max()
min()
, and then a fourth run to combine them. Since I'm a noob, I don't even know how to do this other than by setting up four sets of code and running against the file four times.
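On the size concern: rather than four separate runs, a single pass over the file can accumulate per-group sum/count/min/max and combine the partials at the end, because a mean can be rebuilt from sums and counts (averaging chunk means directly would be wrong). This is a sketch; the path and chunk size are placeholders:

```python
import pandas as pd

keys = ['Class', 'Code', 'Vendor', 'State']

def aggregate_in_chunks(path, chunksize=1_000_000):
    """One pass over a large CSV: per-chunk groupby collects
    sum/count/min/max, then the partials are combined at the end."""
    partials = []
    for chunk in pd.read_csv(path, chunksize=chunksize):
        partials.append(
            chunk.groupby(keys)['NumberOfDays']
                 .agg(['sum', 'count', 'min', 'max'])
        )
    # A group split across chunks contributes several partial rows;
    # sums and counts add up, while min/max reduce with min/max.
    combined = pd.concat(partials).groupby(level=keys).agg(
        {'sum': 'sum', 'count': 'sum', 'min': 'min', 'max': 'max'})
    combined['AverageDays'] = combined['sum'] / combined['count']
    return (combined
            .rename(columns={'min': 'LowestNumberOfDays',
                             'max': 'HighestNumberOfDays'})
            [['AverageDays', 'LowestNumberOfDays', 'HighestNumberOfDays']]
            .reset_index())
```

The chunk size is a memory/speed trade-off; the combined partials stay small because only one row per group per chunk is kept.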

Use agg, and then it is necessary to rename the columns:

d = {'mean':'AverageDays','min':'LowestNumberOfDays','max':'HighestNumberOfDays'}
df = (df.groupby(['Class','Code','Vendor','State'])['NumberOfDays']
        .agg(['mean','min','max'])
        .rename(columns=d)
        .reset_index())
print (df)
   Class  Code Vendor State  AverageDays  LowestNumberOfDays  \
0      1   876  Name2    TX    10.043750            5.409722   
1      2   876  Name1    NY     5.513194            5.513194   
2      3   123  Name1    NE     6.773148            2.354167   
3      3   456  Name2    NY     6.346528            6.346528   

   HighestNumberOfDays  
0            12.510417  
1             5.513194  
2            12.584028  
3             6.346528  
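To also match the two-decimal formatting of the earlier output without losing precision in memory, float_format can be passed to to_csv. A sketch with a small made-up frame:

```python
import pandas as pd

d = {'mean': 'AverageDays',
     'min': 'LowestNumberOfDays',
     'max': 'HighestNumberOfDays'}

# Made-up rows mirroring the sample data
df = pd.DataFrame({
    'Class': [1, 3, 3],
    'Code': [876, 123, 123],
    'Vendor': ['Name2', 'Name1', 'Name1'],
    'State': ['TX', 'NE', 'NE'],
    'NumberOfDays': [12.51041667, 12.58402778, 2.354166667],
})

out = (df.groupby(['Class', 'Code', 'Vendor', 'State'])['NumberOfDays']
         .agg(['mean', 'min', 'max'])
         .rename(columns=d)
         .reset_index())

# float_format only controls how floats are rendered in the CSV;
# the columns keep full precision in memory.
csv_text = out.to_csv(index=False, float_format='%.2f')
print(csv_text)
```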
Thanks for the alternative solutions:


This is so fast. I would suggest
df.groupby(['Class','Code','Vendor','State'], as_index=False)
, because it looks prettier :) There is also pivot_table:
df.pivot_table(index=['Class','Code','Vendor','State'], values='NumberOfDays', aggfunc=('min','mean','max')).rename(columns=d).reset_index()
@jezrael Works great. Thank you. @jezrael Interesting... I will definitely spend some time testing :)
df = (df.pivot_table(index=['Class','Code','Vendor','State'],
                     values='NumberOfDays',
                     aggfunc=('min','mean','max'))
        .rename(columns=d)
        .reset_index())
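As a further variant (an assumption: pandas 0.25 or newer is available), named aggregation yields the renamed columns directly, with no separate rename step:

```python
import pandas as pd

# Made-up rows mirroring the sample data
df = pd.DataFrame({
    'Class': [1, 1, 2],
    'Code': [876, 876, 876],
    'Vendor': ['Name2', 'Name2', 'Name1'],
    'State': ['TX', 'TX', 'NY'],
    'NumberOfDays': [12.51041667, 5.409722222, 5.513194444],
})

# Named aggregation (pandas >= 0.25): the output column names are
# given as keyword arguments, so the rename dictionary is not needed.
out = (df.groupby(['Class', 'Code', 'Vendor', 'State'])['NumberOfDays']
         .agg(AverageDays='mean',
              LowestNumberOfDays='min',
              HighestNumberOfDays='max')
         .reset_index())
print(out.columns.tolist())
```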