Python: Ignore duplicates when taking the max in a pandas GroupBy


I've read the posts about grouping and getting the max value:

That works fine (and is very helpful) when the max is unique within a group, but I've run into a problem: I want to ignore the duplicates in each group, take the max of only the unique values, and then put that result back into the data series.

Input (named df1; see "For reference" below for the full construction):

My code:

df1['peak_month'] = df1.groupby(df1.date.dt.year)['val'].transform(max) == df1['val']
My output:

date       val   max
2004-01-01 0     true #notice how all duplicates are true in 2004
2004-02-01 0     true
2004-03-01 0     true
2004-04-01 0     true
2004-05-01 0     true
2004-06-01 0     true
2004-07-01 0     true
2004-08-01 0     true
2004-09-01 0     true
2004-10-01 0     true
2004-11-01 0     true
2004-12-01 0     true
2005-01-01 11    true #notice how these two values 
2005-02-01 11    true #are the max values for 2005 and are true
2005-03-01 8     false
2005-04-01 5     false
2005-05-01 0     false 
2005-06-01 0     false
2005-07-01 2     false
2005-08-01 1     false
2005-09-01 0     false
2005-10-01 0     false
2005-11-01 3     false
2005-12-01 3     false
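The behavior above can be reproduced on a tiny frame (hypothetical toy data, not the original df1): `transform('max')` broadcasts the group max back onto every row, so every row tied at the max compares `True`.

```python
import pandas as pd

# Toy illustration: transform('max') broadcasts the group max to every
# row, so *all* rows tied at the max compare equal to it.
demo = pd.DataFrame({'year': [2005, 2005, 2005], 'val': [11, 11, 8]})
mask = demo.groupby('year')['val'].transform('max') == demo['val']
print(mask.tolist())  # the two tied 11s are both True
```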
Expected output:

date       val   max
2004-01-01 0     false #notice how all duplicates are false in 2004
2004-02-01 0     false #because they are the same and all vals are max
2004-03-01 0     false
2004-04-01 0     false
2004-05-01 0     false 
2004-06-01 0     false
2004-07-01 0     false
2004-08-01 0     false
2004-09-01 0     false
2004-10-01 0     false
2004-11-01 0     false
2004-12-01 0     false
2005-01-01 11    false #notice how these two values 
2005-02-01 11    false #are the max values for 2005 but are false
2005-03-01 8     true  #this is the second max val and is true
2005-04-01 5     false
2005-05-01 0     false 
2005-06-01 0     false
2005-07-01 2     false
2005-08-01 1     false
2005-09-01 0     false
2005-10-01 0     false
2005-11-01 3     false
2005-12-01 3     false
For reference:

import pandas as pd

df1 = pd.DataFrame({'val': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 11, 11, 8, 5, 0, 0, 2, 1, 0, 0, 3, 3],
                    'date': ['2004-01-01', '2004-02-01', '2004-03-01', '2004-04-01', '2004-05-01', '2004-06-01',
                             '2004-07-01', '2004-08-01', '2004-09-01', '2004-10-01', '2004-11-01', '2004-12-01',
                             '2005-01-01', '2005-02-01', '2005-03-01', '2005-04-01', '2005-05-01', '2005-06-01',
                             '2005-07-01', '2005-08-01', '2005-09-01', '2005-10-01', '2005-11-01', '2005-12-01']})
df1['date'] = pd.to_datetime(df1['date'])  # required for df1.date.dt.year to work

This isn't the slickest solution, but it works. The idea is to first identify the values that appear only once within each year, then run the max transform on just those unique values.

# Determine the unique values appearing in each year.
df1['year'] = df1.date.dt.year
unique_vals = df1.drop_duplicates(subset=['year', 'val'], keep=False)

# Max transform on the unique values.
df1['peak_month'] = unique_vals.groupby('year')['val'].transform(max) == unique_vals['val']

# Fill NaN's as False, drop extra column.
df1['peak_month'].fillna(False, inplace=True)
df1.drop('year', axis=1, inplace=True)
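A variant of the same idea (a sketch of my own, not from the answer above) avoids the temporary `year` column by grouping on `df1.date.dt.year` directly, and uses `reindex` instead of `fillna` to mark the dropped duplicate rows as `False`. Shown on a small hypothetical frame:

```python
import pandas as pd

# Small hypothetical frame standing in for the full df1.
df1 = pd.DataFrame({'val': [0, 0, 11, 11, 8, 5],
                    'date': pd.to_datetime(['2004-01-01', '2004-02-01',
                                            '2005-01-01', '2005-02-01',
                                            '2005-03-01', '2005-04-01'])})

year = df1.date.dt.year
# Mark every row whose (year, val) pair occurs more than once, then drop them all.
dup = pd.concat([year, df1['val']], axis=1).duplicated(keep=False)
unique_vals = df1[~dup]

# Max transform over the surviving unique values only.
peak = (unique_vals.groupby(unique_vals.date.dt.year)['val'].transform('max')
        == unique_vals['val'])
# Rows removed as duplicates are absent from `peak`; fill them in as False.
df1['peak_month'] = peak.reindex(df1.index, fill_value=False)
print(df1['peak_month'].tolist())  # only the 8 in 2005 is True
```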

This question isn't clear, and you've included far more data than you need to illustrate the point. I don't see why you want to ignore duplicates: the max of [5, 5, 2, 2] is the same as the max of [5, 2].

I need a "max of the year" value, and only one value when there are ties.

No, the
keep=False
keyword argument forces
drop_duplicates
to discard every copy of a duplicated row. Without that keyword your concern would be valid, since by default
drop_duplicates
keeps the first of the duplicate records. My code produces the expected output.

@Parfait's worked great. Thanks for taking a look and stepping through the logic!