Resampling with resample() on a MultiIndex in Python


I have a dataframe df with a three-level MultiIndex, where the innermost level is a datetime:

                                   value    data_1 data_2  data_3  data_4
id_1     id_2  effective_date                                            
ADH10685 CA1P0 2018-07-31       0.000048  17901701   3mra  Actual  198.00
               2018-08-31       0.000048  17901701   3mra  Actual  198.00
         CB0N0 2018-07-31       4.010784  17901701   3mra  Actual    0.01
               2018-08-31       2.044298  17901701   3mra  Actual    0.01
               2018-10-31      11.493831  17901701   3mra  Actual    0.01
               2018-11-30      13.929844  17901701   3mra  Actual    0.01
               2018-12-31      21.500490  17901701   3mra  Actual    0.01
         CB0P0 2018-07-31      22.389493  17901701   3mra  Actual    0.03
               2018-08-31      23.600726  17901701   3mra  Actual    0.03
               2018-09-30      45.105458  17901701   3mra  Actual    0.03
               2018-10-31      32.249056  17901701   3mra  Actual    0.03
               2018-11-30      60.790889  17901701   3mra  Actual    0.03
               2018-12-31      46.832914  17901701   3mra  Actual    0.03
You can recreate this dataframe with the following code:

import pandas as pd

df = pd.DataFrame({
    'id_1': ['ADH10685'] * 13,
    'id_2': ['CA1P0'] * 2 + ['CB0N0'] * 5 + ['CB0P0'] * 6,
    'effective_date': ['2018-07-31', '2018-08-31', '2018-07-31', '2018-08-31',
                       '2018-10-31', '2018-11-30', '2018-12-31', '2018-07-31',
                       '2018-08-31', '2018-09-30', '2018-10-31', '2018-11-30',
                       '2018-12-31'],
    'value': [0.000048, 0.000048, 4.010784, 2.044298, 11.493831, 13.929844,
              21.500490, 22.389493, 23.600726, 45.105458, 32.249056,
              60.790889, 46.832914],
    'data_1': [17901701] * 13,
    'data_2': ['3mra'] * 13,
    'data_3': ['Actual'] * 13,
    'data_4': [198.00] * 2 + [0.01] * 5 + [0.03] * 6,
})
df.effective_date = pd.to_datetime(df.effective_date)
df = df.groupby(['id_1', 'id_2', 'effective_date']).first()
Expected result

The date range I am interested in is 2018-07-31 through 2018-12-31. For each combination of id_1 and id_2, I want to resample the values: for ('ADH10685', 'CA1P0') I want a value of 0 from September through December; for CB0N0 I want September set to 0; and for CB0P0 I don't want to change anything.

                                   value    data_1 data_2  data_3  data_4
id_1     id_2  effective_date                                            
ADH10685 CA1P0 2018-07-31       0.000048  17901701   3mra  Actual  198.00
               2018-08-31       0.000048  17901701   3mra  Actual  198.00
               2018-09-30       0.000000  17901701   3mra  Actual  198.00
               2018-10-31       0.000000  17901701   3mra  Actual  198.00
               2018-11-30       0.000000  17901701   3mra  Actual  198.00
               2018-12-31       0.000000  17901701   3mra  Actual  198.00
         CB0N0 2018-07-31       4.010784  17901701   3mra  Actual    0.01
               2018-08-31       2.044298  17901701   3mra  Actual    0.01
               2018-09-30       0.000000  17901701   3mra  Actual    0.01
               2018-10-31      11.493831  17901701   3mra  Actual    0.01
               2018-11-30      13.929844  17901701   3mra  Actual    0.01
               2018-12-31      21.500490  17901701   3mra  Actual    0.01
         CB0P0 2018-07-31      22.389493  17901701   3mra  Actual    0.03
               2018-08-31      23.600726  17901701   3mra  Actual    0.03
               2018-09-30      45.105458  17901701   3mra  Actual    0.03
               2018-10-31      32.249056  17901701   3mra  Actual    0.03
               2018-11-30      60.790889  17901701   3mra  Actual    0.03
               2018-12-31      46.832914  17901701   3mra  Actual    0.03
What I've tried

I have asked several questions related to this topic, so I know how to cap and floor the dates and how to resample while keeping the rest of the series intact.

I have developed the following code, which works if I hard-code the slice for each level:

import numpy as np
import pandas as pd

min_date = '2018-07-31'
max_date = '2018-12-31'

# Slice to a specific combination of id_1 and id_2
s = df.loc[('ADH10685', 'CA1P0')]

# Insert empty boundary rows if the min/max dates are missing
if not s.index.isin([min_date]).any():
    s.loc[pd.to_datetime(min_date)] = np.nan
if not s.index.isin([max_date]).any():
    s.loc[pd.to_datetime(max_date)] = np.nan

# Resample to month end, zero-fill 'value', then ffill/bfill the rest
s.resample('M').first().fillna({'value': 0}).ffill().bfill()

I am looking for guidance on how best to iterate over the large dataframe and apply this logic to each (id_1, id_2) pair. I would also like to clean up the sample code above to make it more efficient.
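For reference, one way to drive that iteration without hard-coding each slice is to let groupby yield the (id_1, id_2) pairs; a minimal sketch on a toy frame (the keys and values here are made up, and the per-pair logic is left as a placeholder):

```python
import pandas as pd

# toy stand-in for df: same three-level index shape (keys are made up)
idx = pd.MultiIndex.from_tuples(
    [('A', 'X', pd.Timestamp('2018-07-31')),
     ('A', 'X', pd.Timestamp('2018-09-30')),
     ('A', 'Y', pd.Timestamp('2018-08-31'))],
    names=['id_1', 'id_2', 'effective_date'])
toy = pd.DataFrame({'value': [1.0, 2.0, 3.0]}, index=idx)

pieces = {}
for key, grp in toy.groupby(level=[0, 1]):
    s = grp.droplevel([0, 1])  # the date-indexed slice for this pair
    # ... apply the cap/floor + resample logic to s here ...
    pieces[key] = s

# reassemble with the (id_1, id_2) keys restored as outer index levels
result = pd.concat(pieces, names=['id_1', 'id_2'])
```

The answers below take vectorised routes instead, which avoid the per-group Python loop.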

First, reindex each group of id_1 and id_2 against dt:
dt = pd.date_range('2018-07-31', '2018-12-31', freq='M')

df = (df.reset_index()
        .groupby(['id_1', 'id_2'])
        .apply(lambda x: x.set_index('effective_date').reindex(dt))
        .drop(columns=['id_1', 'id_2'])
        .reset_index()
        .rename(columns={'level_2':'effective_date'}))
Then fill the missing values in the value column:

df['value'] = df['value'].fillna(0)
Fill the remaining missing values:

df = df.groupby(['id_1', 'id_2']).apply(lambda x: x.ffill(axis=0).bfill(axis=0))
Finally, set id_1, id_2 and effective_date back as the index:

df.set_index(['id_1', 'id_2', 'effective_date'], inplace=True)
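As a quick sanity check of the fill semantics used above (toy numbers, a single group): a reindexed month gets value 0, while the static data columns are copied from neighbouring rows:

```python
import pandas as pd

# target month-end dates, including one (August) missing from the data
dt = pd.to_datetime(['2018-07-31', '2018-08-31', '2018-09-30'])

# one (id_1, id_2) group with the August row absent
g = pd.DataFrame({'value': [1.0, 3.0], 'data_1': [17901701, 17901701]},
                 index=pd.to_datetime(['2018-07-31', '2018-09-30']))

g = g.reindex(dt)                  # inserts an all-NaN row for August
g['value'] = g['value'].fillna(0)  # a missing month means value 0
g = g.ffill().bfill()              # static columns filled from neighbours
```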

You can use reindex() to fill in the missing months:

# create the MultiIndex based on the existing df.index.levels
midx = pd.MultiIndex.from_product(df.index.levels, names=df.index.names)

# run reindex() with the new index, then zero-fill the NaN `value` column
df1 = df.reindex(midx).fillna({'value':0})

df1                                                                                                                 
#Out[41]: 
#                                   value      data_1 data_2  data_3  data_4
#id_1     id_2  effective_date                                              
#ADH10685 CA1P0 2018-07-31       0.000048  17901701.0   3mra  Actual  198.00
#               2018-08-31       0.000048  17901701.0   3mra  Actual  198.00
#               2018-09-30       0.000000         NaN    NaN     NaN     NaN
#               2018-10-31       0.000000         NaN    NaN     NaN     NaN
#               2018-11-30       0.000000         NaN    NaN     NaN     NaN
#               2018-12-31       0.000000         NaN    NaN     NaN     NaN
#         CB0N0 2018-07-31       4.010784  17901701.0   3mra  Actual    0.01
#               2018-08-31       2.044298  17901701.0   3mra  Actual    0.01
#               2018-09-30       0.000000         NaN    NaN     NaN     NaN
#               2018-10-31      11.493831  17901701.0   3mra  Actual    0.01
#               2018-11-30      13.929844  17901701.0   3mra  Actual    0.01
#               2018-12-31      21.500490  17901701.0   3mra  Actual    0.01
#         CB0P0 2018-07-31      22.389493  17901701.0   3mra  Actual    0.03
#               2018-08-31      23.600726  17901701.0   3mra  Actual    0.03
#               2018-09-30      45.105458  17901701.0   3mra  Actual    0.03
#               2018-10-31      32.249056  17901701.0   3mra  Actual    0.03
#               2018-11-30      60.790889  17901701.0   3mra  Actual    0.03
#               2018-12-31      46.832914  17901701.0   3mra  Actual    0.03

# select columns except the 'value' column
cols = df1.columns.difference(['value'])

# forward-fill the selected columns per group (groupby on level=[0,1])
df1.loc[:,cols] = df1.loc[:,cols].groupby(level=[0,1]).transform('ffill')
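The same approach can be exercised end-to-end on a toy frame (ids and numbers made up). One caveat: from_product builds the full Cartesian product of the observed level values, so with many id_1/id_2 values it can also create pairs that never occur in the data:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('A', 'X', pd.Timestamp('2018-07-31')),
     ('A', 'Y', pd.Timestamp('2018-08-31')),
     ('A', 'X', pd.Timestamp('2018-09-30'))],
    names=['id_1', 'id_2', 'effective_date'])
toy = pd.DataFrame({'value': [1.0, 5.0, 2.0], 'data_1': [10, 20, 10]},
                   index=idx)

# full product of the observed level values: 1 x 2 x 3 = 6 rows
midx = pd.MultiIndex.from_product(toy.index.levels, names=toy.index.names)
df1 = toy.reindex(midx).fillna({'value': 0})

# forward-fill the non-value columns within each (id_1, id_2) group
cols = df1.columns.difference(['value'])
df1[cols] = df1[cols].groupby(level=[0, 1]).transform('ffill')
```

Here the synthesised ('A', 'X', 2018-08-31) row gets value 0 and data_1 carried forward from July, while ('A', 'Y', 2018-07-31) keeps a NaN in data_1 because there is no earlier row to fill from.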

Comments:

Can you .to_dict() a relevant portion of your dataframe so I don't have to construct it on my end to play with it?

I would post something to recreate the dataframe, but you should be able to read it with the .read_clipboard() function in pandas.

Learn something new every day! 0.24.2 hangs on read_clipboard() with a MultiIndex. I've added code so you can recreate it easily.

Why does apply seem to add an extra id_1 and id_2?

@GoodLuckGanesh the groupby object returns a dataframe for each group, which includes all of the columns. After .set_index, each group has to return the new df with the new index.

I see, so does using an aggregation function with groupby automatically drop the key columns? When I replace apply() with mean() or sum(), I don't get the two extra columns for id_1 and id_2.
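To illustrate that last point with a minimal example (names made up): an aggregation reduces each group to one row and moves the group key into the result's index, so it no longer appears as a column. With apply(), by contrast, the function receives each group as a full dataframe (whether the key columns are included has varied across pandas versions).

```python
import pandas as pd

d = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1, 2, 3]})

# mean() aggregates each group; 'g' becomes the index, not a column
agg = d.groupby('g')['v'].mean()
```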