Python datetime difference in hours, excluding weekends


I currently have a dataframe in which each UniqueID has several dates in another column. I would like to get the number of hours between consecutive dates, but ignore the weekend when the next date falls after it. For example, if today is Friday at 12 pm and the next date is Tuesday at 12 pm, the difference between the two dates should be 48 hours.

Here is my dataset and the expected output:

df = pd.DataFrame({"UniqueID": ["A","A","A","B","B","B","C","C"],"Date":
["2018-12-07 10:30:00","2018-12-10 14:30:00","2018-12-11 17:30:00",
"2018-12-14 09:00:00","2018-12-18 09:00:00",
"2018-12-21 11:00:00","2019-01-01 15:00:00","2019-01-07 15:00:00"],
"ExpectedOutput": ["28.0","27.0","Nan","48.0","74.0","NaN","96.0","NaN"]})

df["Date"] = df["Date"].astype(np.datetime64)
This is what I have so far, but it includes the weekend hours (for the first row of group A it gives 76.0 instead of the expected 28.0):

df["date_diff"] = df.groupby(["UniqueID"])["Date"].apply(lambda x: x.diff() 
/ np.timedelta64(1 ,'h')).shift(-1)

Thanks!

The idea is to floor both datetimes to drop the time component, put the number of business days between the start day + one day and the shifted day into the hours3 column, then create the hours1 and hours2 columns with the partial hours of the start and end days (only when they do not fall on a weekend), and finally sum all the hour columns together.
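
A note on the key call: np.busday_count counts business days over a half-open interval [begin, end), so shifting the start day forward by one day means hours3 holds only the full weekdays strictly between the two dates, each worth 24 hours. A small illustration of that behaviour:

import numpy as np

# The begin date is counted (if it is a business day), the end date is not:
# Sat 2018-12-15 -> Tue 2018-12-18 contains one counted weekday, Mon 2018-12-17.
print(np.busday_count("2018-12-15", "2018-12-18"))  # 1

With that in mind, the full solution: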

df["Date"] = pd.to_datetime(df["Date"])
df = df.sort_values(['UniqueID','Date'])

df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
df["hours1"] = df["Date"].dt.floor('d') 
df["hours2"] = df["shifted"].dt.floor('d') 

mask = df['shifted'].notnull()
f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24

mask1 = df['hours1'].dt.dayofweek < 5
hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')

mask1 = df['hours2'].dt.dayofweek < 5
df['hours2'] = np.where(mask1, df['shifted']-df['hours2'], np.nan) / np.timedelta64(1 ,'h')

df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
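
Applied to the sample frame from the question (which still carries the ExpectedOutput column), the computed date_diff should line up with the expected values, with NaN wherever a group has no later date. A quick way to eyeball it:

# Compare the computed hours with the expected values given in the question.
print(df[["UniqueID", "Date", "ExpectedOutput", "date_diff"]])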

Comments:

What is the expected output? Is it the same as df["date_diff"], just without the weekend hours? I think the expected output data already contains numbers.
@dko512 Friday to Tuesday is 96 hours, of which the weekend accounts for 48, so why is the difference 72 hours and not 48? Anyway, this is probably what you want.
@a_guest Sorry, you are right. Thank you very much.
This makes sense to me and I really appreciate you taking the time to answer it; it means a lot to someone who is still learning the language.
@dko512 - it was really hard for me, but I like solutions that are not easy :)
Sorry, ignore my previous comment, I misread the column.

The first version of this answer was removed for two reasons: it was inaccurate and slow. The old() function below reproduces it so it can be compared with the new() solution on random sample data:
np.random.seed(2019)

dates = pd.date_range('2015-01-01','2018-01-01', freq='H')
df = pd.DataFrame({"UniqueID": np.random.choice(list('ABCDEFGHIJ'), size=100),
                   "Date": np.random.choice(dates, size=100)})
print (df)
def old(df):
    # Removed first solution: count the weekday minutes between the two
    # timestamps and round to whole hours (minute accuracy only, and slow).
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.sort_values(['UniqueID','Date'])

    df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)

    def f(x):
        a = pd.date_range(x['Date'],  x['shifted'], freq='T')
        return ((a.dayofweek < 5).sum() / 60).round()


    mask = df['shifted'].notnull()
    df.loc[mask, 'date_diff'] = df[mask].apply(f, axis=1)  
    return df
def new(df):
    # Current solution: the same logic as the code shown above.
    df["Date"] = pd.to_datetime(df["Date"])
    df = df.sort_values(['UniqueID','Date'])

    df["shifted"] = df.groupby(["UniqueID"])["Date"].shift(-1)
    df["hours1"] = df["Date"].dt.floor('d') 
    df["hours2"] = df["shifted"].dt.floor('d') 

    mask = df['shifted'].notnull()
    f = lambda x: np.busday_count(x['hours1'] + pd.Timedelta(1, unit='d'), x['hours2'])
    df.loc[mask, 'hours3'] = df[mask].apply(f, axis=1) * 24

    mask1 = df['hours1'].dt.dayofweek < 5
    hours1 = df['hours1'] + pd.Timedelta(1, unit='d') - df['Date']
    df['hours1'] = np.where(mask1, hours1, np.nan) / np.timedelta64(1 ,'h')

    mask1 = df['hours2'].dt.dayofweek < 5
    df['hours2'] = np.where(mask1, df['shifted'] - df['hours2'], np.nan) / np.timedelta64(1 ,'h')

    df['date_diff'] = df['hours1'].fillna(0) + df['hours2'] + df['hours3']
    return df
print (new(df))
print (old(df))
In [44]: %timeit (new(df))
22.7 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [45]: %timeit (old(df))
1.01 s ± 8.03 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)