Python: how to vectorize a for loop with conditions instead of iterating over a dataframe


I have some code that takes in two .csv files: employee.csv and schedule.csv. employee.csv has the attributes "ID" and "Building", which I use together as a "key" to collect the entries from the schedule file that have the same ID/Building pair, subject to some conditions.

At the end I am left with a list that I use to create the output dataframe.

employees.csv

Name,Date,Building,ID,Start Time,Stop Time,Duration,Years,EmployeeType,Status
1,3/1/2021,1,1,22:04:05,0:00:00,1:55:55,21,EmployeeType1,Status
1,3/1/2021,2,2,17:04:05,0:00:00,5:55:55,21,EmployeeType1,Status
schedule.csv

Name,Rev,Building,ID,Op Date,Start Time,Dur,WorkType
1,1,1,1,3/1/2021,23:04:12,1,WorkType1
1,1,1,1,3/1/2021,23:44:00,1,WorkType1
Pseudocode (the data logic may not make perfect sense, but it reflects what I am trying to do):

I ran this on a dataset with 80,000 rows and it took several hours. How can I vectorize/optimize the loop with the conditions above so that I am no longer iterating over the entire df?


I know nothing about pandas optimization, so any help would be greatly appreciated.
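For context, the slow pattern described (iterating the employee frame row by row and re-filtering the schedule frame for every row) presumably looked roughly like the sketch below. This is a hypothetical reconstruction, not the asker's actual code; the frames and columns are assumed from the CSV samples above:

```python
import pandas as pd

# Hypothetical reconstruction of the slow loop: one Python-level pass over
# the employee rows, with a full scan of the schedule frame per row.
employees = pd.DataFrame({'Name': [1, 1], 'Building': [1, 2], 'ID': [1, 2]})
schedule = pd.DataFrame({'Building': [1, 1], 'ID': [1, 1],
                         'WorkType': ['WorkType1', 'WorkType1']})

rows = []
for _, emp in employees.iterrows():                       # O(n) Python loop
    match = schedule[(schedule['Building'] == emp['Building']) &
                     (schedule['ID'] == emp['ID'])]       # O(m) scan each time
    rows.append([int(emp['Name']), ','.join(match['WorkType'])])
```

With 80,000 employee rows this does 80,000 full scans of the schedule frame, which is why it takes hours.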

Given that your dataframes look like this:

>>df
  Name      Date Building ID  ... Duration Years   EmployeeType  Status
0    1  3/1/2021        1  1  ...  1:55:55    21  EmployeeType1  Status
1    1  3/1/2021        2  2  ...  5:55:55    21  EmployeeType1  Status

>>df2   # Schedule Data frame
  Name Rev Building ID   Op Date Start Time Dur   WorkType
0    1   1        1  1  3/1/2021   23:04:12   1  WorkType1
1    1   1        1  1  3/1/2021   23:44:00   1  WorkType1
I just modified your function slightly to implement it with pandas' apply method:

import datetime

# Note: this relies on the schedule dataframe df2 being defined in the
# enclosing scope before create_output is called.
def create_output(row):
    # build the Building/ID filter once and reuse it
    matches = df2.loc[(df2['Building'] == row['Building']) & (df2['ID'] == row['ID'])]
    if matches.empty:
        return [row['Name'], row['Date'], row['Building'], row['ID'], row['Years'],
                row['EmployeeType'], row['Start Time'], row['Stop Time'], row['Duration'],
                'NA', 'NA', 'NA', 'NA', 'NA', 'NA', row['Status']]
    work_sequence_converted = ''.join(matches['WorkType'].tolist())
    # get all durations for this pair
    durations = matches['Dur'].astype(int).values
    min_duration = durations.min()
    max_duration = durations.max()
    sum_duration = durations.sum()
    # convert the Duration string ("H:MM:SS") to seconds
    date_time = datetime.datetime.strptime(str(row['Duration']), "%H:%M:%S")
    a_timedelta = date_time - datetime.datetime(1900, 1, 1)
    duration_in_seconds = a_timedelta.total_seconds()
    percent_time = 1.0 / duration_in_seconds
    return [row['Name'], row['Date'], row['Building'], row['ID'], row['Years'],
            row['EmployeeType'], row['Start Time'], row['Stop Time'], row['Duration'],
            sum_duration, percent_time, 'NA', work_sequence_converted,
            min_duration, max_duration, row['Status']]
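As an aside, the strptime-and-subtract dance used above to turn the Duration string into seconds can be written more directly with pd.to_timedelta (a standalone sketch; equivalent result):

```python
import pandas as pd

# "H:MM:SS" strings parse directly as timedeltas
seconds = pd.to_timedelta('1:55:55').total_seconds()  # 1*3600 + 55*60 + 55
```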
Now you can call this function for every row without iterating manually. Note that apply still invokes the function once per row under the hood, but it avoids the Python-level bookkeeping of an explicit loop and is typically faster:

df.apply(create_output, axis=1)
0    [1, 3/1/2021, 1, 1, 21, EmployeeType1, 22:04:0...
1    [1, 3/1/2021, 2, 2, 21, EmployeeType1, 17:04:0...
dtype: object
Since the result is a Series, you can easily convert it to a list:

df.apply(create_output, axis=1).tolist()
[['1', '3/1/2021', '1', '1', '21', 'EmployeeType1', '22:04:05', '0:00:00', '1:55:55', 2, 0.00014378145219266715, 'NA', 'WorkType1WorkType1', 1, 1, 'Status'], ['1', '3/1/2021', '2', '2', '21', 'EmployeeType1', '17:04:05', '0:00:00', '5:55:55', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'Status']]
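Note that apply(axis=1) still re-scans df2 for every employee row, so the overall cost remains O(rows × schedule). If the runtime is still too long, one option (a sketch, not part of the original answer) is to group df2 by the key once up front and do an O(1) dictionary lookup per row instead:

```python
import pandas as pd

# sample schedule frame, columns assumed from the question
df2 = pd.DataFrame({'Building': [1, 1], 'ID': [1, 1], 'Dur': [1, 1],
                    'WorkType': ['WorkType1', 'WorkType1']})

# one pass over df2 builds a lookup keyed by (Building, ID)
groups = {key: grp for key, grp in df2.groupby(['Building', 'ID'])}

def work_sequence(building, id_):
    # O(1) lookup instead of a boolean filter over all of df2
    grp = groups.get((building, id_))
    return '' if grp is None else ''.join(grp['WorkType'])
```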


What you are doing is performing a "merge" manually.

You can .drop() any columns from the schedule that you don't want in the final result.

how='outer' will include the rows that have no "match".
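Concretely, the merge step looks like this (the same call appears in the full function later in this answer; the sample frames below are trimmed to the key columns plus a few others):

```python
import pandas as pd

employee_df = pd.DataFrame({'Name': [1, 1], 'Building': [1, 2], 'ID': [1, 2],
                            'Duration': ['1:55:55', '5:55:55']})
schedule = pd.DataFrame({'Building': [1, 1], 'ID': [1, 1], 'Dur': [1, 1],
                         'WorkType': ['WorkType1', 'WorkType1']})

key_cols = ['Building', 'ID']
output_df = employee_df.merge(schedule, on=key_cols, how='outer')
# the (2, 2) employee row has no schedule match, so its schedule
# columns (Dur, WorkType) come through as NaN
```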

Now that you have a single dataframe, you can groupby on the key and use that to generate a summary for each group:

summary = { column: (column, 'first') for column in employee_df.columns }
summary['%Time'] = (
    'Duration', 
    lambda dur: 
        1 / (pd.Timestamp(dur.iat[0])
               .replace(year=1900, day=1, month=1)
          - pd.Timestamp(1900, 1, 1)).total_seconds()
)
summary.update({
    'SumDuration': ('Dur', 'sum'), 
    'MinDuration': ('Dur', 'min'), 
    'MaxDuration': ('Dur', 'max'), 
    'WorkType':    ('WorkType', ','.join)
})

output_df = output_df.fillna('').groupby(key_cols).agg(**summary)
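The summary dict above uses pandas "named aggregation": each dict key becomes an output column, each value is a (source_column, aggregation) pair, and ** unpacks the dict into keyword arguments for .agg(). A minimal standalone illustration:

```python
import pandas as pd

df = pd.DataFrame({'k': ['a', 'a', 'b'], 'v': [1, 2, 3]})
aggs = {
    'total':   ('v', 'sum'),   # output column 'total' = sum of 'v' per group
    'largest': ('v', 'max'),   # output column 'largest' = max of 'v' per group
}
out = df.groupby('k').agg(**aggs)
```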
Then you can clean it up by dropping the added index, adding the NA strings, and removing %Time for the rows that have no Dur value:

output_df.reset_index(drop=True, inplace=True)
output_df.replace({'': 'NA'}, inplace=True)
output_df.loc[ output_df.SumDuration == 'NA', '%Time' ] = 'NA'
This produces:

>>> output_df.to_csv()
Name,Date,Building,ID,Start Time,Stop Time,Duration,Years,EmployeeType,Status,%Time,SumDuration,MinDuration,MaxDuration,WorkType
1,3/1/2021,1,1,22:04:05,0:00:00,1:55:55,21,EmployeeType1,Status,0.00014378145219266715,2.0,1.0,1.0,"WorkType1,WorkType1"
1,3/1/2021,2,2,17:04:05,0:00:00,5:55:55,21,EmployeeType1,Status,NA,NA,NA,NA,NA 
EDIT


Here is your create_output function written using groupby().apply() instead of .agg() - it should be easier to follow:

import datetime
import pandas as pd

def create_output(employee_file, schedule_file, output_file):
    output_columns = ['Name', 'Date', 'Building', 'ID', 'Years', 'Type', 'Start Time', 'Stop Time', 'Duration',
                      'SumDuration', '%Time', 'Gap', 'Sequence', 'MinDuration', 'MaxDuration', 'Status']

    employee_df = pd.read_csv(employee_file)
    schedule = pd.read_csv(schedule_file)

    key_cols = ['Building', 'ID']

    output_df = employee_df.merge(
        schedule.drop(columns=['Name', 'Op Date', 'Rev', 'Start Time']), 
        on=key_cols, how='outer'
    )

    def summary(df):
        row = df.iloc[0]

        min_duration  = df['Dur'].min()
        max_duration  = df['Dur'].max()
        sum_duration  = df['Dur'].sum()
        work_sequence = ','.join(df['WorkType'])

        row['Type'] = row['EmployeeType']
        row['SumDuration'] = sum_duration
        row['%Time'] = ''

        if sum_duration: # only add %Time if there is a duration
            duration = row['Duration']
            date_time = datetime.datetime.strptime(duration, "%H:%M:%S")
            a_timedelta = date_time - datetime.datetime(1900, 1, 1)
            duration_in_seconds = a_timedelta.total_seconds()
            percent_time = 1.0/duration_in_seconds
            row['%Time'] = percent_time

        row['Gap'] = 'NA'
        row['MinDuration'] = min_duration
        row['MaxDuration'] = max_duration
        row['Sequence'] = work_sequence

        return row.loc[output_columns] # reorder the columns

    output_df = output_df.fillna('').groupby(key_cols).apply(summary)

    output_df.replace({'': 'NA'}).to_csv(output_file, index=False)   


"I think I follow, but if you don't pass df2 in, where does create_output get df2 from?" - "In the example I defined df2 before the function definition. What you can do is define an empty dataframe with df2 = pd.DataFrame() before the function definition, then assign the actual dataframe to df2 when you vectorize." - "I implemented your solution, but the runtime is still the same."

"Thanks for the response! Can you explain the summary = ..., summary['%Time'] = ..., summary.update(...) logic - what is that called in Python? I'm having trouble understanding how it works." - "I added an alternative approach that uses groupby().apply() instead of .agg(), which keeps the code very similar to what you already have, so it should be easier to understand."