Python: Improving data validation efficiency in Pandas

I load data from a CSV into Pandas and validate the following fields (per-call timings shown on the left):

(1.5s) loans['net_mortgage_margin'] = loans['net_mortgage_margin'].map(lambda x: convert_to_decimal(x))
(1.5s) loans['current_interest_rate'] = loans['current_interest_rate'].map(lambda x: convert_to_decimal(x))
(1.5s) loans['net_maximum_interest_rate'] = loans['net_maximum_interest_rate'].map(lambda x: convert_to_decimal(x))

(48s)  loans['credit_score'] = loans.apply(lambda row: get_minimum_score(row), axis=1)
(< 1s) loans['loan_age'] = ((loans['factor_date'] - loans['first_payment_date']) / np.timedelta64(+1, 'M')).round() + 1
(< 1s) loans['months_to_roll'] = ((loans['next_rate_change_date'] - loans['factor_date']) / np.timedelta64(+1, 'M')).round() + 1
(34s)  loans['first_payment_change_date'] = loans.apply(lambda x: validate_date(x, 'first_payment_change_date', loans.columns), axis=1)
(37s)  loans['first_rate_change_date'] = loans.apply(lambda x: validate_date(x, 'first_rate_change_date', loans.columns), axis=1)

(39s)  loans['first_payment_date'] = loans.apply(lambda x: validate_date(x, 'first_payment_date', loans.columns), axis=1)
(39s)  loans['maturity_date'] = loans.apply(lambda x: validate_date(x, 'maturity_date', loans.columns), axis=1)
(37s)  loans['next_rate_change_date'] = loans.apply(lambda x: validate_date(x, 'next_rate_change_date', loans.columns), axis=1)
(36s)  loans['first_PI_date'] = loans.apply(lambda x: validate_date(x, 'first_PI_date', loans.columns), axis=1)

(36s)  loans['servicer_name'] = loans.apply(lambda row: row['servicer_name'][:40].upper().strip(), axis=1)
(38s)  loans['state_name'] = loans.apply(lambda row: str(us.states.lookup(row['state_code'])), axis=1)
(33s) loans['occupancy_status'] = loans.apply(lambda row: get_occupancy_type(row), axis=1)
(37s)  loans['original_interest_rate_range'] = loans.apply(lambda row: get_interest_rate_range(row, 'original'), axis=1)
(36s)  loans['current_interest_rate_range'] = loans.apply(lambda row: get_interest_rate_range(row, 'current'), axis=1)
(33s)  loans['valid_credit_score'] = loans.apply(lambda row: validate_credit_score(row), axis=1)
(60s)  loans['origination_year'] = loans['first_payment_date'].map(lambda x: x.year if x.month > 2 else x.year - 1)
(< 1s) loans['number_of_units'] = loans['unit_count'].map(lambda x: '1' if x == 1 else '2-4')
(32s)  loans['property_type'] = loans.apply(lambda row: validate_property_type(row), axis=1)

The longest call (origination_year) is typical of these: it extracts values from date objects (year, month, etc.). Others, such as property_type, merely check for irregular values (e.g. "N/A", "NULL", etc.), but still take noticeable time because they inspect every row.
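Before reaching for multiprocessing, note that the date-derived columns can usually be computed without a per-row Python lambda at all. As a sketch (assuming first_payment_date has already been parsed to datetimes), the 60-second origination_year line can be rewritten with the vectorized .dt accessor and np.where:

```python
import numpy as np
import pandas as pd

def origination_year(first_payment_date):
    # Vectorized equivalent of: x.year if x.month > 2 else x.year - 1
    d = pd.to_datetime(first_payment_date)
    return np.where(d.dt.month > 2, d.dt.year, d.dt.year - 1)
```

This computes the whole column in a handful of NumPy operations instead of one Python call per row.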

TL;DR: consider distributing the processing. The improvement is to read the data in chunks and use multiple processes. (Source)

Another option might be to use .
Here is another good example using .

If you already have a function convert_to_decimal, then writing lambda x: convert_to_decimal(x) is wasteful. It is the same as writing (lambda y: (lambda x: convert_to_decimal(x))(y)).

Question 1: what exactly is slow? You have 4-5 functions whose code we cannot see. Some, like convert_to_decimal, I would expect to be fast; others, like validate_date, where a column is passed in, may be the problem, but we can only speculate. At a minimum, sprinkling in some print(time.time()) calls would help. It is not that any one particular call is slow; collectively they add up to more time than ideal, and at the very least avoiding a per-row apply for each column should help, though I don't know whether that merely turns 8.5 minutes into 7.6 minutes. Please post an example of one of the functions that takes a while (such as validate_date).

Done. validate_date does not take a column, only the column names. All the other functions are similar: they check whether the value is invalid (blank, some kind of null, etc.) and, if it is valid, check for membership in a list of allowed values or convert it to a range, returning either the element itself or a result derived from its value.
def validate_date(row, date_type, cols):
    date_element = row[date_type]
    if date_type not in cols:
        return np.nan
    if pd.isnull(date_element) or len(str(date_element).strip()) < 2:  # can be blank, NaN, or "0"
        return np.nan
    if date_element.day == 1:
        return date_element
    else:
        next_month = date_element + relativedelta(months=1)
        return pd.to_datetime(dt.date(next_month.year, next_month.month, 1))
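The per-row validate_date above can likewise be expressed once per column rather than once per cell. A minimal sketch, assuming the column can be coerced with pd.to_datetime (blanks and nulls become NaT, covering the invalid-value check):

```python
import pandas as pd

def validate_date_vectorized(series):
    # Coerce to datetimes; unparseable/blank values become NaT.
    s = pd.to_datetime(series, errors="coerce")
    # Roll any date that is not on the 1st forward to the 1st of the next month.
    rolled = s + pd.offsets.MonthBegin(1)
    return s.where(s.dt.day == 1, rolled)
```

Called as loans['first_payment_date'] = validate_date_vectorized(loans['first_payment_date']), this replaces a ~37-second apply with a few vectorized operations; the date_type-in-cols guard can be done once, outside the function.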
import multiprocessing as mp

import pandas as pd

CHUNKSIZE = 100000  # rows per chunk; tune to your memory budget

def process_frame(df):
    # Placeholder work: return the number of rows in this chunk.
    return len(df)

if __name__ == "__main__":
    reader = pd.read_csv("csv-file", chunksize=CHUNKSIZE)
    pool = mp.Pool(4)  # use 4 processes

    funclist = []
    for df in reader:
        # process each chunk asynchronously
        f = pool.apply_async(process_frame, [df])
        funclist.append(f)

    result = 0
    for f in funclist:
        result += f.get(timeout=10)  # timeout in 10 seconds

    print("There are %d rows of data" % result)
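To illustrate the earlier comment about redundant lambdas: passing the function itself to map gives the same result as wrapping it in lambda x: convert_to_decimal(x), with one less Python call layer per element. A small self-contained demonstration (convert_to_decimal here is a hypothetical stand-in for the question's converter):

```python
import pandas as pd
from decimal import Decimal

def convert_to_decimal(x):
    # Hypothetical stand-in for the converter used in the question.
    return Decimal(str(x))

s = pd.Series([1.5, 2.25])
a = s.map(lambda x: convert_to_decimal(x))  # extra lambda indirection
b = s.map(convert_to_decimal)               # same result, function passed directly
assert a.equals(b)
```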