Python: change a dataframe by applying rules from a second dataframe


Hi, I have two dataframes.

Input dataframe:

id   number      idsc                 mfd 
738  as6812      *fage abc van brw    amz 
745  786-151     *glaeceau smt sp     amz 
759  b0nadum     ankush 574415 mo...  admz 
764  fdad3-al-c  lib anvest-al...     amz 
887  rec-2s-5    or abc sur...        c 
64   00954       ankush pure g...     amz 
8    0000686     dor must die         a 
3    000adf623   bsc test 10-pi...    amz 
Condition dataframe:

condition      destinationfield expression                                b_id          
True           idsc             [idsc].lower()                            1 
[mfd]=="amz"   idsc             re.sub(r'\abc\b','a',[idsc])              1 
[mfd]=="admz"  idsc             re.sub(r'and \d+ other item', '', [idsc]) 1 
True           idsc             re.sub(r'[^a-z0-9 ]',r'',[idsc])          1 
True           idsc             [idsc].strip()                            1 
[mfd] == "c"   idsc             re.sub(r'\ankush\b','ank',[idsc])         1 
True           number           re.sub(r'[^0-9]',r'',[number])            1
True           number           [number].strip()                          1 
I want to apply each rule from the condition dataframe to the input dataframe and get a new dataframe.

If a condition is True, I need to apply the rule to all rows. If the condition is any specific test other than True, I need to apply the rule only to the matching records.

Is there a better way to do this in PySpark rather than running it in a loop, given that the regexes are Python-based?

Expected output:

id   number     idsc              mfd 
738  as6812     *fage a van brw   amz 
745  786-151    *glaeceau smt sp  amz 
759  b0nadum    ank 574415 mo...  admz 
764  fdad3-al-c lib anvest-al...  amz 
887  rec-2s-5   or a sur...       c 
64   00954      ank pure g...     amz 
8    0000686    dor must die      a 
3    000adf623  bsc test 10-pi... amz 
Input data, pipe-separated:

id| number|idsc|mfd 
738|as6812|*fage abc van brw|amz 
745|786-151|*glaeceau smt sp|amz 
759|b0nadum|ankush 574415 mo...|admz 
764|fdad3-al-c|lib anvest-al...|amz 
887|rec-2s-5|or abc sur...|c 
64| 00954|ankush pure g...|amz 
8|0000686|dor must die|a
3|000adf623|bsc test 10-pi...|amz 
Condition data, pipe-separated:

condition|destinationfield|expression|b_id
True|idsc|[idsc].lower()|1 
[mfd]=="amz"|idsc|re.sub(r'\abc\b','a',[idsc])|1 
[mfd]=="admz"|idsc|re.sub(r'and \d+ other item', '', [idsc])|1 
True|idsc|re.sub(r'[^a-z0-9 ]',r'',[idsc])|1 
True|idsc|[idsc].strip()|1 
[mfd] == "c"|idsc|re.sub(r'\ankush\b','ank',[idsc])|1 
True|number|re.sub(r'[^0-9]',r'',[number])|1
True|number|[number].strip()|1 
Thanks,
Ankush Reddy

You could try making your condition dataframe evaluable.

If it is evaluable, you can call eval() on both the conditions and the expressions:

def apply_condition(df, df_condition):
    # write a helper get_df_evaluable_condition which does both:
    # - replace "[any_column]" with "df['any_column']" in the condition
    # - replace "[destinationfield]" with "element" in the expression
    df_evaluable_condition = get_df_evaluable_condition(df_condition)

    for index, row in df_evaluable_condition.iterrows():
        condition = row['condition']
        destinationfield = row['destinationfield']
        expression = row['expression']
        # only apply the expression where the condition holds
        mask = eval(condition)
        df.loc[mask, destinationfield] = (
            df.loc[mask, destinationfield].apply(lambda element: eval(expression))
        )
By the way, the last line of code will not work if an expression references a column other than the destination column. You would need something more elaborate to achieve that.


If you don't know in advance what your condition dataframe contains, I would not recommend this approach: never call eval() on unknown strings.
Can you give us the dataframes as text? – Hi @user3483203, I have edited the question; I think it can be loaded into dataframes now. Thanks. – Is the "condition" dataframe large? If not, it would be better to implement these transformations with plain pyspark.sql.DataFrame functions. For example,

df = df.withColumn("idsc", F.lower("idsc"))

would perform your first condition. – In the condition dataframe I have around 14,000 records. – @ankushreddy 14,000 records does not look large. Can you collect them into a list?
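Building on that comment, one way to avoid an eval() loop is to collect the (small) rules table and translate each rule into a Spark SQL `CASE WHEN` string applied with `F.expr`. The translator below is a hypothetical sketch that only covers the expression shapes in the question (`re.sub`, `.lower()`, `.strip()`), mapping them to Spark SQL's `regexp_replace`, `lower` and `trim`; note that regex escaping may need adjusting for Spark's Java regex engine:

```python
import re

def rule_to_sql(condition, dest, expression):
    """Translate one rule row into a Spark SQL expression string.

    A sketch: only handles the patterns that appear in the question.
    """
    def cols(s):
        # "[col]" placeholders become bare column names in SQL
        return re.sub(r'\[(\w+)\]', r'\1', s)

    m = re.match(r"re\.sub\(r?'(.*?)',\s*r?'(.*?)',\s*\[(\w+)\]\)", expression)
    if m:
        pattern, repl, col = m.groups()
        sql_expr = f"regexp_replace({col}, '{pattern}', '{repl}')"
    elif expression.endswith(".lower()"):
        sql_expr = f"lower({cols(expression[:-len('.lower()')])})"
    elif expression.endswith(".strip()"):
        sql_expr = f"trim({cols(expression[:-len('.strip()')])})"
    else:
        raise ValueError(f"unsupported expression: {expression}")

    if condition == "True":
        return sql_expr
    # e.g. '[mfd]=="amz"' becomes "mfd = 'amz'"
    sql_cond = cols(condition).replace('==', ' = ').replace('"', "'")
    return f"CASE WHEN {sql_cond} THEN {sql_expr} ELSE {dest} END"

# Usage with pyspark, after collecting the rules into the driver:
#   from pyspark.sql import functions as F
#   for r in rules_df.collect():
#       df = df.withColumn(r['destinationfield'],
#                          F.expr(rule_to_sql(r['condition'],
#                                             r['destinationfield'],
#                                             r['expression'])))
```

This keeps all the actual work inside Spark's native functions, so it scales to large input dataframes even if the 14,000 rules result in a long chain of `withColumn` calls.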