Python: Spark error when unpacking items from a tuple in an RDD


I wrote a script in a Jupyter notebook that reads an RDD and performs a few operations on it. The script runs fine in Jupyter:

rdd=   [('xxxxx99', [{'cov_id':'Q', 'cov_cd':'100','cov_amt':'100', 'cov_state':'AZ'},
                  {'cov_id':'Q', 'cov_cd':'33','cov_amt':'200', 'cov_state':'AZ'},
                  {'cov_id':'Q', 'cov_cd':'64','cov_amt':'10', 'cov_state':'AZ'}],
                  [{'pol_cat_id':'234','pol_dt':'20100220'}],
                  [{'qor_pol_id':'23492','qor_cd':'30'}]),

     ('xxxxx86', [{'cov_id':'R', 'cov_cd':'20','cov_amt':'100', 'cov_state':'TX'},
                  {'cov_id':'R', 'cov_cd':'44','cov_amt':'500', 'cov_state':'TX'},
                  {'cov_id':'R', 'cov_cd':'66','cov_amt':'50', 'cov_state':'TX'}],
                  [{'pol_cat_id':'532','pol_dt':'20091020'}],
                  [{'qor_pol_id':'49320','qor_cd':'21'}]) ]
              

def flatten_map(record):
    # Unpack items
    id, items, [line], [pls] = record
    pol_id = pls["pol_cat_id"]
    pol_dt = pls["pol_dt"]
    qor_id = pls["qor_pol_id"]
    for item in items:
        yield (id,item["cov_id"],item["cov_cd"], item["cov_amt"], item["cov_state"], pol_id, pol_dt, qor_id), 1


result = (rdd
    # Expand data
    .flatMap(flatten_map)
    # Flatten tuples
    .map(lambda x: x[0]))
However, when I converted it to a standalone Python script, I got an error:

2019-10-01 14:12:46,901: ERROR: id, items, [line], [pls] = record

2019-10-01 14:12:46,901: ERROR: ValueError: not enough values to unpack (expected 1, got 0)


Any suggestions? Is there some difference in how Python handles this in a notebook versus a .py script?
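One detail worth noting about the message itself: the nested list targets [line] and [pls] in the unpack each expect exactly one element, so any record whose corresponding element is an empty list raises exactly "not enough values to unpack (expected 1, got 0)". A minimal, self-contained reproduction (illustrative only, not the original input data):

# Illustrative only: a record whose nested lists are empty triggers the same ValueError
record = ('xxxxx99', [], [], [])

try:
    id, items, [line], [pls] = record
except ValueError as e:
    print(e)   # not enough values to unpack (expected 1, got 0)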

You just made a few mistakes in picking the right values for the right variables.

Please go through the code below:

rdd = [('xxxxx99', [{'cov_id':'Q', 'cov_cd':'100','cov_amt':'100', 'cov_state':'AZ'},
                    {'cov_id':'Q', 'cov_cd':'33','cov_amt':'200', 'cov_state':'AZ'},
                    {'cov_id':'Q', 'cov_cd':'64','cov_amt':'10', 'cov_state':'AZ'}],
                   [{'pol_cat_id':'234','pol_dt':'20100220'}],
                   [{'qor_pol_id':'23492','qor_cd':'30'}]),

       ('xxxxx86', [{'cov_id':'R', 'cov_cd':'20','cov_amt':'100', 'cov_state':'TX'},
                    {'cov_id':'R', 'cov_cd':'44','cov_amt':'500', 'cov_state':'TX'},
                    {'cov_id':'R', 'cov_cd':'66','cov_amt':'50', 'cov_state':'TX'}],
                   [{'pol_cat_id':'532','pol_dt':'20091020'}],
                   [{'qor_pol_id':'49320','qor_cd':'21'}])]


def flatten_map(record):
    # Unpack items
    id, items, [line], [pls] = record
    # pol_* fields come from the pol dict ("line"), qor_id from the qor dict ("pls")
    pol_id = line["pol_cat_id"]
    pol_dt = line["pol_dt"]
    qor_id = pls["qor_pol_id"]
    for item in items:
        yield (id, item["cov_id"], item["cov_cd"], item["cov_amt"],
               item["cov_state"], pol_id, pol_dt, qor_id), 1


result = spark.sparkContext.parallelize(rdd).flatMap(flatten_map).map(lambda x: x[0])
result.collect()
Comment: Usually, errors that only appear outside the IDE come from variables that are still sitting in memory without you realizing it. Is this the entire script in Jupyter? What exactly are you executing, and how are you running the script on the command line?
Comment (OP): I'm moving it to a .py file and executing it with python file.py.
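Since the script is run with python file.py rather than inside a notebook (where a SparkSession is often already available), the spark object used above has to be created explicitly at the top of the script. A minimal sketch, assuming a local run; the app name and master are placeholders:

# Minimal SparkSession bootstrap for a standalone .py script
# (app name and master are placeholders; adjust for your cluster)
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("flatten-rdd-example")
         .master("local[*]")
         .getOrCreate())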
# OUTPUT
[('xxxxx99', 'Q', '100', '100', 'AZ', '234', '20100220', '23492'), ('xxxxx99', 'Q', '33', '200', 'AZ', '234', '20100220', '23492'), ('xxxxx99', 'Q', '64', '10', 'AZ', '234', '20100220', '23492'), ('xxxxx86', 'R', '20', '100', 'TX', '532', '20091020', '49320'), ('xxxxx86', 'R', '44', '500', 'TX', '532', '20091020', '49320'), ('xxxxx86', 'R', '66', '50', 'TX', '532', '20091020', '49320')]