How do I convert an RDD of lists into a Python map?


This is a homework question. I want to convert an RDD containing "n" lists into a Python map (dict).

RDD:

[[u'100=NO', u'101=OR', u'102=-0.00955461556684', u'103=0.799738137456', u'104=-0.619426440691', u'105=-0.505799761741', u'106=1.06018348173', u'107=-0.203731351216', u'108=0.242253668965', u'109=20170411', u'110=14:47:54'], [u'100=NO', u'101=OR', u'102=1.09790894815', u'103=-0.591742622246', u'104=0.60404467739', u'105=-0.729487378829', u'106=-0.41507842821', u'107=-1.01921955205', u'108=-0.153191948561', u'109=20170411', u'110=14:47:56'], [u'100=NO', u'101=OR', u'102=-0.0845031955962', u'103=0.428040384808', u'104=0.0579505934162', u'105=0.893705789837', u'106=-0.544258436965', u'107=1.10990090862', u'108=0.740638990995', u'109=20170411', u'110=14:47:58'], [u'100=NO', u'101=OL', u'102=1.20406493416', u'103=-0.275962563807', u'104=-0.728142212616', u'105=2.04751448847', u'106=2.10361125056', u'107=0.588650303087', u'108=-0.693327897822', u'109=20170411', u'110=14:48:00']]
I tried something like:

sc.parallelize([[main_map.update({i.split('=')[0] : i.split('=')[1]}) for i in j] for j in rdd.toLocalIterator()])
Expected output:

{100 : NO, 101 : OR, 102 : -0.00955461556684, 103 : 0.799738137456, 104 : -0.619426440691, 105 : -0.505799761741, 106 : 1.06018348173, 107 : -0.203731351216 , 108 : 0.242253668965, 109 : 20170411, 110 : 14:47:54}
For the first list, I want a dict like the one above.

But this doesn't seem like a good way to convert it into a Python map. Is there a specific function or method to achieve what I want?
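One reason the attempted comprehension misbehaves: `dict.update` returns `None`, so the inner list comprehension produces a list of `None` values while mutating `main_map` as a side effect; passing that back into `sc.parallelize` just creates an RDD of `None`s. A minimal plain-Python sketch of the issue (no Spark needed; `record` is a made-up sample row):

```python
# Demonstrate that dict.update returns None inside a comprehension.
main_map = {}
record = [u'100=NO', u'101=OR']

# Each call to update() mutates main_map but evaluates to None.
result = [main_map.update({i.split('=')[0]: i.split('=')[1]}) for i in record]

print(result)    # [None, None] -- not the dicts you wanted
print(main_map)  # {'100': 'NO', '101': 'OR'} -- mutated as a side effect
```

It also merges every row into a single dict, so later rows overwrite earlier ones that share the same keys.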

Just:

rdd = sc.parallelize(data).map(lambda x: dict(map(lambda y: str(y).split('='), x)))
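The per-record transformation in this answer can be checked without a Spark session. A minimal sketch, assuming `data` holds rows shaped like the sample above (the two rows below are abbreviated from the question's data):

```python
# The same per-row function the answer passes to rdd.map(),
# applied in plain Python to verify what it produces.
data = [
    [u'100=NO', u'101=OR', u'102=-0.00955461556684'],
    [u'100=NO', u'101=OL', u'102=1.20406493416'],
]

# Split each 'key=value' string once and build a dict per row.
to_dict = lambda x: dict(map(lambda y: str(y).split('='), x))

result = [to_dict(row) for row in data]
print(result[0])
# {'100': 'NO', '101': 'OR', '102': '-0.00955461556684'}
```

With Spark, `rdd.map(to_dict)` gives an RDD of one dict per input list; `.collect()` would bring them back to the driver as a Python list of dicts. Note that keys and values stay strings; cast them (e.g. `float`) afterwards if you need numbers.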