Python 3.x: How to convert a PySpark pipelined RDD (tuple inside a tuple) to a DataFrame?
I have a PySpark pipelined RDD like the one below:
(1, ([1,2,3,4], [5,3,4,5]))
(2, ([1,2,4,5], [4,5,6,7]))
I want to generate a DataFrame that looks like this:
id sid cid
1 1 5
1 2 3
1 3 4
1 4 5
2 1 4
2 2 5
2 4 6
2 5 7
Please help me with this.

If you have an RDD like this:
rdd = sc.parallelize([
(1, ([1,2,3,4], [5,3,4,5])),
(2, ([1,2,4,5], [4,5,6,7]))
])
I would just use the RDD API directly:
rdd.flatMap(lambda rec:
((rec[0], sid, cid) for sid, cid in zip(rec[1][0], rec[1][1]))
).toDF(["id", "sid", "cid"]).show()
# +---+---+---+
# | id|sid|cid|
# +---+---+---+
# | 1| 1| 5|
# | 1| 2| 3|
# | 1| 3| 4|
# | 1| 4| 5|
# | 2| 1| 4|
# | 2| 2| 5|
# | 2| 4| 6|
# | 2| 5| 7|
# +---+---+---+
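The per-record logic inside that flatMap can be checked in plain Python, without a Spark cluster. This is a minimal sketch of the same transformation: for each record `(id, (sids, cids))`, `zip` pairs the two inner lists positionally and emits one `(id, sid, cid)` row per pair.

```python
# Sample records shaped like the RDD: (id, (sids, cids))
records = [
    (1, ([1, 2, 3, 4], [5, 3, 4, 5])),
    (2, ([1, 2, 4, 5], [4, 5, 6, 7])),
]

# Same flattening the flatMap performs: zip pairs sids and cids
# by position, producing one (id, sid, cid) tuple per pair.
rows = [
    (rec_id, sid, cid)
    for rec_id, (sids, cids) in records
    for sid, cid in zip(sids, cids)
]

for row in rows:
    print(row)
# (1, 1, 5)
# (1, 2, 3)
# ...
# (2, 5, 7)
```

Note that `zip` stops at the shorter list, so if the two inner lists can differ in length, trailing elements of the longer one are silently dropped.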