Save an RDD of arrays of arrays to a text file in Spark


I have an RDD called tmp, like this:

"org.apache.spark.rdd.RDD[(String, List[(String, String, Double)])]" 
Its value looks like this:

Array[(String, List[(String, String, Double)])] = Array((1076486,List((1076486,1076486,0.0), (1076486,431000,0.7438727490345501), (1076486,351632,3.139055446043724), (1076486,431611,6.173095256463185))), (430067,List((430067,430067,0.0), (430067,1037380,4.0390818750047535), (430067,431611,6.396930255172381), (430067,824889,7.265222659014164))))
My desired output is the inner contents of the lists, like this:

1076486,1076486,0.0
1076486,431000,0.7438727490345501
.
.
430067,1037380,4.0390818750047535
I tried this:

.mapValues(_.toList).saveAsTextFile
but it shows up in the file like this:

(1076486,List((1076486,1076486,0.0), (1076486,431000,0.7438727490345501), (1076486,351632,3.139055446043724), (1076486,431611,6.173095256463185)))
(430067,List((430067,430067,0.0), (430067,1037380,4.0390818750047535), (430067,431611,6.396930255172381), (430067,824889,7.265222659014164)))
I can print the desired data with the code below,

// Pattern-match each (key, list) pair and print the inner tuples
tmp.collect().foreach { case (_, list) => list.foreach(e => print(e + " ")) }
but I can't save it to a file.


How can I get the desired result?

Just build the output strings manually:

tmp.values.flatMap(_.map{case (x, y, z) => s"$x,$y,$z"})

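For reference, here is a minimal end-to-end sketch of this approach. saveAsTextFile writes each element's toString, which is why the earlier attempt produced the raw tuple/List representation; pre-formatting each inner tuple as a string gives the desired lines. The sample data, the local SparkContext setup, and the output directory name "tmp_flat" are assumptions for illustration only.

import org.apache.spark.{SparkConf, SparkContext}

object SaveInnerTuples {
  def main(args: Array[String]): Unit = {
    // Local-mode context just for the sketch
    val conf = new SparkConf().setAppName("save-inner-tuples").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // Same shape as the question's RDD[(String, List[(String, String, Double)])]
    val tmp = sc.parallelize(Seq(
      ("1076486", List(("1076486", "1076486", 0.0),
                       ("1076486", "431000", 0.7438727490345501))),
      ("430067",  List(("430067", "430067", 0.0),
                       ("430067", "1037380", 4.0390818750047535)))
    ))

    // Drop the keys, flatten the inner lists, and render each tuple as "x,y,z"
    // so that saveAsTextFile writes one plain comma-separated line per tuple.
    tmp.values
      .flatMap(_.map { case (x, y, z) => s"$x,$y,$z" })
      .saveAsTextFile("tmp_flat") // hypothetical output directory

    sc.stop()
  }
}

Note that saveAsTextFile produces a directory of part files rather than a single file; calling coalesce(1) before saving is a common way to get one output file when the data is small.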

Thank you. You're awesome!!