Reordering key-value pairs with map() in Spark Scala
What is the Spark Scala equivalent of the following pySpark code?
rddKeyTwoVal = sc.parallelize([("cat", (0,1)), ("spoon", (2,3))])
rddK2VReorder = rddKeyTwoVal.map(lambda (key, (val1, val2)) : ((key, val1), val2))
rddK2VReorder.collect()
// [(('cat', 0), 1), (('spoon', 2), 3)] -- This is the output.
Output:
Array(((cat,0),1), ((spoon,2),3))
Thanks to @Alec for suggesting the first approach — I found my own answer! Posting it to help others in the community. This is the cleanest Scala version of the code I posted above, and it produces exactly the same output.
val rddKeyTwoVal = sc.parallelize(Array(("cat", (0,1)), ("spoon", (2,3))))
val rddK2VReorder = rddKeyTwoVal.map{case (key, (val1, val2)) => ((key, val1),val2)}
rddK2VReorder.collect()
//Use the following for a cleaner output.
rddK2VReorder.collect().foreach(println)
Output:
// With the collect() method.
Array[((String, Int), Int)] = Array(((cat,0),1), ((spoon,2),3))
// If you use collect().foreach(println)
((cat,0),1)
((spoon,2),3)
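Since the reordering is just a tuple pattern match, the same logic can be checked on a plain Scala collection without a SparkContext (a minimal sketch; the local Array here simply stands in for the RDD):

```scala
object ReorderLocal extends App {
  // Same data as the parallelize() example, but as a local collection
  val data = Array(("cat", (0, 1)), ("spoon", (2, 3)))

  // Pattern-match each (key, (val1, val2)) and regroup it as ((key, val1), val2)
  val reordered = data.map { case (key, (v1, v2)) => ((key, v1), v2) }

  reordered.foreach(println)
  // ((cat,0),1)
  // ((spoon,2),3)
}
```

Because RDD.map and Array.map take the same kind of function, a transformation verified this way can be dropped into the Spark version unchanged.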
Although rddKeyTwoVal.map{case (key, (val1, val2)) => ((key, val1), val2)} might be a more concise translation of the lambda… Thanks everyone for the prompt input! Looks like we arrived at the same answer at the same time. :) This is exactly the same as @shekhar's answer… in fact, it is slightly incorrect, since technically you are not printing the output in the Python version, you are only collecting it. Thanks, I have edited my answer.
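As an aside to the comment about conciseness: the same transformation can also be written with tuple accessors instead of pattern matching, which avoids the case syntax at some cost in readability (a sketch on a local collection, assuming the same data as above):

```scala
val data = Array(("cat", (0, 1)), ("spoon", (2, 3)))

// x._1 is the key; x._2._1 and x._2._2 are the two values in the nested tuple
val reordered = data.map(x => ((x._1, x._2._1), x._2._2))

reordered.foreach(println)
// ((cat,0),1)
// ((spoon,2),3)
```

The pattern-match form in the accepted answer is generally preferred because it names the fields, but both compile to the same reordering.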