Spark Scala: map an array of string rows to a pair RDD


How can I transform data like this:

"Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20"
"Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42"
"Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8"
into a Spark RDD using Scala, so that we get:

Row-Key-001, K1
Row-Key-001, A2
Row-Key-001, K3
Row-Key-001, B4
Row-Key-001, K5
Row-Key-001, C20
Row-Key-002, X1
Row-Key-002, Y6
Row-Key-002, Z15
Row-Key-002, X16
Row-Key-003, L4
Row-Key-003, M10
Row-Key-003, N12
Row-Key-003, O14
Row-Key-003, P13
I think we can split the input to get an array of rows, then split each row again on ", ", and then add the first element of each row as the key and every alternate remaining element as a value into a map.


But I need help implementing this in Scala.

If you have a text file containing the following data:

Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20
Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42
Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8
then you can read it with sparkContext's textFile api (the path below is a placeholder):

rdd = sc.textFile("path/to/your/file.txt")

This will give you an rdd of the lines, on which you can then use map and flatMap:

rdd.map(_.split(", "))
  .flatMap(x =>  x.tail.grouped(2).map(y => (x.head, y.head)))
This should give you the result:

(Row-Key-001,K1)
(Row-Key-001,A2)
(Row-Key-001,K3)
(Row-Key-001,B4)
(Row-Key-001,K5)
(Row-Key-001,C20)
(Row-Key-002,X1)
(Row-Key-002,Y6)
(Row-Key-002,Z15)
(Row-Key-002,X16)
(Row-Key-003,L4)
(Row-Key-003,M10)
(Row-Key-003,N12)
(Row-Key-003,O14)
(Row-Key-003,P13)
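The same transformation can be checked locally without a Spark cluster, since `split`, `grouped`, and `flatMap` are plain Scala collection operations (the Array below stands in for the RDD):

```scala
// Stand-in for the RDD: a plain Scala array of the input lines.
val lines = Array(
  "Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20",
  "Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42",
  "Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8"
)

// Same logic as on the RDD: split each line on ", ", keep the first
// token as the key, and pair it with every alternate remaining token.
val pairs = lines
  .map(_.split(", "))
  .flatMap(x => x.tail.grouped(2).map(y => (x.head, y.head)))

pairs.foreach(println)
```

Note that `grouped(2)` also copes with an odd number of trailing tokens: the last group then has a single element, and `y.head` still returns it.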

I hope the answer is helpful.

Nice, you got me to discover .grouped(n).

@Ramesh Maharjan, thank you very much. Great, that solved my problem. Can you recommend some good links for learning this kind of thing in Scala? — I am learning on my own too; I am reading Programming in Scala, Third Edition ;) Thanks for the accept. Don't forget to upvote when you are eligible to vote.