Splitting a record into multiple records based on a list field in Scala


I have an RDD of records of the form: first name, last name, date of birth, age, emails. The emails field is a list:

Vikash, Singh, 19-12-1982, 32, {abc@email.com, def@email.com}
I want to split this into two records:

Vikash, Singh, 19-12-1982, 32, abc@email.com
Vikash, Singh, 19-12-1982, 32, def@email.com

How can I do this in Scala?

Assuming your emails are stored in some kind of Traversable, you can just run a flatMap:

val rdd2 = rdd1.flatMap { case (first, last, dob, age, emails) => for {email <- emails} yield (first, last, dob, age, email) }
scala> val rdd1 = sc.parallelize(Seq(("Vikash", "Singh", "19-12-1982", 32, Seq("abc@email.com", "def@email.com"))))
...
scala> val rdd2 = rdd1.flatMap { case (first, last, dob, age, emails) => for { email <- emails } yield (first, last, dob, age, email) }
...
scala> rdd2.foreach(println)
(Vikash,Singh,19-12-1982,32,abc@email.com)
(Vikash,Singh,19-12-1982,32,def@email.com)
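The same flatten step can be checked on plain Scala collections without a SparkContext, since RDD.flatMap mirrors the collection method of the same name. A minimal sketch (variable names `records`/`flattened` are just illustrative):

```scala
// Each record carries a Seq of emails; flatMap with a for-comprehension
// emits one tuple per email, concatenating the results into a flat Seq.
val records = Seq(("Vikash", "Singh", "19-12-1982", 32,
                   Seq("abc@email.com", "def@email.com")))

val flattened = records.flatMap { case (first, last, dob, age, emails) =>
  for (email <- emails) yield (first, last, dob, age, email)
}

flattened.foreach(println)
```

This prints one line per email, matching the RDD output above.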


Building on @Rohan Aletty's answer, if you want to use map instead of a for-comprehension:

val rdd1 = sc.parallelize(Seq(("Vikash", "Singh", "19-12-1982", 32, 
                            Seq("abc@email.com", "def@email.com"))))
val rdd2 = rdd1.flatMap { case (first, last, dob, age, emails) => 
                            emails.map(email => (first, last, dob, age, email)) }

println(rdd2.count()) // => 2
rdd2.collect().foreach(println) // => (Vikash,Singh,19-12-1982,32,abc@email.com), (Vikash,Singh,19-12-1982,32,def@email.com)
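To see why the outer call must be flatMap rather than map, compare the resulting shapes on plain collections (a sketch with illustrative names, no Spark needed; RDDs behave analogously):

```scala
val records = Seq(("Vikash", "Singh", "19-12-1982", 32,
                   Seq("abc@email.com", "def@email.com")))

// map alone yields one nested Seq per input record...
val nested = records.map { case (f, l, d, a, emails) =>
  emails.map(e => (f, l, d, a, e))
}

// ...while flatMap concatenates the inner sequences into flat records.
val flat = records.flatMap { case (f, l, d, a, emails) =>
  emails.map(e => (f, l, d, a, e))
}

println(nested.size) // 1 (a single inner Seq holding 2 tuples)
println(flat.size)   // 2
```

The inner call stays a plain map in both versions; only the outer flatMap does the flattening.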
