Replacing elements in Spark Scala


How can we replace an element in the Spark Scala shell?

For example:

val t = sc.parallelize(Seq(("100", List("2","4","NA","6","8","2"))))

I want to replace NA with 0.

You can try the following, which replaces NA with 0 but gives you a new RDD:

scala> val t= sc.parallelize(Seq(("100",List("2","-4","NA","6","8","2"))))
t: org.apache.spark.rdd.RDD[(String, List[String])] = ParallelCollectionRDD[0] at parallelize at <console>:21
scala> val newRDD = t.map( x => (x._1,x._2.map{case "NA" => 0; case x => x }))
newRDD: org.apache.spark.rdd.RDD[(String, List[Any])] = MapPartitionsRDD[3] at map at <console>:23

scala> newRDD.collect
res5: Array[(String, List[Any])] = Array((100,List(2, -4, 0, 6, 8, 2)))
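Note that mixing the Int 0 into a List of Strings makes the inferred element type List[Any], as the transcript above shows. If you want a uniform element type, you can parse every entry to Int while replacing "NA". Below is a minimal sketch of that same map pattern on a plain List, so it runs without a SparkContext; the object and method names are just for illustration:

```scala
object NaReplace {
  // Replace "NA" with 0 and parse everything else as Int,
  // so the result is a uniform List[Int] rather than List[Any].
  def replaceNA(vs: List[String]): List[Int] =
    vs.map { case "NA" => 0; case v => v.toInt }

  def main(args: Array[String]): Unit = {
    println(replaceNA(List("2", "-4", "NA", "6", "8", "2")))
    // prints List(2, -4, 0, 6, 8, 2)
  }
}
```

Inside the RDD this is the same pattern as above: t.map { case (k, vs) => (k, NaReplace.replaceNA(vs)) }.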

When you parallelize a sequence, Spark creates an RDD holding the supplied values. This RDD is distributed across the cluster, and RDDs are immutable by nature. An alternative approach is to filter the "NA" values out of the RDD, map them to Int by turning each one into zero, and union that with the RDD of the non-"NA" elements.

Sample code:

val t = sc.parallelize(Seq(("100", List("2","-4","NA","6","8","2"))))
// the "NA" entries, each replaced by 0
val a = t.flatMap(_._2).filter(_ == "NA").map(_ => 0)
// the remaining entries, parsed as Int
val b = t.flatMap(_._2).filter(_ != "NA").map(_.toInt)
// union of both parts; note this drops the key and does not preserve order
val d = a.union(b)
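Since union simply concatenates the two RDDs, the replaced zeros do not keep their original positions and the key "100" is lost. The flow can be sketched on plain Scala collections (runnable without Spark; the object name is just for illustration) to show what the union produces:

```scala
object FilterUnionReplace {
  def main(args: Array[String]): Unit = {
    val values = List("2", "-4", "NA", "6", "8", "2")
    val a = values.filter(_ == "NA").map(_ => 0)   // "NA" entries -> 0
    val b = values.filter(_ != "NA").map(_.toInt)  // parse the rest
    val d = a ++ b                                 // union: order is not preserved
    println(d)  // prints List(0, 2, -4, 6, 8, 2)
  }
}
```

If you need to keep the key and the element order, the map-based approach from the first answer is the simpler choice.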