Scala - Different behavior between the Spark REPL and a standalone Spark program

When I run this code through the Spark REPL:

  val sc = new SparkContext("local[4]", "")

  val x = sc.parallelize(List(("a", "b", 1), ("a", "b", 1), ("c", "b", 1), ("a", "d", 1)))

  val byKey = x.map { case (sessionId, uri, count) => (sessionId, uri) -> count }
  val reducedByKey = byKey.reduceByKey(_ + _, 2)

  val grouped = byKey.groupByKey
  val count = grouped.map { case ((sessionId, uri), count) => (sessionId, (uri, count.sum)) }
  val grouped2 = count.groupByKey
the REPL reports the type of grouped2 as:

grouped2: org.apache.spark.rdd.RDD[(String, Seq[(String, Int)])] 
However, if I use the same code in a standalone Spark program, a different type comes back for grouped2, as this error shows:

type mismatch;
  found   : org.apache.spark.rdd.RDD[(String, Iterable[(String, Int)])]
  required: org.apache.spark.rdd.RDD[(String, Seq[(String, Int)])]
  Note: (String, Iterable[(String, Int)]) >: (String, Seq[(String, Int)]), but class RDD is invariant in type T.
    You may wish to define T as -T instead. (SLS 4.5)
  val grouped2 :  org.apache.spark.rdd.RDD[(String, Seq[(String, Int)])] = count.groupByKey
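The note in the error is the crux: RDD[T] is invariant in T, so an RDD[(String, Iterable[(String, Int)])] cannot be assigned to an RDD[(String, Seq[(String, Int)])] even though Seq is a subtype of Iterable. A minimal sketch of the same failure without Spark, using a hypothetical Box class in place of RDD:

  // Box is invariant in its type parameter, just like RDD[T].
  class Box[T](val value: T)

  val boxed: Box[Iterable[Int]] = new Box(List(1, 2, 3))

  // Fails with the analogous error -- found Box[Iterable[Int]],
  // required Box[Seq[Int]] -- because Box is invariant in T:
  // val bad: Box[Seq[Int]] = boxed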
Here is the complete code for the standalone case:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._
import org.apache.spark.rdd._

object Tester extends App {

  val sc = new SparkContext("local[4]", "")

  val x = sc.parallelize(List(("a", "b", 1), ("a", "b", 1), ("c", "b", 1), ("a", "d", 1)))

  val byKey = x.map { case (sessionId, uri, count) => (sessionId, uri) -> count }
  val reducedByKey = byKey.reduceByKey(_ + _, 2)

  val grouped = byKey.groupByKey
  val count = grouped.map { case ((sessionId, uri), count) => (sessionId, (uri, count.sum)) }
  val grouped2: org.apache.spark.rdd.RDD[(String, Seq[(String, Int)])] = count.groupByKey

}
Shouldn't the types returned by the REPL and by the standalone program be the same?

Update: in the standalone program, grouped2 is inferred as
RDD[(String, Iterable[Nothing])]
so
val grouped2: RDD[(String, Iterable[Nothing])] = count.groupByKey
compiles.
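For context on where the Nothing can come from: groupByKey is not defined on RDD itself; it is added by the implicit conversion to PairRDDFunctions pulled in via import SparkContext._. As a sketch (assuming the Spark 0.9/1.0 class names), constructing the wrapper explicitly pins the key and value types, leaving nothing for a confused inferencer to collapse to Nothing:

  import org.apache.spark.rdd.PairRDDFunctions

  // Wrapping the pair RDD explicitly fixes K = String and
  // V = (String, Int), instead of relying on implicit resolution.
  val pairs = new PairRDDFunctions(count)
  val grouped2 = pairs.groupByKey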

So, depending on how the program is run, there are three possible inferred types.

Update 2: IntelliJ seems to infer the type incorrectly:

val x: org.apache.spark.rdd.RDD[(String, (String, Int))] = sc.parallelize(List(("a", ("b", 1)), ("a", ("b", 1))))

val grouped = x.groupByKey()
IntelliJ infers grouped as

org.apache.spark.rdd.RDD[(String, Iterable[Nothing])]

when it should be

org.apache.spark.rdd.RDD[(String, Iterable[(String, Int)])]

(which is what the Spark 1.0 REPL infers).

For completeness: the Spark API changed between 0.9 and 1.0, and groupByKey now returns a pair whose second member is an Iterable rather than a Seq.

As for the IntelliJ issue: unfortunately, it is not hard to confuse IntelliJ's type inference. Even when it flags no problem, the type it shows may well be wrong.

Is it possible you are using different versions? The API did change between 0.9 and 1.0.

@TravisBrown Your answer is correct: the REPL was 0.9, and I ran the same code built from the 1.0 source.