
Scala reduceByKey does not accept a generic parameter

Tags: scala, generics, apache-spark

I want to reduce the values of a key-value RDD by key, using a provided function, where the value type is generic. But it gives this error:

value reduceByKey is not a member of org.apache.spark.rdd.RDD[(String, T)]
val reduce = (rDD:RDD[(String,T)]) => rDD.reduceByKey((x,y) => y)
                                          ^
reduceByKey does not work with the generic type.

When I tried to do it with currying:

map andThen reduce 
it again says that the andThen symbol cannot be found (see the composition sketch after the answer's code below).

Update

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

class Aggragator[T](val rdd: RDD[String], function: Function[String, T]) {

  val mapped = rdd.map(x => (x, function.apply(x)))
  // fails to compile: value reduceByKey is not a member of RDD[(String, T)]
  val reduce = mapped.reduceByKey((x, y) => y)
}

object Aggragator {

  val conf = new SparkConf()
    .setMaster("local[2]")
    .setAppName("xx")

  val sc = new SparkContext(conf)
  val rdd = sc.parallelize(List("1", "2"))
  val reduced = new Aggragator[Int](rdd, (x: String) => x.toInt).reduce.collect()

  def main(args: Array[String]): Unit = {
    println(reduced)
  }
}

Where is T defined? Please post that in your question. T is the type parameter of the class; please show us a complete code example, as this compiles fine for me. Hi, I created this class and the IDE shows no error, but when I compile the code it throws the error above. What is missing is a ClassTag context bound on the class: class Aggragator[T: ClassTag]
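For context (an aside, not part of the original thread): reduceByKey is not defined on RDD itself. It is added by PairRDDFunctions through an implicit conversion in the RDD companion object, which in Spark 1.3+ has roughly this shape:

implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
    (implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K] = null): PairRDDFunctions[K, V]

Without a ClassTag[T] in scope the conversion cannot be applied, so the compiler reports that reduceByKey is not a member of RDD[(String, T)]. The T: ClassTag context bound supplies that evidence. With the bound in place, the full example compiles: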
import scala.reflect.ClassTag

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// the ClassTag context bound provides the evidence reduceByKey needs for T
class Aggragator[T: ClassTag](val rdd: RDD[String], function: Function[String, T]) {

  val mapped = rdd.map(x => (x, function.apply(x)))
  val reduce = mapped.reduceByKey((x, y) => y)
}

object Aggragator {

  val conf = new SparkConf()
    .setMaster("local[2]")
    .setAppName("xx")

  val sc = new SparkContext(conf)
  val rdd = sc.parallelize(List("1", "2"))
  val reduced = new Aggragator[Int](rdd, (x: String) => x.toInt).reduce.collect()

  def main(args: Array[String]): Unit = {
    // collect returns an Array, so mkString gives readable output
    println(reduced.mkString(","))
  }
}
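As for the map andThen reduce attempt in the question: once each stage is a plain Function1 value (and the same ClassTag bound is available), the stages compose with andThen. A minimal sketch, using hypothetical helper names (mapStage, reduceStage, pipeline) that are not part of the original post:

import scala.reflect.ClassTag

import org.apache.spark.rdd.RDD

object Pipeline {

  // build (key, value) pairs from the raw strings
  def mapStage[T: ClassTag](f: String => T): RDD[String] => RDD[(String, T)] =
    rdd => rdd.map(x => (x, f(x)))

  // reduce by key, keeping the second value, as in the question
  def reduceStage[T: ClassTag]: RDD[(String, T)] => RDD[(String, T)] =
    rdd => rdd.reduceByKey((x, y) => y)

  // Function1.andThen composes the two stages: "map andThen reduce"
  def pipeline[T: ClassTag](f: String => T): RDD[String] => RDD[(String, T)] =
    mapStage(f) andThen reduceStage[T]
}

With a SparkContext in scope, pipeline((x: String) => x.toInt)(rdd).collect() produces the same result as the Aggragator above.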