Spark Scala: understanding reduceByKey(_ + _)

I can't understand reduceByKey(_ + _) in the first Spark example using Scala:

import org.apache.spark.SparkContext

object WordCount {
  def main(args: Array[String]): Unit = {
    val inputPath = args(0)
    val outputPath = args(1)
    val sc = new SparkContext()
    val lines = sc.textFile(inputPath)
    val wordCounts = lines.flatMap { line => line.split(" ") }
      .map(word => (word, 1))
      .reduceByKey(_ + _)  // I can't understand this line
    wordCounts.saveAsTextFile(outputPath)
  }
}

reduce takes two elements and produces a third by applying a function to those two arguments.
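
For instance, the same idea on a plain Scala collection, independent of Spark (a minimal sketch):

 // reduce repeatedly combines two elements into one until a single value remains
 List(1, 2, 3).reduce((x, y) => x + y)  // ((1 + 2) + 3) = 6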

The code you showed is equivalent to the following:

 reduceByKey((x, y) => x + y)
Scala doesn't require you to name dummy variables and write out a full lambda; it is smart enough to figure out that what you are trying to achieve is to apply a function (in this case, addition) to any two arguments it receives, hence the syntax

 reduceByKey(_ + _) 
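
To see that the underscore form is just shorthand, here is a minimal sketch of three equivalent ways to pass the same function (wordPairs is a hypothetical RDD[(String, Int)], like the (word, 1) pairs built in the question):

 def sum(x: Int, y: Int): Int = x + y

 wordPairs.reduceByKey(sum)               // named method, eta-expanded to a function
 wordPairs.reduceByKey((x, y) => x + y)   // explicit lambda
 wordPairs.reduceByKey(_ + _)             // placeholder shorthand for the same lambda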

reduceByKey takes a function of two arguments, applies it to the values that share a key, and returns the combined result.

reduceByKey(_ + _) is equivalent to reduceByKey((x, y) => x + y)

For example:

val numbers = Array(1, 2, 3, 4, 5)
val sum = numbers.reduceLeft[Int](_+_)

println("The sum of the numbers one through five is " + sum)
Result:

The sum of the numbers one through five is 15
numbers: Array[Int] = Array(1, 2, 3, 4, 5)
sum: Int = 15
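reduceLeft folds from the left, so the sum above is computed step by step as (((1 + 2) + 3) + 4) + 5 = 15.
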
In the same way, reduceByKey(_ + _) is equivalent to reduceByKey((x, y) => x + y).
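
Concretely, in the word-count job the function is applied within each key's group of values. A minimal sketch, assuming a SparkContext named sc (for example from spark-shell):

 val pairs = sc.parallelize(Seq(("to", 1), ("be", 1), ("to", 1)))
 pairs.reduceByKey(_ + _).collect()
 // Array((to,2), (be,1)) -- each key's values summed; output order may vary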