Apache Spark: document count of words in Spark/Scala


I have a text variable which is an RDD of String in Scala:

val data = sc.parallelize(List("i am a good boy.Are you a good boy.","You are also working here.","I am posting here today.You are good."))
I have another variable which is a Scala Map (shown below):

// List of words whose document count is needed; the initial document count is 1

val dictionary = Map( """good""" -> 1,"""working""" -> 1,"""posting""" -> 1 )
I want to get a document count for each dictionary term and produce the output in key-value format.

For the above data, my output should look like this:

(good,2)

(working,1)

(posting,1)
What I have tried is:

dictionary.map { case(k,v) => k -> k.r.findFirstIn(data.map(line => line.trim()).collect().mkString(",")).size}
All my words are getting counted as 1.

Please help me fix this line of code.


Thanks in advance.
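For reference, the reason every term comes out as 1 is that findFirstIn returns an Option[String], whose size is at most one, and the attempt above joins all documents into a single string before searching. A minimal sketch of a per-document fix (an assumption about the intended behaviour, not the accepted approach):

val docCounts = dictionary.map { case (k, _) =>
  // For each term, count the documents (lines) containing at least one match.
  k -> data.filter(line => k.r.findFirstIn(line).isDefined).count()
}

Note that this launches one Spark job per dictionary term, which is fine for a small dictionary like this one.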

Why not use flatMap to create the dictionary? Then you can query it:

val dictionary = data.flatMap {case line => line.split(" ")}.map {case word => (word, 1)}.reduceByKey(_+_)
If I collect this in the REPL, I get the following result:

res9: Array[(String, Int)] = Array((here,1), (good.,1), (good,2), (here.,1), (You,1), (working,1), (today.You,1), (boy.Are,1), (are,2), (a,2), (posting,1), (i,1), (boy.,1), (also,1), (I,1), (am,2), (you,1))

Obviously you would need to do a better split than in my simple example.
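One possible refinement (a sketch under assumptions, not the only way to do it): lowercase the text, split on non-word characters, count each word at most once per document so the totals are document counts, and keep only the dictionary terms:

// Hypothetical refinement: normalize, split on non-word characters, and
// deduplicate within each document so the result is a document count.
val terms = Set("good", "working", "posting")
val docCounts = data
  .flatMap(line => line.toLowerCase.split("\\W+").filter(_.nonEmpty).distinct)
  .filter(terms.contains)
  .map(word => (word, 1))
  .reduceByKey(_ + _)

On the sample data, docCounts.collect() should yield (good,2), (working,1) and (posting,1) in some order.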

First of all, your dictionary should be a Set, because in general you want to map a set of terms to the number of documents that contain them.

So your data should look like this:

scala> val docs = List("i am a good boy.Are you a good boy.","You are also working here.","I am posting here today.You are good.")
docs: List[String] = List(i am a good boy.Are you a good boy., You are also working here., I am posting here today.You are good.)
And your dictionary should look like this:

scala> val dictionary = Set("good", "working", "posting")
dictionary: scala.collection.immutable.Set[String] = Set(good, working, posting)
Then you have to implement your transformation; with the simplest logic, based on the contains function, it might look like this:

scala> dictionary.map(k => k -> docs.count(_.contains(k))) toMap
res4: scala.collection.immutable.Map[String,Int] = Map(good -> 2, working -> 1, posting -> 1)
For a better solution, I'd recommend you implement a function specific to your requirements, of type

(String, String) => Boolean

that determines whether the term is present in a document:

scala> def foo(doc: String, term: String): Boolean = doc.contains(term)
foo: (doc: String, term: String)Boolean
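As an example of such a refinement (an assumed variant, not part of the original answer), a whole-word, case-insensitive match could look like this:

// Hypothetical variant of foo: match the term as a whole word, case-insensitively,
// so "good." still counts but "goodness" would not.
def containsTerm(doc: String, term: String): Boolean =
  ("(?i)\\b" + java.util.regex.Pattern.quote(term) + "\\b").r.findFirstIn(doc).isDefined

Either foo or a variant like this can be plugged into the count shown next.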
Then the final solution will look like this:

scala> dictionary.map(k => k -> docs.count(d => foo(d, k))) toMap
res3: scala.collection.immutable.Map[String,Int] = Map(good -> 2, working -> 1, posting -> 1)
The last thing you have to do is compute the resulting map using the SparkContext. First, you have to define the data you want to parallelize. Assuming we want to parallelize the collection of documents, the solution might look like this:

val docsRDD = sc.parallelize(List(
    "i am a good boy.Are you a good boy.", 
    "You are also working here.", 
    "I am posting here today.You are good."
))

// Helper that merges two partial result maps, summing the counts of keys
// present in both.
def merge(m1: Map[String, Int], m2: Map[String, Int]) =
  m1 ++ m2.map { case (k, v) => k -> (v + m1.getOrElse(k, 0)) }

// For each document, collect the dictionary terms it contains, then merge the
// per-document maps into one; on the sample data this should yield
// Map(good -> 2, working -> 1, posting -> 1).
docsRDD.mapPartitions(_.map(doc => dictionary.collect {
  case term if doc.contains(term) => term -> 1
})).map(_.toMap) reduce { case (m1, m2) => merge(m1, m2) }