Scala: counting word occurrences in tuples

I have a dataset similar to the following example:
tmj_dc_mgmt, Washington, en, 483, 457, 256, ['hiring', 'BusinessMgmt', 'Washington', 'Job']
SRiku0728, 福山市, ja, 6705, 357, 273, ['None']
BesiktaSeyma_, Akyurt, tr, 12921, 1801, 283, ['None']
AnnaKFrick, Virginia, en, 5731, 682, 1120, ['Investment', 'PPP', 'Bogota', 'jobs']
Accprimary, Manchester, en, 1650, 268, 404, ['None']
Wandii_S, Johannesburg, en, 15510, 828, 398, ['None']
The entries inside the square brackets are the hashtags; records with no hashtags contain only 'None'.

I am trying to find the top 10 hashtags in the dataset using Spark and Scala.

This is what I have so far:
val file = sc.textFile("/data")
val tmp1 = file
  .map(_.split(","))                  // split each line on commas
  .map(p => p(6))                     // take the 7th comma-separated field
  .map(_.replaceAll("\\[|\\]", ""))   // strip the square brackets
  .map(_.replaceAll("'", ""))         // strip the quotes
  .filter(x => x != " None")          // drop records without hashtags
  .map(word => (word, 1))
  .reduceByKey(_ + _)
I don't know how to sort this and take the top 10 from it; I am new to Scala and Spark. Any help would be greatly appreciated.

Maybe you can try using sortBy and then take.
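The sortBy-then-take idea can be illustrated on a plain Scala List standing in for the (hashtag, count) pairs; the RDD calls are analogous. This is a sketch with made-up counts, not the asker's data:

```scala
// Stand-in for the (hashtag, count) pairs produced by reduceByKey.
val counts = List(("hiring", 1), ("BusinessMgmt", 2), ("Investment", 2), ("Job", 1))

// Sort descending by count, then keep the first 10 entries.
// On an RDD this would be: tmp1.sortBy(-_._2).take(10)
val top10 = counts.sortBy(-_._2).take(10)

top10.foreach(println)
```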
You can find more information about RDD functions in the Spark documentation. You can achieve what you want with top and a custom Ordering:
val r = sc.parallelize(Seq(
"tmj_dc_mgmt, Washington, en, 483, 457, 256, ['hiring', 'BusinessMgmt', 'Washington', 'Job']",
"SRiku0728, 福山市, ja, 6705, 357, 273, ['None']",
"BesiktaSeyma_, Akyurt, tr, 12921, 1801, 283, ['None']",
"AnnaKFrick, Virginia, en, 5731, 682, 1120, ['Investment', 'PPP', 'BusinessMgmt', 'Bogota', 'jobs']",
"Accprimary, Manchester, en, 1650, 268, 404, ['None']",
"Wandii_S, Johannesburg, en, 15510, 828, 398, ['None']",
"Wandii_S, Johannesburg, en, 15510, 828, 398, ['Investment']"
))
val tag = ".*\\[([^\\]]*)\\]".r
val ordering = Ordering.by[(String, Int), Int](_._2)
r.collect { case tag(t) => t.split(",\\s*") }   // keep only the bracketed part, split on commas
 .flatMap(_.map(_.drop(1).dropRight(1)))        // strip the surrounding quotes
 .filter(_ != "None")
 .map(_ -> 1)
 .reduceByKey(_ + _)
 .top(10)(ordering)
 .foreach(println)
Result:
(BusinessMgmt,2)
(Investment,2)
(Washington,1)
(Bogota,1)
(PPP,1)
(jobs,1)
(Job,1)
(hiring,1)
I modified your test data to illustrate repeated values.
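For reference, `Ordering.by[(String, Int), Int](_._2)` builds an ordering that compares pairs by their second element (the count), so `top(10)(ordering)` returns the 10 largest counts without fully sorting the RDD. The ordering itself can be checked on plain pairs:

```scala
// Compare (tag, count) pairs by their count only.
val ord = Ordering.by[(String, Int), Int](_._2)

val pairs = List(("a", 3), ("b", 1), ("c", 2))

// The maximum under this ordering is the highest-count pair.
println(pairs.max(ord))   // (a,3)
```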
Alternatively, if the distinct hashtags fit in the driver's memory, you can use countByValue instead of reduceByKey and perform the final sort locally:
r.collect { case tag(t) => t.split(",\\s*") }
 .flatMap(_.map(_.drop(1).dropRight(1)))
 .filter(_ != "None")
 .countByValue()                 // returns a local Map[String, Long] on the driver
 .toList
 .sortBy(-_._2)                  // sort descending by count
 .take(10)
 .foreach(println)
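The regex extraction used above can be checked in isolation on a single sample line, without Spark:

```scala
// Same pattern as in the answer: capture everything between '[' and ']'.
val tag = ".*\\[([^\\]]*)\\]".r

val line = "tmj_dc_mgmt, Washington, en, 483, 457, 256, ['hiring', 'BusinessMgmt', 'Washington', 'Job']"

val tags = line match {
  case tag(t) => t.split(",\\s*").map(_.drop(1).dropRight(1)).toList  // strip the quotes
  case _      => Nil
}

println(tags)   // List(hiring, BusinessMgmt, Washington, Job)
```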
Also note that I used a different approach to extract the hashtags, because I believe yours gives incorrect results: when you pick the 6th column you get ['hiring and ['Investment rather than the complete lists.

It works, thanks. I also tried doing it like this (not sure whether it's correct):

val tmp1 = file.map(_.split(",")).map(p => p(6)).map(_.replaceAll("\\[|\\]", "")).map(_.replaceAll("'", "")).filter(x => x != " None").map(word => (word, 1)).reduceByKey(_ + _).sortBy(x => (-x._2, x._1)).take(10).foreach(println)

I think that is also a valid approach, but it may be a bit hard for some to read. Also, if you only want the top 10, a full sort will be slower than top.