Apache Spark indexer error: list index out of range when using the reduceByKey operation in the PySpark shell
Goal: find the top video categories from a YouTube dataset
Using: the PySpark shell
Expected: categories and their occurrence counts
Actual: an error when calling reduceByKey — list index out of range

I tried the following code:
data = "/Users/sk/Documents/GitRepository/Udemy_BigData_spark/1.txt"
input = sc.textFile(data)
results = input.map(lambda x: x.split('\t')[3].encode("utf-8").replace('"', '').replace("'", ''))
results.take(20)
The output is:
['Comedy', 'Comedy', 'Entertainment', 'People & Blogs', 'People &
Blogs', 'Music', 'Comedy', 'People & Blogs', 'Entertainment',
'Entertainment', 'Entertainment', 'Entertainment', 'Entertainment',
'Entertainment', 'Entertainment', 'Entertainment', 'Entertainment',
'Entertainment', 'Entertainment', 'Entertainment']
results=results.map(lambda x: (x,1))
The output is:
[('Comedy', 1), ('Comedy', 1), ('Entertainment', 1), ('People & Blogs', 1), ('People & Blogs', 1), ('Music', 1), ('Comedy', 1), ('People & Blogs', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1), ('Entertainment', 1)]
results=results.reduceByKey(lambda x, y: x + y)
results.take(20)
This throws a huge error:
I expect it to show me results like:
(179049,Music), (127674,Entertainment), (87818,Comedy), (73293,Film &
Animation), (67329,Sports)
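A likely cause of the "list index out of range" error (a guess, since the dataset itself is not shown) is that some lines in 1.txt have fewer than four tab-separated fields, so x.split('\t')[3] fails on those lines inside the worker. A minimal sketch of a guarded extractor, written as a plain function so it can be tested without a cluster; parse_category is a hypothetical helper name:

```python
def parse_category(line):
    """Return the 4th tab-separated field, or None if the line is malformed."""
    fields = line.split('\t')
    if len(fields) > 3:
        return fields[3].replace('"', '').replace("'", '')
    return None

# With an RDD this would plug in as:
#   results = input.map(parse_category).filter(lambda c: c is not None)
#   counts = results.map(lambda c: (c, 1)).reduceByKey(lambda a, b: a + b)

# Plain-Python demonstration on made-up lines (one deliberately malformed):
lines = ["id\tuploader\tdate\tComedy", "broken line", "id\tuploader\tdate\tMusic"]
cats = [c for c in map(parse_category, lines) if c is not None]
```

The filter step drops the malformed rows before reduceByKey ever sees them, which is where the index error would otherwise surface.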
The code I wrote is in Scala:
val ds = Seq("A", "B", "C", "A", "B", "C", "D", "E", "F", "G", "A")
  .toDF.as[String].map(x => (x, 1))
ds.groupByKey(x => x._1)
.reduceGroups((l, r) => (l._1, l._2+r._2))
.show
Output:
+-----+------------------------------+
|value|ReduceAggregator(scala.Tuple2)|
+-----+------------------------------+
| F| [F, 1]|
| E| [E, 1]|
| B| [B, 2]|
| D| [D, 1]|
| C| [C, 2]|
| A| [A, 3]|
| G| [G, 1]|
+-----+------------------------------+
Hey Elior, I'm looking for Python code. With Scala I can get it to work, but the task is to do it with PySpark.
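For the PySpark side, the Scala groupByKey/reduceGroups answer maps onto the RDD reduceByKey call the question already uses. A sketch of the same word-count pattern, with the aggregation modeled in plain Python (reduce_by_key is a hypothetical helper mirroring what RDD.reduceByKey does with addition) so it runs without a SparkContext:

```python
def reduce_by_key(pairs):
    """Plain-Python model of RDD.reduceByKey(lambda a, b: a + b)."""
    out = {}
    for key, value in pairs:
        out[key] = out.get(key, 0) + value
    return out

# The RDD version, assuming sc is a live SparkContext:
#   counts = (sc.parallelize(data)
#               .map(lambda x: (x, 1))
#               .reduceByKey(lambda a, b: a + b))
#   counts.sortBy(lambda kv: -kv[1]).take(5)

data = ["A", "B", "C", "A", "B", "C", "D", "E", "F", "G", "A"]
counts = reduce_by_key((x, 1) for x in data)
```

This produces the same per-key totals as the Scala output above ("A" -> 3, "B" -> 2, and so on); sorting by the count descending then gives the top-categories listing the question asks for.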