Apache Spark: cannot write out or apply groupBy on a Spark DataFrame


I built my Spark DataFrame with the following code:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
scala> import sqlContext.implicits._

scala> case class Wiki(project: String, title: String, count: Int, byte_size: String)

scala> val data = sc.textFile("s3n://+++/").map(_.split(" ")).map(p => Wiki(p(0), p(1), p(2).trim.toInt, p(3)))

scala> val df = data.toDF()
and then tried to write it out:

scala> df.write.parquet("df.parquet")
or to run:

scala> df.filter("project = 'en'").select("title","count").groupBy("title").sum().collect()
Both fail with similar errors, like this one:

WARN TaskSetManager: Lost task 855.0 in stage 0.0 (TID 855, ip-172-31-10-195.ap-northeast-1.compute.internal): org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:251)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at $line24.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:28)
at $line24.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:28)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:242)
... 8 more

How should I interpret this error, and how can I fix it?

Make sure the split always returns an array of 4 fields. You probably have some malformed entries, or you are splitting on the wrong character.

Try filtering them out like this:

 val data = sc.textFile("s3n://+++/").map(_.split(" ")).filter(_.size == 4).map(p => Wiki(p(0), p(1), p(2).trim.toInt, p(3)))

and see whether the error persists. An ArrayIndexOutOfBoundsException right after a split usually means some records were parsed incorrectly. In your case, the index 2 suggests that p(2) could not be read, i.e. at least one record splits into only two values, p(0) and p(1).
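
As a quick illustration (not part of the original answer, and reusing the same placeholder path), you could count and sample the offending lines in the shell before parsing them:

 // sketch: inspect lines that do not split into exactly 4 fields
 val lines = sc.textFile("s3n://+++/")
 val bad = lines.map(_.split(" ")).filter(_.size != 4)
 println("malformed records: " + bad.count())
 bad.take(5).foreach(a => println(a.mkString("|")))
 // a truncated record reproduces the exact exception, e.g.
 // "en Main_Page".split(" ")(2)  // ArrayIndexOutOfBoundsException: 2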

Comments:
Is your Hadoop cluster healthy? @Reactormonk I think so. I launched the Spark cluster on AWS EMR and everything looks fine; I'm working interactively in the Spark shell.
Can you add your DataFrame schema? There are two different kinds of errors here. @eliasah Updated. It's not the exact printSchema output, since I've already terminated the cluster, but it should be correct.
Meaning what are you actually filtering on? It doesn't make any sense!
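
For reference (this is the standard DataFrame API, not output quoted from the post), the schema derived from the Wiki case class above can be printed directly in the shell; the commented output is what that case class should produce:

 scala> df.printSchema()
 // root
 //  |-- project: string (nullable = true)
 //  |-- title: string (nullable = true)
 //  |-- count: integer (nullable = false)
 //  |-- byte_size: string (nullable = true)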