Scala: reading data from a CSV where columns have empty values


Environment: spark-3.0.1-bin-hadoop2.7, ScalaLibraryContainer 2.12.3, Scala, Spark SQL, eclipse-jee-oxygen-2-linux-gtk-x86_64

I have a CSV file with three columns of types String, Long and Date. I have converted the CSV file into a DataFrame and want to show it, but it throws the following error:

java.lang.ArrayIndexOutOfBoundsException: 2
at org.apache.spark.examples.sql.SparkSQLExample5$.$anonfun$runInferSchemaExample$2(SparkSQLExample5.scala:30)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:448)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:448)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at this line of the Scala code:

.map(attributes => Person(attributes(0), attributes(1),attributes(2))).toDF();
The error occurs when a subsequent row has fewer values than the header row. Basically, I am trying to read data from a CSV using Scala and Spark where columns have empty values.

The rows have different numbers of columns. If every row has three column values, it runs successfully.
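The reason `attributes(2)` throws is that Scala (Java) `String.split` drops trailing empty strings unless a negative limit is passed, so a line ending in a comma yields fewer tokens than expected. A minimal standalone illustration (not part of the original post):

```scala
// Minimal illustration of why attributes(2) can throw: String.split drops
// trailing empty strings unless a negative limit is given.
object SplitDemo {
  def main(args: Array[String]): Unit = {
    val line = "row21,row22,"            // the last field is empty
    println(line.split(",").length)      // prints 2: the trailing empty field is dropped
    println(line.split(",", -1).length)  // prints 3: a negative limit keeps empty fields
  }
}
```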

package org.apache.spark.examples.sql

import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
import java.sql.Date
import org.apache.spark.sql.functions._
import java.util.Calendar;

object SparkSQLExample5 {

  case class Person(name: String, age: String, birthDate: String)

  def main(args: Array[String]): Unit = {
    val fromDateTime = java.time.LocalDateTime.now;
    val spark = SparkSession.builder().appName("Spark SQL basic example").config("spark.master", "local").getOrCreate();
    import spark.implicits._
    runInferSchemaExample(spark);
    spark.stop()
  }

  private def runInferSchemaExample(spark: SparkSession): Unit = {
    import spark.implicits._
    println("1. Creating an RDD of 'Person' object and converting into 'Dataframe' " +
      " 2. Registering the DataFrame as a temporary view.")
    println("1. Third column of second row is not present.Last value of second row is comma.")
    val peopleDF = spark.sparkContext
      .textFile("examples/src/main/resources/test.csv")
      .map(_.split(","))
      .map(attributes => Person(attributes(0), attributes(1), attributes(2))).toDF();
    val finalOutput = peopleDF.select("name", "age", "birthDate")
    finalOutput.show();
  }
}

CSV file:

col1,col2,col3
row21,row22,
row31,row32,

Try PERMISSIVE mode when reading the CSV file; it will add NULL for the missing fields:

val df = spark.sqlContext.read.format("csv").option("mode", "PERMISSIVE").load("examples/src/main/resources/test.csv")

You can find more information here.
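For completeness, a minimal sketch of the same PERMISSIVE read through `spark.read`; the `header` option and the local master are assumptions made for this example, not something stated in the answer above:

```scala
import org.apache.spark.sql.SparkSession

object PermissiveReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Permissive CSV read")
      .config("spark.master", "local")
      .getOrCreate()

    // PERMISSIVE is Spark's default parse mode: short or malformed rows are kept
    // and missing fields become null instead of failing the job.
    val df = spark.read
      .option("header", "true")      // assumes the first line of test.csv is a header
      .option("mode", "PERMISSIVE")
      .csv("examples/src/main/resources/test.csv")

    df.show(false)
    spark.stop()
  }
}
```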
Input: CSV file

col1,col2,col3
row21,row22,
row31,row32,
Code:

import org.apache.spark.sql.SparkSession

object ReadCsvFile {

  case class Person(name: String, age: String, birthDate: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("Spark SQL basic example").config("spark.master", "local").getOrCreate();
    readCsvFileAndInferCustomSchema(spark);
    spark.stop()
  }

  private def readCsvFileAndInferCustomSchema(spark: SparkSession): Unit = {
    // No header option is set, so the header line is read as an ordinary data row
    // and the columns get the default names _c0, _c1, _c2.
    val df = spark.read.csv("C:/Users/Ralimili/Desktop/data.csv")
    // Drop the header row (the first row of the first partition).
    val rdd = df.rdd.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(1) else iter }
    // Missing trailing fields are already null in the parsed rows, so getString(2)
    // returns null instead of throwing an ArrayIndexOutOfBoundsException.
    val mapRdd = rdd.map(attributes => {
      Person(attributes.getString(0), attributes.getString(1), attributes.getString(2))
    })
    val finalDf = spark.createDataFrame(mapRdd)
    finalDf.show(false);
  }

}
Output:

+-----+-----+---------+
|name |age  |birthDate|
+-----+-----+---------+
|row21|row22|null     |
|row31|row32|null     |
+-----+-----+---------+
If you want to fill in some value instead of nulls, use the code below:

 val customizedNullDf = finalDf.na.fill("No data")
 customizedNullDf.show(false);
Output:

+-----+-----+---------+
|name |age  |birthDate|
+-----+-----+---------+
|row21|row22|No data  |
|row31|row32|No data  |
+-----+-----+---------+
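As a small variant (a sketch building on `finalDf` from the code above, not part of the original answer), `na.fill` also accepts a per-column map if only specific columns should be filled:

```scala
// Fill nulls only in the birthDate column; other columns are left untouched.
val filledByColumn = finalDf.na.fill(Map("birthDate" -> "No data"))
filledByColumn.show(false)
```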

@nagraj036 … thanks for the reply! I am using spark.sparkContext, where .option is not available.

Are you loading the data using the textFile method or the csv method? Options are not available on the textFile method, I guess. Thanks, please reply.

I am using the textFile method, so what is the solution for this (the textFile method)? Or do I have to use the csv method?
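For reference, if the question's textFile/RDD approach has to be kept, one way to avoid the ArrayIndexOutOfBoundsException is to pass a negative limit to `split` so trailing empty fields are preserved. This is a hedged sketch along the lines of the question's code, not a solution given in the thread; the empty string is mapped to null by hand here:

```scala
import org.apache.spark.sql.SparkSession

object TextFileCsvExample {
  case class Person(name: String, age: String, birthDate: String)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("textFile CSV with empty trailing fields")
      .config("spark.master", "local")
      .getOrCreate()
    import spark.implicits._

    val peopleDF = spark.sparkContext
      .textFile("examples/src/main/resources/test.csv")
      .map(_.split(",", -1))                                        // -1 keeps trailing empty fields
      .map(a => Person(a(0), a(1), if (a(2).isEmpty) null else a(2)))
      .toDF()

    // Note: like the question's code, this does not filter out a header line.
    peopleDF.show(false)
    spark.stop()
  }
}
```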