Apache Spark: error when adding a schema to a Spark DataFrame loaded from a file


Here is the code I am running against test.csv (a sample of the data appears below):

val tableDF = spark.read.option("delimiter",",").csv("/Volumes/Data/ap/click/test.csv")
import org.apache.spark.sql.types.{StringType, StructField, StructType, IntegerType}

val schemaTd = StructType(List(
  StructField("time_id", IntegerType),
  StructField("week", IntegerType),
  StructField("month", IntegerType),
  StructField("calendar", StringType)))

val result = spark.createDataFrame(tableDF,schemaTd)
Every column in the file is of Int type except the last value, but it still errors out when I call result.show (the full stack trace is at the end of this post). A sample of the test.csv data:

6659,951,219,2018-03-25 00:00:00
6641,949,219,2018-03-07 00:00:00
6645,949,219,2018-03-11 00:00:00
6638,948,219,2018-03-04 00:00:00
6646,950,219,2018-03-12 00:00:00
6636,948,219,2018-03-02 00:00:00
6643,949,219,2018-03-09 00:00:00

In this case, you should provide the schema to the DataFrameReader:

import org.apache.spark.sql.types._

val schemaTd = StructType(List(
  StructField("time_id", IntegerType),
  StructField("week", IntegerType),
  StructField("month", IntegerType),
  StructField("calendar", StringType)))

val tableDF = spark.read.option("delimiter", ",")
  .schema(schemaTd)
  .csv("/Volumes/Data/ap/click/test.csv")
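
If exact control over the types is not needed, the CSV reader can also infer them from the data instead of taking a hand-written schema. A sketch using the reader's inferSchema option (the inferredDF name is just illustrative):

// Let Spark sample the file and guess each column's type;
// inferred types are a best guess, not a guarantee.
val inferredDF = spark.read
  .option("delimiter", ",")
  .option("inferSchema", "true")
  .csv("/Volumes/Data/ap/click/test.csv")
inferredDF.printSchema()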

When creating a Dataset from an RDD[Row] (I assume your actual code is spark.createDataFrame(tableDF.rdd, schemaTd); otherwise it shouldn't really compile), the types have to agree with the schema. You cannot provide a String (the CSV reader's default type for every column) and declare the schema with IntegerType.
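If you really do need to go through createDataFrame, the Row values themselves have to be converted to the declared types first. A minimal sketch of that idea, assuming tableDF was read without a schema so every field is a String (convertedRdd is just an illustrative name):

import org.apache.spark.sql.Row

// Convert each String field to the type declared in schemaTd
// before handing the RDD[Row] to createDataFrame.
val convertedRdd = tableDF.rdd.map { r =>
  Row(r.getString(0).toInt,  // time_id
      r.getString(1).toInt,  // week
      r.getString(2).toInt,  // month
      r.getString(3))        // calendar stays a String
}
val result = spark.createDataFrame(convertedRdd, schemaTd)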

Thanks, this is very helpful. So once the schema is declared in .schema, it cannot be changed? What if I need to change a column's data type? How would I do that?
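
As for the follow-up: a DataFrame's schema cannot be mutated in place, but you can derive a new DataFrame with a column cast to another type. A minimal sketch, casting the calendar column of the schema-loaded tableDF to a timestamp (the column choice is just for illustration):

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.TimestampType

// withColumn replaces the column with a cast copy; the original
// DataFrame is left untouched.
val retyped = tableDF.withColumn("calendar", col("calendar").cast(TimestampType))
retyped.printSchema()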
For reference, here is the full error from running result.show with the original createDataFrame approach:

scala> result.show
2018-05-20 17:08:54 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of int
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, time_id), IntegerType) AS time_id#23
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, week), IntegerType) AS week#24
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, month), IntegerType) AS month#25
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, calendar), StringType), true, false) AS calendar#26
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:291)
    at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:589)
    at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:589)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of int
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:288)