Scala - Caused by: java.time.format.DateTimeParseException: Text '2020-05-12 10:23:45' could not be parsed, unparsed text found at index 10

I am creating a UDF that will find the first day of the week for me. The input to the UDF will be a string column from a DataFrame that stores datetimes in yyyy-MM-dd hh:mm:ss format.

I agree that the same thing can be built without a UDF, but I want to explore all the options for doing this. So far I am stuck on the UDF approach.

Important note - the week starts on Monday.

Code -
import org.apache.spark.sql.functions._
import java.time.format.DateTimeFormatter
import java.time.LocalDate
import org.joda.time.DateTimeConstants
val df1 = Seq((1, "2020-05-12 10:23:45", 5000), (2, "2020-11-11 12:12:12", 2000)).toDF("id", "DateTime", "miliseconds")

val findFirstDayOfWeek = udf((x: String) => {
  val dateFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd")
  val dayOfWeek = LocalDate.parse(x, dateFormat).getDayOfWeek
  if (dayOfWeek != DateTimeConstants.MONDAY) {
    val newDate = LocalDate.parse(x).plusDays(DateTimeConstants.MONDAY - dayOfWeek.getValue())
    val firstDateOfTheWeek = newDate.format(dateFormat)
    firstDateOfTheWeek
  } else {
    val newDate = x
    newDate.format(dateFormat)
  }
})

val udf_new_df1 = df1.withColumn("week", findFirstDayOfWeek(col("DateTime")))
But when I run display(udf_new_df1) I get this error (on Databricks) -
So my question is - why am I having trouble parsing a dateTime that is of type string and in the format yyyy-MM-dd hh:mm:ss?

Use LocalDateTime.parse(x.replace(' ', 'T')) or LocalDate.parse(x.split(' ')(0)) instead of LocalDate.parse(x) and LocalDate.parse(x, dateFormat).
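For illustration, here is how those two suggestions behave on the failing input, sketched with plain java.time outside Spark. The third, pattern-based variant is my own addition, using a formatter that actually matches the full string from the question:

```scala
import java.time.{LocalDate, LocalDateTime}
import java.time.format.DateTimeFormatter

val input = "2020-05-12 10:23:45"

// Option 1: make the string ISO-8601 by swapping the space for 'T'
val dt1: LocalDateTime = LocalDateTime.parse(input.replace(' ', 'T'))

// Option 2: keep only the date part and parse it as a LocalDate
val d: LocalDate = LocalDate.parse(input.split(' ')(0))

// Option 3 (assumption): a formatter that matches the whole timestamp
val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
val dt2: LocalDateTime = LocalDateTime.parse(input, fmt)

println(dt1) // 2020-05-12T10:23:45
println(d)   // 2020-05-12
```

The original code fails because `LocalDate.parse(x, dateFormat)` with pattern "yyyy-MM-dd" stops at index 10 and refuses the trailing " 10:23:45", which is exactly what the exception message says.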
Not sure why you want to use a UDF, but you can get the first day of the week without one, as below (the week starts from Monday).

Using the Spark built-in function date_trunc
val df1 = Seq((1, "2020-05-12 10:23:45", 5000), (2, "2020-11-11 12:12:12", 2000)).toDF("id", "DateTime", "miliseconds")
df1.withColumn("week", date_trunc("week", col("DateTime")))
  .show(false)
/**
  * +---+-------------------+-----------+-------------------+
  * |id |DateTime           |miliseconds|week               |
  * +---+-------------------+-----------+-------------------+
  * |1  |2020-05-12 10:23:45|5000       |2020-05-11 00:00:00|
  * |2  |2020-11-11 12:12:12|2000       |2020-11-09 00:00:00|
  * +---+-------------------+-----------+-------------------+
  */
Using a UDF

// convert dateTime -> date truncated to the first day of the week
val findFirstDayOfWeek = udf((x: String) => {
  val dateFormat = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
  val time = LocalDateTime.parse(x, dateFormat)
  val dayOfWeek = time.getDayOfWeek
  if (dayOfWeek.getValue != DateTimeConstants.MONDAY) {
    val newDateTime = time.plusDays(DateTimeConstants.MONDAY - dayOfWeek.getValue())
    java.sql.Date.valueOf(newDateTime.toLocalDate)
  } else {
    java.sql.Date.valueOf(time.toLocalDate)
  }
})

val udf_new_df1 = df1.withColumn("week", findFirstDayOfWeek(col("DateTime")))
udf_new_df1.show(false)
udf_new_df1.printSchema()
/**
  * +---+-------------------+-----------+----------+
  * |id |DateTime           |miliseconds|week      |
  * +---+-------------------+-----------+----------+
  * |1  |2020-05-12 10:23:45|5000       |2020-05-11|
  * |2  |2020-11-11 12:12:12|2000       |2020-11-09|
  * +---+-------------------+-----------+----------+
  *
  * root
  *  |-- id: integer (nullable = false)
  *  |-- DateTime: string (nullable = true)
  *  |-- miliseconds: integer (nullable = false)
  *  |-- week: date (nullable = true)
  */
Hi @Andriy, thanks for the prompt reply. So I replaced the body of the IF/ELSE with - if (dayOfWeek != DateTimeConstants.MONDAY) { val newDate = LocalDateTime.parse(x.replace(' ', 'T')).plusDays(DateTimeConstants.MONDAY - dayOfWeek.getValue()); newDate } else { val newDate = x; newDate } - but I still got the same error. Do you mean I should do this conversion before using the UDF? Thanks @Someshwar, I agree there is no requirement to use a UDF here. The only reason I want to do it this way is to explore all the options for performing this operation, and I am facing some issues while implementing it through a UDF. So in addition to your answer, I would also request your help in completing this UDF. Thanks in advance. Thanks @Someshwar, in one of my earlier questions I recently found out that a few classes such as java.sql.Date, Calendar and SimpleDateFormat are poorly designed and buggy. Source -. But thanks for your help, I understand now what I was doing wrong.
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$1: (string) => string)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:152)
at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(InterpretedMutableProjection.scala:62)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$23$$anonfun$applyOrElse$23.apply(Optimizer.scala:1471)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$23$$anonfun$applyOrElse$23.apply(Optimizer.scala:1471)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:296)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$23.applyOrElse(Optimizer.scala:1471)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$23.applyOrElse(Optimizer.scala:1466)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:280)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:280)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:77)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:279)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$8.apply(TreeNode.scala:354)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:208)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:352)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$8.apply(TreeNode.scala:354)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:208)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:352)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:269)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$.apply(Optimizer.scala:1466)
at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$.apply(Optimizer.scala:1465)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:112)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:109)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:35)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:109)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:101)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:101)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$executeAndTrack$1.apply(RuleExecutor.scala:80)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$executeAndTrack$1.apply(RuleExecutor.scala:80)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:79)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$optimizedPlan$1.apply(QueryExecution.scala:94)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$optimizedPlan$1.apply(QueryExecution.scala:94)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:93)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:93)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$2.apply(QueryExecution.scala:263)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$2.apply(QueryExecution.scala:263)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:147)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:263)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:102)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:240)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:97)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:170)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3441)
at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:2832)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:149)
at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:54)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBufferInternal$1$$anonfun$apply$1.apply(ScalaDriverLocal.scala:318)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBufferInternal$1$$anonfun$apply$1.apply(ScalaDriverLocal.scala:303)
at scala.Option.map(Option.scala:146)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBufferInternal$1.apply(ScalaDriverLocal.scala:303)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$getResultBufferInternal$1.apply(ScalaDriverLocal.scala:267)
at scala.Option.map(Option.scala:146)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.getResultBufferInternal(ScalaDriverLocal.scala:267)
at com.databricks.backend.daemon.driver.DriverLocal.getResultBuffer(DriverLocal.scala:463)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:244)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:373)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:350)
at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:350)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
at scala.util.Try$.apply(Try.scala:192)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.time.format.DateTimeParseException: Text '2020-05-12 10:23:45' could not be parsed, unparsed text found at index 10
at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1952)
at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
at java.time.LocalDate.parse(LocalDate.java:400)
at linedde9e8e2c7794f68a6e16898b7ed370036.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(command-14467074:14)
at linedde9e8e2c7794f68a6e16898b7ed370036.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(command-14467074:11)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2.apply(ScalaUDF.scala:108)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2.apply(ScalaUDF.scala:107)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1063)
... 100 more
$ scala
Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 1.8.0_252).
Type in expressions for evaluation. Or try :help.
scala> java.time.LocalDateTime.parse("2020-05-12 10:23:45".replace(' ', 'T'))
res0: java.time.LocalDateTime = 2020-05-12T10:23:45
scala> java.time.LocalDate.parse("2020-05-12 10:23:45".split(' ')(0))
res1: java.time.LocalDate = 2020-05-12
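As an aside (my own sketch, not from the answers above): java.time already ships a TemporalAdjusters helper that removes both the manual day arithmetic and the joda-time DateTimeConstants dependency; the same function could be wrapped in udf(...) unchanged:

```scala
import java.time.{DayOfWeek, LocalDateTime}
import java.time.format.DateTimeFormatter
import java.time.temporal.TemporalAdjusters

val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")

// Snap any timestamp string back to the Monday of its week
def firstDayOfWeek(s: String): String =
  LocalDateTime.parse(s, fmt)
    .toLocalDate
    .`with`(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY))
    .toString

println(firstDayOfWeek("2020-05-12 10:23:45")) // 2020-05-11
println(firstDayOfWeek("2020-11-11 12:12:12")) // 2020-11-09
```

previousOrSame(MONDAY) is a no-op when the date already falls on a Monday, so no if/else is needed.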