Spark SQL datetime functions in Scala work for times in AM but not PM format

Tags: scala, apache-spark, apache-spark-sql

I am converting date-times from AM/PM notation to 24-hour time. The conversion works for AM values but fails for PM values, returning null. Please see the sample below.

val seq = Seq((1,"abc","123","15/3/2021 02:00:00 AM"),(2,"pqr","456","15/3/2021 04:00:00 PM"),(1,"xyz","789","15/3/2021 09:00:00 AM"))

val df = seq.toDF("id","name","addr","time")

val time = df.withColumn("time2",from_unixtime(unix_timestamp($"time","dd/MM/yyyy HH:mm:ss a"),"d MMMMM yyyy HH:mm:ss"))

+---+----+----+---------------------+----------------------+
|id |name|addr|time                 |time2                 |
+---+----+----+---------------------+----------------------+
|1  |abc |123 |15/3/2021 02:00:00 AM|15 March 2021 02:00:00|
|2  |pqr |456 |15/3/2021 04:00:00 PM|null                  |
|1  |xyz |789 |15/3/2021 09:00:00 AM|15 March 2021 09:00:00|
+---+----+----+---------------------+----------------------+

Can anyone suggest a fix here?

Use lowercase h for the clock hour of AM/PM (1-12); uppercase H is the hour of day (0-23), which conflicts with the parsed AM/PM marker. Also use a single M, since the month in the input is a single digit without a leading zero, and for the full month name in the output use four M's (MMMM) rather than five.

val time = df.withColumn(
    "time2",
    from_unixtime(unix_timestamp($"time","dd/M/yyyy hh:mm:ss a"),"d MMMM yyyy HH:mm:ss")
)

time.show(false)
+---+----+----+---------------------+----------------------+
|id |name|addr|time                 |time2                 |
+---+----+----+---------------------+----------------------+
|1  |abc |123 |15/3/2021 02:00:00 AM|15 March 2021 02:00:00|
|2  |pqr |456 |15/3/2021 04:00:00 PM|15 March 2021 16:00:00|
|1  |xyz |789 |15/3/2021 09:00:00 AM|15 March 2021 09:00:00|
+---+----+----+---------------------+----------------------+
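To see why HH produces null for PM while hh works, the same patterns can be tried directly against java.time, which Spark 3's non-legacy datetime parser is built on (unix_timestamp simply swallows the parse error and yields null). A minimal sketch outside Spark:

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import java.util.Locale

object PatternDemo extends App {
  val input = "15/3/2021 04:00:00 PM"

  // Correct: lowercase 'h' is the 1-12 clock hour that pairs with the 'a' marker.
  val good = DateTimeFormatter.ofPattern("d/M/yyyy hh:mm:ss a", Locale.ENGLISH)
  println(LocalDateTime.parse(input, good).getHour) // 16

  // Wrong: uppercase 'H' is the 0-23 hour of day, so "04" already means 4 AM.
  // That contradicts the parsed "PM" marker, and the formatter's cross-check
  // rejects the result -- Spark turns this exception into a null.
  val bad = DateTimeFormatter.ofPattern("d/M/yyyy HH:mm:ss a", Locale.ENGLISH)
  try LocalDateTime.parse(input, bad)
  catch { case e: Exception => println(s"parse failed: ${e.getClass.getSimpleName}") }
}
```

An AM input such as "15/3/2021 02:00:00 AM" happens to pass the cross-check under HH (hour 2 really is AM), which is why only the PM rows came back null.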