Apache Spark / PySpark: Can't perform column operations on datetime columns with year = 0001

I have some data with timestamps in the format "0001-mm-dd HH:MM:SS". I'm trying to get the minimum time. To get the minimum, I need to convert to DoubleType first, because the min function on PySpark DataFrames apparently doesn't work on timestamps. However, for some reason datetimes hate the year 0001, and no matter what I do I can't get it to work. Below, I try to bump the year up by 1 manually with a UDF, but for some reason the change doesn't register. If I instead use a different data column that has no year-0001 values and change the if statement in the function to a year that is in the data, I can see the year change.

What am I doing wrong?

import datetime
import time

from pyspark.sql import SQLContext
import pyspark.sql.functions as sfunc
import pyspark.sql.types as tp
from pyspark import SparkConf
from dateutil.relativedelta import relativedelta

columnname='x'
#columnname='y'
tmpdf.select(columnname).show(5)

def timeyearonecheck(date):
    '''Datetimes breaks down at year = 0001, so bump up the year to 0002'''
    if date.year == 1:
        newdate=date+relativedelta(years=1)
        return newdate
    else:
        return date

def timeConverter(timestamp):
    '''Takes either a TimestampType() or a DateType() and converts it into a 
    float'''
    timetuple=timestamp.timetuple()
    if type(timestamp) == datetime.date:
        timevalue=time.mktime(timetuple)
        return int(timevalue)
    else:
        timevalue=time.mktime(timetuple)+timestamp.microsecond/1000000
        return timevalue

tmptimedf1colname='tmpyeartime'
yearoneudf=sfunc.udf(timeyearonecheck,tp.TimestampType())
tmptimedf1=tmpdf.select(yearoneudf(sfunc.col(columnname)).alias(tmptimedf1colname))
tmptimedf2colname='numbertime'
timeudf=sfunc.udf(timeConverter,tp.DoubleType())
tmptimedf2=tmptimedf1.select(timeudf(sfunc.col(tmptimedf1colname)).alias(tmptimedf2colname))
minimum=tmptimedf2.select(tmptimedf2colname).rdd.min()[0]


+-------------------+
|                  x|
+-------------------+
|0001-01-02 00:00:00|
|0001-01-02 00:00:00|
|0001-01-02 00:00:00|
|0001-01-02 00:00:00|
|0001-01-02 00:00:00|
+-------------------+
only showing top 5 rows

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-42-b5725bf01860> in <module>()
 17 timeudf=sfunc.udf(timeConverter,tp.DoubleType())
 18 tmptimedf2=tmpdf.select(timeudf(sfunc.col(columnname)).alias(tmptimedf2colname))
---> 19 minimum=tmptimedf2.select(tmptimedf2colname).rdd.min()[0]
 20 print(minimum)
...
Py4JJavaError: An error occurred while calling 
z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 
in stage 43.0 failed 4 times, most recent failure: Lost task 3.3 in stage 
43.0 (TID 7829, 10.10.12.41, executor 39): 
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
ValueError: year 0 is out of range
But only when I use the column that contains year 0001, not the 'y' column, which has no year-0001 values. The 'y' column works fine.

I don't understand why I can show 5 values from tmpdf that contain 0001, but I can't select the first value just because it contains 0001.

Edit: As mentioned below, I'd really like to convert year 0001 to 0002, because PySpark's approxQuantile doesn't work on timestamps, and in general I don't know the dataset well enough to say which years are acceptable. Year 0001 is definitely a fill value, but in my data 1970 could well be a real year (in the general case I work with).
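(Side note: a minimal sketch of a possible workaround for the approxQuantile limitation, assuming quantiles expressed in epoch seconds are acceptable; tmpdf and columnname are reused from the code above, and the names numericdf/xseconds are made up for the sketch.)

# Hedged sketch: approxQuantile only accepts numeric columns, so cast the
# timestamp to double (seconds since the epoch) before computing quantiles.
numericdf = tmpdf.select(sfunc.col(columnname).cast("double").alias("xseconds"))
quantiles = numericdf.approxQuantile("xseconds", [0.25, 0.5, 0.75], 0.01)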

So far, I've tried this:

def tmpfunc(timestamp):
    time=datetime.datetime.strptime(timestamp,'%Y-%m-%d %H:%M:%S')
    return time

adf=datadf.select(sfunc.col(columnname).cast("string").alias('a'))
newdf = adf.withColumn('b',sfunc.regexp_replace('a', '0001-', '0002-'))
newdf.show(10)
print(newdf.first())
tmpudf=sfunc.udf(tmpfunc,tp.TimestampType())
newnewdf=newdf.select(tmpudf(sfunc.col('b')).alias('c'))
newnewdf.show(10)
print(newnewdf.first())

+-------------------+-------------------+
|                  a|                  b|
+-------------------+-------------------+
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|2015-10-13 09:56:09|2015-10-13 09:56:09|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
|2013-11-05 21:28:09|2013-11-05 21:28:09|
|1993-12-24 03:52:47|1993-12-24 03:52:47|
|0001-01-02 00:00:00|0002-01-02 00:00:00|
+-------------------+-------------------+
only showing top 10 rows

Row(a='0001-01-02 00:00:00', b='0002-01-02 00:00:00')
+-------------------+
|                  c|
+-------------------+
|0002-01-03 23:56:02|
|0002-01-03 23:56:02|
|0002-01-03 23:56:02|
|0002-01-03 23:56:02|
|0002-01-03 23:56:02|
|2015-10-13 09:56:09|
|0002-01-03 23:56:02|
|2013-11-05 21:28:09|
|1993-12-24 03:52:47|
|0002-01-03 23:56:02|
+-------------------+
only showing top 10 rows

Row(c=datetime.datetime(2, 1, 2, 0, 0))
As a user commented below, the dates in 'show' are off by 1 day, 23 hours, 56 minutes and 2 seconds. Why, and how do I get rid of it? And why is my 'first' call correct, yet also missing a 0 from what should be (2, 1, 2, 0, 0, 0)?
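(One hedged thing to check regarding the offset, assuming part of it comes from local-time conversions when timestamps cross the JVM/Python boundary rather than from the data itself: pinning the session timezone. Note this does not change the time.mktime/strptime behaviour inside the UDFs, which still use the worker's local timezone.)

# Hedged sketch: make Spark's own timestamp <-> string/datetime conversions
# use UTC instead of the cluster's local timezone.
spark.conf.set("spark.sql.session.timeZone", "UTC")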

"To get the minimum time, I need to convert to DoubleType first, because the min function on PySpark DataFrames apparently doesn't work on timestamps."

It does:

df = spark.createDataFrame(
    ["0001-01-02 00:00:00", "0001-01-03 00:00:00"], "string"
).selectExpr("to_timestamp(value) AS x")

min_max_df = df.select(sfunc.min("x"), sfunc.max("x"))
min_max_df.show()
# +-------------------+-------------------+
# |             min(x)|             max(x)|
# +-------------------+-------------------+
# |0001-01-02 00:00:00|0001-01-03 00:00:00|
# +-------------------+-------------------+
The part that actually fails is the conversion to a local (Python) value:

>>> min_max_df.first()
Traceback (most recent call last):
...
    return datetime.datetime.fromtimestamp(ts // 1000000).replace(microsecond=ts % 1000000)
ValueError: year 0 is out of range
The epoch timestamp of the minimum value is:

>>> df.select(sfunc.col("x").cast("long")).first().x
-62135683200
which, converted back to a date, appears to be shifted back by two days (Scala code):

scala> java.time.Instant.ofEpochSecond(-62135683200L)
res0: java.time.Instant = 0000-12-31T00:00:00Z
and is therefore no longer valid in Python.
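(A minimal illustration, assuming CPython's standard-library datetime: the shifted instant lands in year 0, which Python's datetime simply cannot represent, since datetime.MINYEAR is 1.)

import datetime

# Python's datetime range starts at year 1, so an instant in year 0
# cannot be materialized on the driver at all.
print(datetime.MINYEAR)          # 1
# datetime.datetime(0, 12, 31)   # raises ValueError: year 0 is out of range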

Assuming that 0001 is just a placeholder, you can ignore it while parsing:

df.select(sfunc.to_timestamp(
    sfunc.col("x").cast("string"),
    "0001-MM-dd HH:mm:ss"
).alias("x")).select(
    sfunc.min("x"),
    sfunc.max("x")
).first()
# Row(min(x)=datetime.datetime(1970, 1, 2, 1, 0), max(x)=datetime.datetime(1970, 1, 3, 1, 0))
You can also cast the result to a string directly:

df.select(sfunc.min("x").cast("string"), sfunc.max("x").cast("string")).first()
# Row(CAST(min(x) AS STRING)='0001-01-02 00:00:00', CAST(max(x) AS STRING)='0001-01-03 00:00:00')

Ah, yes. The datetime problem shows up later in my code, where I try to use approxQuantile and get this error: Py4JJavaError: An error occurred while calling o3334.approxQuantile: java.lang.IllegalArgumentException: requirement failed: Quantile calculation for column x with data type TimestampType is not supported. So I still need to convert 0001 into something else, ideally 0002, since I'm not familiar with the data and don't want to change it too much. Your second-to-last code block looks promising, but it doesn't convert to 0002, and it doesn't handle years != 1 either!
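(A hedged sketch of doing the 0001 -> 0002 remap entirely with Spark built-ins, reusing datadf and columnname from the question, so no Python datetime round-trip is involved; whether very old dates still drift by a few minutes depends on the Spark version's calendar handling and the session timezone, so the output should be spot-checked. The name bumpeddf is made up for the sketch.)

# Replace only the placeholder year 0001 and let Spark parse the string back
# into a timestamp; rows with any other year pass through unchanged.
bumpeddf = datadf.withColumn(
    "bumped",
    sfunc.to_timestamp(
        sfunc.regexp_replace(sfunc.col(columnname).cast("string"), "^0001-", "0002-")
    )
)
bumpeddf.select(columnname, "bumped").show(5, truncate=False)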