How to get the max date from a row in PySpark


What I want is a new column containing the latest date from colA and colB. I am running the code below, and when I call maxDF.show() I get an error:

from pyspark.sql.window import Window
from pyspark.sql import functions as F
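# Note: 'func' is never defined in this snippet (functions was imported as F),
# and F.max is an aggregate function, not a per-row max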
maxcol = func.udf(lambda row: F.max(row))
temp = [(("ID1", '2019-01-01', '2019-02-01')), (("ID2", '2018-01-01', '2019-05-01')), (("ID3", '2019-06-01', '2019-04-01'))]
t1 = spark.createDataFrame(temp, ["ID", "colA", "colB"])
maxDF = t1.withColumn("maxval", maxcol(F.struct([t1[x] for x in t1.columns[1:]])))

The following uses F.greatest and produces the desired output:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import column

spark = SparkSession.builder.appName("Python Spark").getOrCreate()

temp = [("ID1", '2019-01-01', '2019-02-01'), ("ID2", '2018-01-01', '2019-05-01'),
        ("ID3", '2019-06-01', '2019-04-01')]

t1 = spark.createDataFrame(temp, ["ID", "colA", "colB"])

maxDF = t1.withColumn("maxval", F.greatest(t1["colA"], t1["colB"]))
maxDF.show()

Output:

+---+----------+----------+----------+
| ID|      colA|      colB|    maxval|
+---+----------+----------+----------+
|ID1|2019-01-01|2019-02-01|2019-02-01|
|ID2|2018-01-01|2019-05-01|2019-05-01|
|ID3|2019-06-01|2019-04-01|2019-06-01|
+---+----------+----------+----------+
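
A possible variant, not taken from the answers here: colA and colB are plain strings, so F.greatest above compares them lexicographically, which happens to work for ISO yyyy-MM-dd values; casting with to_date makes it a real date comparison and extends to any number of columns. This sketch reuses t1 and F (pyspark.sql.functions) from the code above:

# hypothetical variant, assumes t1 and F from the answer above
# cast each candidate column to DateType, then take the row-wise greatest
maxDF = t1.withColumn(
    "maxval",
    F.greatest(*[F.to_date(c) for c in t1.columns[1:]]),
)
maxDF.show()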

You don't need a udf for this. Use pyspark.sql.functions.greatest. In your case, you are probably looking for

maxDF = t1.withColumn("maxval", F.greatest(*t1.columns[1:]))

Your code doesn't work because you are using pyspark.sql.functions.max where you should be using the built-in __builtin__.max.
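
For reference, a minimal sketch of the fix that comment describes, assuming the t1 defined above: inside the udf, use Python's built-in max over the struct's fields instead of the aggregate pyspark.sql.functions.max:

from pyspark.sql import functions as F

# sketch of the comment's suggestion; assumes t1 from the code above
maxcol = F.udf(lambda row: max(row))  # built-in max over the row's string fields
maxDF = t1.withColumn("maxval", maxcol(F.struct([t1[x] for x in t1.columns[1:]])))
maxDF.show()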
You can also try something like this, converting to date objects first and then comparing:

from pyspark.sql.functions import *

temp = [(("ID1", '2019-01-01', '2019-02-01')), (("ID2", '2018-01-01', '2019-05-01')), (("ID3", '2019-06-01', '2019-04-01'))]
t1 = spark.createDataFrame(temp, ["ID", "colA", "colB"])
t2 = t1.select("ID", to_date(t1.colA).alias('colADate'), to_date(t1.colB).alias('colBDate'))
t3 = t2.withColumn('maxDateFromRow', when(t2.colADate > t2.colBDate, t2.colADate).otherwise(t2.colBDate))

t3.show()
+---+----------+----------+--------------+
| ID|  colADate|  colBDate|maxDateFromRow|
+---+----------+----------+--------------+
|ID1|2019-01-01|2019-02-01|    2019-02-01|
|ID2|2018-01-01|2019-05-01|    2019-05-01|
|ID3|2019-06-01|2019-04-01|    2019-06-01|
+---+----------+----------+--------------+