PySpark: get the value of a dataframe column from the row with the max date

I need to create a new column in a PySpark dataframe using a column value taken from the row with the max date over a window. Given the dataframe below, I need to set a new column called max_adj_factor on every record for each asset, using the adjustment factor from the most recent date:

+----------------+-------+----------+-----+
|adjustmentFactor|assetId|      date|  nav|
+----------------+-------+----------+-----+
|9.96288362069999|4000123|2019-12-20| 18.5|
|9.96288362069999|4000123|2019-12-23|18.67|
|9.96288362069999|4000123|2019-12-24| 18.6|
|9.96288362069999|4000123|2019-12-26|18.57|
|10.0449181987999|4000123|2019-12-27|18.46|
|10.0449181987999|4000123|2019-12-30|18.41|
|10.0449181987999|4000123|2019-12-31|18.34|
|10.0449181987999|4000123|2020-01-02|18.77|
|10.0449181987999|4000123|2020-01-03|19.07|
|10.0449181987999|4000123|2020-01-06|19.16|
|10.0449181987999|4000123|2020-01-07| 19.2|
+----------------+-------+----------+-----+
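
For reference, a minimal sketch of how a few rows of the sample frame above could be built (the column types are assumed; in the real data date may be a DateType rather than a string):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical reconstruction of part of the sample data; column types are assumed
df = spark.createDataFrame(
    [(9.96288362069999, 4000123, "2019-12-20", 18.5),
     (9.96288362069999, 4000123, "2019-12-26", 18.57),
     (10.0449181987999, 4000123, "2019-12-27", 18.46),
     (10.0449181987999, 4000123, "2020-01-07", 19.2)],
    ["adjustmentFactor", "assetId", "date", "nav"],
)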
You can do it as follows:

from pyspark.sql import functions as F
from pyspark.sql import Window

# max_by(adjustmentFactor, date) returns the adjustmentFactor of the row with the latest date
df.withColumn("max_adj_factor",
              F.expr("max_by(adjustmentFactor, date)").over(Window.partitionBy("assetId"))) \
  .show()
Output:

+----------------+-------+----------+-----+----------------+
|adjustmentFactor|assetId|      date|  nav|  max_adj_factor|
+----------------+-------+----------+-----+----------------+
|9.96288362069999|4000123|2019-12-20| 18.5|10.0449181987999|
|9.96288362069999|4000123|2019-12-23|18.67|10.0449181987999|
|9.96288362069999|4000123|2019-12-24| 18.6|10.0449181987999|
|9.96288362069999|4000123|2019-12-26|18.57|10.0449181987999|
|10.0449181987999|4000123|2019-12-27|18.46|10.0449181987999|
|10.0449181987999|4000123|2019-12-30|18.41|10.0449181987999|
|10.0449181987999|4000123|2019-12-31|18.34|10.0449181987999|
|10.0449181987999|4000123|2020-01-02|18.77|10.0449181987999|
|10.0449181987999|4000123|2020-01-03|19.07|10.0449181987999|
|10.0449181987999|4000123|2020-01-06|19.16|10.0449181987999|
|10.0449181987999|4000123|2020-01-07| 19.2|10.0449181987999|
+----------------+-------+----------+-----+----------------+
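
max_by is available as a SQL function from Spark 3.0 onwards. If you are on an older version, a window ordered by date descending together with first should give the same result; a minimal sketch under that assumption:

from pyspark.sql import functions as F
from pyspark.sql import Window

# Alternative without max_by: order each asset's window by date descending and
# take the first adjustmentFactor, i.e. the one belonging to the latest date
w_desc = Window.partitionBy("assetId").orderBy(F.col("date").desc())

df.withColumn("max_adj_factor", F.first("adjustmentFactor").over(w_desc)).show()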

What is your expected output?