
Dataframe: Get distinct rows by creation date


I am working with a dataframe like this:

DeviceNumber        | CreationDate       | Name
1001                | 1.1.2018           | Testdevice
1001                | 30.06.2019         | Device
1002                | 1.1.2019           | Lamp
I'm using Databricks and PySpark for an ETL process. How can I reduce the dataframe so that there is only one row per "DeviceNumber", namely the row with the highest "CreationDate"? In this example the result should look like this:

DeviceNumber        | CreationDate       | Name
1001                | 30.06.2019         | Device
1002                | 1.1.2019           | Lamp
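
For reference, a minimal sketch that reproduces this sample dataframe; it assumes a running SparkSession named spark (as Databricks provides) and keeps CreationDate as a plain string, exactly as shown in the tables above:

# build the sample data from the question (CreationDate kept as a string here)
data = [
    (1001, '1.1.2018', 'Testdevice'),
    (1001, '30.06.2019', 'Device'),
    (1002, '1.1.2019', 'Lamp'),
]
df = spark.createDataFrame(data, ['DeviceNumber', 'CreationDate', 'Name'])
df.show()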

You can use PySpark window functions:

from pyspark.sql.window import Window
from pyspark.sql import functions as f

# make sure that CreationDate is parsed as a timestamp, not a string
df = df.withColumn('CreationDate', f.to_timestamp('CreationDate', format='dd.MM.yyyy'))

# partition on device and get a row number by (descending) date
win = Window.partitionBy('DeviceNumber').orderBy(f.col('CreationDate').desc())
df = df.withColumn('rownum', f.row_number().over(win))

# finally take the first row in each group
df.filter(df['rownum']==1).select('DeviceNumber', 'CreationDate', 'Name').show()

+------------+------------+------+
|DeviceNumber|CreationDate|  Name|
+------------+------------+------+
|        1002|  2019-01-01|  Lamp|
|        1001|  2019-06-30|Device|
+------------+------------+------+
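
A side note on ties, assuming the same win window defined above: row_number() keeps exactly one row per device even when two rows share the same maximum CreationDate; if all tied rows should be kept instead, rank() can be swapped in. A minimal sketch:

# rank() assigns 1 to every row tied at the latest CreationDate, so ties survive the filter
df_ties = df.withColumn('rnk', f.rank().over(win))
df_ties.filter(df_ties['rnk'] == 1).select('DeviceNumber', 'CreationDate', 'Name').show()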

You can create an additional dataframe holding each DeviceNumber and its latest (max) CreationDate:

import pyspark.sql.functions as psf

max_df = df\
    .groupBy('DeviceNumber')\
    .agg(psf.max('CreationDate').alias('max_CreationDate'))
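
One caveat: if CreationDate is still a string, psf.max picks the lexicographically largest value rather than the latest date. Parsing the column before the groupBy above (as the window-function answer does) avoids that; a minimal sketch of that conversion:

# run before the groupBy above so that psf.max compares real timestamps, not strings
df = df.withColumn('CreationDate', psf.to_timestamp('CreationDate', format='dd.MM.yyyy'))
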
Then join max_df with the original dataframe:

joining_condition = [ df.DeviceNumber == max_df.DeviceNumber, df.CreationDate == max_df.max_CreationDate ]

df.join(max_df, joining_condition, 'left_semi').show()
A left_semi join is useful when you only need the second dataframe as a lookup and don't need any of its columns in the result.
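
For comparison, a rough equivalent written as an ordinary inner join, reusing the df, max_df and joining_condition defined above: the same rows come back, but the columns have to be projected explicitly because an inner join carries the columns of both dataframes.

# inner join keeps columns from both sides, so project the original columns explicitly
result = df.join(max_df, joining_condition, 'inner') \
           .select(df['DeviceNumber'], df['CreationDate'], df['Name'])
result.show()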