Python: calculating the days since the last deal date, a Pandas-to-PySpark time-series implementation using window functions

The Pandas code:

# Calculate days since last deal for customer / master customer
df['booked_date_day'] = pd.to_datetime(df['booked_date_day'])

# Diff over de-duplicated (customer, date) pairs; the resulting Series aligns
# back to df by index, leaving duplicate-date rows as NaT
df['customer_whole_days_from_last_deal'] = df[['customer_nbr', 'booked_date_day']].sort_values(['customer_nbr', 'booked_date_day']).drop_duplicates().groupby('customer_nbr')['booked_date_day'].diff()
# Forward-fill those NaT rows within each customer
df['customer_whole_days_from_last_deal'] = df.sort_values(['customer_nbr', 'booked_date_day']).groupby('customer_nbr')['customer_whole_days_from_last_deal'].fillna(method='ffill')
# Convert the timedeltas to whole days
df['customer_whole_days_from_last_deal'] = df['customer_whole_days_from_last_deal'].dt.days

# Same pipeline keyed on the master customer
df['master_customer_whole_days_from_last_deal'] = df[['master_cust', 'booked_date_day']].sort_values(['master_cust', 'booked_date_day']).drop_duplicates().groupby('master_cust')['booked_date_day'].diff()
df['master_customer_whole_days_from_last_deal'] = df.sort_values(['master_cust', 'booked_date_day']).groupby('master_cust')['master_customer_whole_days_from_last_deal'].fillna(method='ffill')
df['master_customer_whole_days_from_last_deal'] = df['master_customer_whole_days_from_last_deal'].dt.days
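For reference, here is a minimal sketch of what this pipeline does on a tiny made-up frame (the data is hypothetical and only meant to show the index-alignment and forward-fill behaviour):

import pandas as pd

# Hypothetical data: customer 'A' deals on Jan 1 and twice on Jan 5
toy = pd.DataFrame({'customer_nbr': ['A', 'A', 'A'],
                    'booked_date_day': pd.to_datetime(['2020-01-01', '2020-01-05', '2020-01-05'])})
# Diff over distinct (customer, date) pairs; the result aligns back by index
toy['days'] = (toy[['customer_nbr', 'booked_date_day']]
               .sort_values(['customer_nbr', 'booked_date_day'])
               .drop_duplicates()
               .groupby('customer_nbr')['booked_date_day'].diff())
# The duplicate Jan 5 row was dropped above, so it is NaT here; ffill restores it
toy['days'] = (toy.sort_values(['customer_nbr', 'booked_date_day'])
               .groupby('customer_nbr')['days'].fillna(method='ffill'))
print(toy['days'].dt.days.tolist())   # [nan, 4.0, 4.0]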
The code I developed in PySpark (the DataFrame above is df, the one below is orders):

import sys

from pyspark.sql import functions as F
from pyspark.sql.functions import last
from pyspark.sql.window import Window

# Days since the previous row's deal date, per customer
window1 = Window.partitionBy(orders.customer_nbr).orderBy(orders.booked_date_day)
orders = orders.withColumn("customer_whole_days_from_last_deal", F.datediff(orders.booked_date_day, F.lag(orders.booked_date_day, 1).over(window1)))

# Days since the previous row's deal date, per master customer
window2 = Window.partitionBy(orders.master_cust).orderBy(orders.booked_date_day)
orders = orders.withColumn("master_whole_days_from_last_deal", F.datediff(orders.booked_date_day, F.lag(orders.booked_date_day, 1).over(window2)))

# Forward-fill nulls with the last non-null value in the partition
# (-sys.maxsize is an old idiom for Window.unboundedPreceding)
window_ff1 = Window.partitionBy(orders.customer_nbr).orderBy(orders.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column1 = last(orders['customer_whole_days_from_last_deal'], ignorenulls=True).over(window_ff1)
orders = orders.withColumn('customer_whole_days_from_last_deal', filled_column1)

window_ff2 = Window.partitionBy(orders.master_cust).orderBy(orders.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column2 = last(orders['master_whole_days_from_last_deal'], ignorenulls=True).over(window_ff2)
orders = orders.withColumn('master_whole_days_from_last_deal', filled_column2)
I am calculating the days since the last deal per customer / master customer. Please let me know what I am doing wrong in PySpark, because I get one row in Pandas but more rows in PySpark.
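To make the mismatch concrete, here is a minimal sketch with made-up data (the SparkSession and values are assumptions, not from the original post) showing how lag treats duplicate dates:

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical data: customer 'A' has two deals booked on the same day
toy = spark.createDataFrame(
    [('A', '2020-01-01'), ('A', '2020-01-05'), ('A', '2020-01-05')],
    ['customer_nbr', 'booked_date_day'],
).withColumn('booked_date_day', F.to_date('booked_date_day'))

w = Window.partitionBy('customer_nbr').orderBy('booked_date_day')
toy.withColumn('diff', F.datediff('booked_date_day', F.lag('booked_date_day', 1).over(w))).show()
# The second 2020-01-05 row gets diff = 0, whereas the Pandas pipeline
# (diff over de-duplicated pairs, then ffill) gives 4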

The root cause was the duplicates: the Pandas version diffs over de-duplicated (key, date) pairs, while the lag above runs over every row, so repeated dates yield zero-day gaps. I therefore computed the gaps on distinct pairs in the DataFrames "df1" and "df2" and then, using broadcast joins, joined them back onto "orders", which solved my problem:

# Distinct (customer, date) pairs, mirroring the Pandas drop_duplicates step
df1 = orders.select('customer_nbr', 'booked_date_day').sort("customer_nbr", "booked_date_day").dropDuplicates()
window1 = Window.partitionBy(df1.customer_nbr).orderBy(df1.booked_date_day)
df1 = df1.withColumn("customer_whole_days_from_last_deal", F.datediff(df1.booked_date_day, F.lag(df1.booked_date_day, 1).over(window1)))
# Forward-fill within each customer, as before
window_ff1 = Window.partitionBy(df1.customer_nbr).orderBy(df1.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column1 = last(df1['customer_whole_days_from_last_deal'], ignorenulls=True).over(window_ff1)
df1 = df1.withColumn('customer_whole_days_from_last_deal', filled_column1)


# Same, keyed on the master customer
df2 = orders.select('master_cust', 'booked_date_day').sort("master_cust", "booked_date_day").dropDuplicates()
window2 = Window.partitionBy(df2.master_cust).orderBy(df2.booked_date_day)
df2 = df2.withColumn("master_whole_days_from_last_deal", F.datediff(df2.booked_date_day, F.lag(df2.booked_date_day, 1).over(window2)))
window_ff2 = Window.partitionBy(df2.master_cust).orderBy(df2.booked_date_day).rowsBetween(-sys.maxsize, 0)
filled_column2 = last(df2['master_whole_days_from_last_deal'], ignorenulls=True).over(window_ff2)
df2 = df2.withColumn('master_whole_days_from_last_deal', filled_column2)
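The broadcast join itself is not shown above; a minimal sketch, assuming a fresh orders frame without the earlier window-derived columns and the same (key, date) pairs used to build df1 and df2 as join keys:

# Join the de-duplicated gap columns back onto every orders row
orders = (orders
          .join(F.broadcast(df1), on=['customer_nbr', 'booked_date_day'], how='left')
          .join(F.broadcast(df2), on=['master_cust', 'booked_date_day'], how='left'))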