Apache Spark: creating a second session column in PySpark


What is the most efficient way to create a column showing the second session, given the following DataFrame:

from pyspark import SparkContext
from pyspark.sql import HiveContext, Window
from pyspark.sql import functions as F

sc = SparkContext("local")
sqlContext = HiveContext(sc)

df = sqlContext.createDataFrame([
    ("u1", "g1", 0),
    ("u2", "g2", 1),
    ("u1", "g2", 2),
    ("u1", "g3", 3),
], ["UserID", "GameID", "Time"])

df.show()

+------+------+----+
|UserID|GameID|Time|
+------+------+----+
|    u1|    g1|   0|
|    u2|    g2|   1|
|    u1|    g2|   2|
|    u1|    g3|   3|
+------+------+----+
Desired output

I would also like to keep the Time of the first game as a column:

+------+-------+-----+-----+
|UserID|MinTime|Game1|Game2|
+------+-------+-----+-----+
|    u1|      0|   g1|   g2|
|    u1|      2|   g2|   g3|
+------+-------+-----+-----+
I thought about using a window partitioned by UserID and then rowsBetween(0, 1), but ran into some problems.

I'm using Spark 1.6, but open to a 2.0 solution.
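
For concreteness, here is a minimal sketch of what that rowsBetween idea could look like, assuming collect_list is usable as a window function (on 1.6 this resolves to the Hive UDAF, hence the HiveContext above); the names w2 and pairs are illustrative:

w2 = Window.partitionBy("UserID").orderBy("Time").rowsBetween(0, 1)

pairs = (df
         .select("UserID",
                 "Time",
                 # the current game plus the next one, in Time order
                 F.collect_list("GameID").over(w2).alias("Games"))
         # each user's last game has no successor, so its list has size 1
         .where(F.size("Games") == 2)
         .select("UserID",
                 "Time",
                 F.col("Games")[0].alias("Game1"),
                 F.col("Games")[1].alias("Game2")))

pairs.show()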

You can use lead over a window partitioned by UserID and ordered by Time, then drop the rows that have no following game:
w = Window.partitionBy("UserID").orderBy(F.col("Time"))

(df
 .select("UserID",
         "Time",
         F.col("GameID").alias("Game1"),
         # the next game for the same user, in Time order
         F.lead("GameID").over(w).alias("Game2"))
 # the last game per user has no successor; drop those rows
 .na.drop(subset=["Game2"])
).show()

+------+----+-----+-----+
|UserID|Time|Game1|Game2|
+------+----+-----+-----+
|    u1|   0|   g1|   g2|
|    u1|   2|   g2|   g3|
+------+----+-----+-----+
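
Since the question is open to a 2.0 solution: the window logic is unchanged on Spark 2.0, only the entry point differs (SparkSession replaces SparkContext/HiveContext). A minimal, self-contained sketch:

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local").getOrCreate()

df = spark.createDataFrame([
    ("u1", "g1", 0),
    ("u2", "g2", 1),
    ("u1", "g2", 2),
    ("u1", "g3", 3),
], ["UserID", "GameID", "Time"])

w = Window.partitionBy("UserID").orderBy("Time")

(df
 .select("UserID",
         "Time",
         F.col("GameID").alias("Game1"),
         # lead defaults to an offset of 1: the value on the next row
         # within the partition, i.e. the second session
         F.lead("GameID").over(w).alias("Game2"))
 .na.drop(subset=["Game2"])
 .show())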