Apache Spark: select the nth row after orderBy in a PySpark dataframe


I want to select the second row for each group of names. I use orderBy to sort by name and then by the purchase date/timestamp, and I need to pick the second purchase (by datetime) for each name.

Here is the data used to build the dataframe:

from datetime import datetime

data = [
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2018, 2, 24, 3, 22, 55)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 7, 21, 2, 14, 22)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2018, 5, 17, 8, 10, 19)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2019, 5, 19, 6, 11, 33)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2019, 1, 1, 4, 22, 55)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2020, 5, 21, 7, 11, 50)),
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2020, 3, 24, 3, 22, 45)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 9, 19, 1, 14, 11)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2018, 8, 19, 7, 11, 37)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2018, 2, 19, 6, 11, 42)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2019, 1, 1, 4, 22, 17)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2020, 6, 21, 7, 11, 11)),
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2020, 4, 24, 3, 22, 54)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 8, 30, 3, 12, 41)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2017, 5, 17, 8, 10, 38)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2020, 3, 19, 6, 11, 12)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2018, 2, 1, 4, 22, 24)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2018, 9, 21, 7, 11, 41)),
]
 
df = sqlContext.createDataFrame(data, ['name', 'trial_start', 'purchase'])
df.show(truncate=False)
I order the data by name and then by purchase:

df.orderBy("name","purchase").show()
to produce the result:

+-------+-------------------+-------------------+
|   name|        trial_start|           purchase|
+-------+-------------------+-------------------+
| Andrew|2019-12-12 22:21:30|2019-07-21 06:14:22|
| Andrew|2019-12-12 22:21:30|2019-08-30 07:12:41|
| Andrew|2019-12-12 22:21:30|2019-09-19 05:14:11|
| George|2020-03-24 07:19:58|2018-02-24 08:22:55|
| George|2020-03-24 07:19:58|2020-03-24 07:22:45|
| George|2020-03-24 07:19:58|2020-04-24 07:22:54|
| Maggie|2019-02-08 08:31:23|2018-02-19 11:11:42|
| Maggie|2019-02-08 08:31:23|2019-05-19 10:11:33|
| Maggie|2019-02-08 08:31:23|2020-03-19 10:11:12|
|Micheal|2018-11-22 18:29:40|2017-05-17 12:10:38|
|Micheal|2018-11-22 18:29:40|2018-05-17 12:10:19|
|Micheal|2018-11-22 18:29:40|2018-08-19 11:11:37|
|   Ravi|2019-01-01 09:19:47|2018-02-01 09:22:24|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:17|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:55|
|   Xien|2020-03-02 09:33:51|2018-09-21 11:11:41|
|   Xien|2020-03-02 09:33:51|2020-05-21 11:11:50|
|   Xien|2020-03-02 09:33:51|2020-06-21 11:11:11|
+-------+-------------------+-------------------+
How do I get the second row for each name? In pandas this is easy; I could use nth(). I have been looking at SQL but have not found a solution. Any suggestions would be appreciated.
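(For comparison, the pandas version I have in mind is roughly the sketch below; df_pd is just an illustrative name for the same data pulled into a pandas DataFrame, not something defined above.)

import pandas as pd

# collect the Spark dataframe into pandas; nth(1) is 0-based, so it picks the second purchase per name
df_pd = df.toPandas()
df_pd.sort_values("purchase").groupby("name").nth(1)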

The result I want is:

+-------+-------------------+-------------------+
|   name|        trial_start|           purchase|
+-------+-------------------+-------------------+
| Andrew|2019-12-12 22:21:30|2019-08-30 07:12:41|
| George|2020-03-24 07:19:58|2020-03-24 07:22:45|
| Maggie|2019-02-08 08:31:23|2019-05-19 10:11:33|
|Micheal|2018-11-22 18:29:40|2018-05-17 12:10:19|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:17|
|   Xien|2020-03-02 09:33:51|2020-05-21 11:11:50|
+-------+-------------------+-------------------+

Try using the row_number() window function ordered by purchase within each name, then filter for row number 2 only.

Example:

from pyspark.sql import *
from pyspark.sql.functions import *

w=Window.partitionBy("name").orderBy(col("purchase"))

df.withColumn("rn",row_number().over(w)).filter(col("rn") ==2).drop(*["rn"]).show()
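If you want an arbitrary nth purchase instead of hard-coding 2, the same window can be wrapped in a small helper; a minimal sketch (nth_purchase is just an illustrative name, not part of the original answer):

from pyspark.sql import Window
from pyspark.sql.functions import col, row_number

def nth_purchase(df, n):
    # number the purchases within each name, keep only the nth, then drop the helper column
    w = Window.partitionBy("name").orderBy(col("purchase"))
    return df.withColumn("rn", row_number().over(w)).filter(col("rn") == n).drop("rn")

nth_purchase(df, 2).show()  # same output as the hard-coded filter above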
SQL API:

df.createOrReplaceTempView("tmp")

spark.sql("SET spark.sql.parser.quotedRegexColumnNames=true")

spark.sql("select `(rn)?+.+` from (select *, row_number() over(partition by name order by purchase) rn from tmp) e where rn = 2").show()
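The backticked `(rn)?+.+` pattern only works because quotedRegexColumnNames is enabled; it is a regex that matches every column except rn, so the helper column is excluded from the projection. If you would rather avoid the regex trick, listing the columns explicitly gives the same result; a sketch, assuming the tmp view registered above:

spark.sql("""
  select name, trial_start, purchase
  from (select *, row_number() over (partition by name order by purchase) as rn from tmp) t
  where rn = 2
""").show()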


Thanks, I got your SQL working. I also did it another way:

sqlContext.sql("select name, trial_start, purchase from (select name, trial_start, purchase, row_number() over (partition by name order by purchase asc) as rn from table) x where x.rn = 2").show()

I would also like to get your window approach working, but I am getting an error on col that I have not figured out yet. Any ideas?

from pyspark.sql import *
from pyspark.sql.functions import *
w = Window.partitionBy("name").orderBy(col("purchase"))
df.withColumn("rn", row_number().over(w)).filter(col("rn") == 2).drop(["rn"]).show()

TypeError                                 Traceback (most recent call last)
      4 w = Window.partitionBy("name").orderBy(col("purchase"))
----> 5 df.withColumn("rn", row_number().over(w)).filter(col("rn") == 2).drop(["rn"]).show()
~/spark/spark-3.0.0-bin-hadoop2.7/python/pyspark/sql/dataframe.py in drop(self, *cols)
   2141                 jdf = self._jdf.drop(col._jc)
   2142             else:
-> 2143                 raise TypeError("col should be a string or Column")
   2144         else:
   2145             for col in cols:
TypeError: col should be a string or Column
Try using .drop(*["rn"]) instead. We need to use * inside drop to unpack the list; I copied it incorrectly into the answer..! You can test all the regexes on this site, which would be a good start.
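(To make the fix concrete, a small sketch of the drop variants, assuming the same df and window as in the answer; DataFrame.drop expects column names as separate string arguments, which is why passing a plain list fails:)

from pyspark.sql import Window
from pyspark.sql.functions import col, row_number

w = Window.partitionBy("name").orderBy(col("purchase"))
ranked = df.withColumn("rn", row_number().over(w)).filter(col("rn") == 2)

ranked.drop("rn").show()       # works: a plain string
ranked.drop(*["rn"]).show()    # works: the list is unpacked into string arguments
# ranked.drop(["rn"]).show()   # TypeError: col should be a string or Column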