PySpark: creating new columns to arrange the values that correspond to duplicate values in another column into a single row


I have a DataFrame similar to this example:
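The sample data itself is not preserved in this post, but a small DataFrame consistent with the aggregate outputs shown below would look roughly like this (a sketch; the rows are reconstructed from the min/max and crosstab results, not taken from the original TEST.csv):

# hypothetical sample rows, reconstructed from the outputs shown further down
sample = [("Ahmad", "April", 40.0),
          ("Ahmad", "April", 49.0),
          ("Ahmad", "May", 38.0),
          ("Emma", "May", 45.0),
          ("Emma", "May", 50.0)]
degree_df = spark.createDataFrame(sample, ["name", "month", "degree"])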

I would like to obtain a new DataFrame like the one shown at the end of Update 3 below.

Update 2:

import pyspark.sql.types as typ
import pyspark.sql.functions as fn
from pyspark.sql.functions import *   # provides the min/max used below

labels = [('name', typ.StringType()), ('month', typ.StringType()), ('degree', typ.FloatType())]

schema = typ.StructType([typ.StructField(e[0], e[1], True) for e in labels])

# assumes a SparkSession is already available as `spark` (as in the pyspark shell)
degree_df = spark.read.csv("file:///home/Ahmad/ahmad_tst/TEST.csv", header=False, schema=schema)

# contingency table: number of rows for each (name, month) pair
table_count_c = degree_df.stat.crosstab("name", "month").withColumnRenamed('name_month', 'name')

# min and max degree per (name, month) group
table_count_d = degree_df.groupBy("name", "month").agg(min("degree"), max("degree"))

table_count_d.show()
+-----+-----+-----------+-----------+
| name|month|min(degree)|max(degree)|
+-----+-----+-----------+-----------+
|Ahmad|  May|       38.0|       38.0|
|Ahmad|April|       40.0|       49.0|
| Emma|  May|       45.0|       50.0|
+-----+-----+-----------+-----------+
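As an aside, giving the two aggregates explicit aliases would avoid the awkward min(degree)/max(degree) column names later; a sketch (not from the original post; the rest of the code keeps the default names):

table_count_d = degree_df.groupBy("name", "month").agg(
    min("degree").alias("min_degree"),
    max("degree").alias("max_degree"))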



table_count_c.show()
+-----+-----+---+
| name|April|May|
+-----+-----+---+
|Ahmad|    2|  1|
| Emma|    0|  2|
+-----+-----+---+

table_4c = table_count_c.join(table_count_d, "name", 'left_outer')

table_4c.show()
+-----+-----+---+-----+-----------+-----------+
| name|April|May|month|min(degree)|max(degree)|
+-----+-----+---+-----+-----------+-----------+
|Ahmad|    2|  1|April|       40.0|       49.0|
|Ahmad|    2|  1|  May|       38.0|       38.0|
| Emma|    0|  2|  May|       45.0|       50.0|
+-----+-----+---+-----+-----------+-----------+
Update 3:

Following the suggestion below, "you can get something similar to what you are after by performing a left outer join of table_count_d with itself", I produced a DataFrame along those lines.

What I would like to obtain is a DataFrame with a single row per name, like the following:

+-----+-----+---+-----+-----------+-----------+-----+-----------+-----------+
| name|April|May|month|min(degree)|max(degree)|month|min(degree)|max(degree)|
+-----+-----+---+-----+-----------+-----------+-----+-----------+-----------+
|Ahmad|    2|  1|  May|       38.0|       38.0|April|       40.0|       49.0|
| Emma|    0|  2|  May|       45.0|       50.0|April|       00.0|       00.0|
+-----+-----+---+-----+-----------+-----------+-----+-----------+-----------+

Is there a way to do this with PySpark 2.0.1?

Here are two options; the first is slightly more elegant (especially if you have more than two months), but does not produce exactly what you asked for; the second does produce it, but is more verbose. (It would help if you described explicitly the logic of what you are trying to achieve.)

1. Using a left outer join

The idea is to self-join the DataFrame, with a condition on a unique id column that prevents the same pair from appearing twice:

import pyspark.sql.functions as func
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext.getOrCreate()
sql_sc = SQLContext(sc)

df1 = sql_sc.createDataFrame([("Ahmad", "May", '38.0', '38.0'), ("Ahmad", "April", '40.0', '49.0'), ("Emma", "May", '45.0', '50.0')],
                             ("name", "month", "min(degree)", "max(degree)"))

# add a unique id column (the ids are increasing and unique, but not
# consecutive, and depend on how the data is partitioned)
df1 = df1.withColumn('id', func.monotonically_increasing_id())

# self-join: rename the right-hand side's columns to keep column names unique
df2 = df1
for c in df2.columns:
    df2 = df2.withColumnRenamed(c, c + '_2')

# the id inequality keeps each pair of rows from appearing in both orders
dfx = df1.join(df2, (df1['name'] == df2['name_2']) & (df1['month'] != df2['month_2']) & (df1['id'] < df2['id_2']), 'left_outer')
dfx.show()
2. Splitting the data by month

# one DataFrame per month, then join them side by side on name
df_4 = df1.where(func.col('month') == 'April')
df_5 = df1.where(func.col('month') == 'May')

df_5.join(df_4, df_5['name'] == df_4['name'], 'outer').show()
yielding:

+-----+-----+-----------+-----------+-----------+-----+-----+-----------+-----------+-----------+
| name|month|min(degree)|max(degree)|         id| name|month|min(degree)|max(degree)|         id|
+-----+-----+-----------+-----------+-----------+-----+-----+-----------+-----------+-----------+
|Ahmad|  May|       38.0|       38.0|17179869184|Ahmad|April|       40.0|       49.0|42949672960|
| Emma|  May|       45.0|       50.0|60129542144| null| null|       null|       null|       null|
+-----+-----+-----------+-----------+-----------+-----+-----+-----------+-----------+-----------+
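If there are more than two months, the same idea can be generalized by filtering once per distinct month and folding the joins together; a rough sketch (not from the original answer), reusing the df1 defined above:

from functools import reduce

# distinct months present in the data
months = [r['month'] for r in df1.select('month').distinct().collect()]

# one DataFrame per month, with non-key columns suffixed to keep names unique
per_month = []
for m in months:
    d = df1.where(func.col('month') == m)
    for c in d.columns:
        if c != 'name':
            d = d.withColumnRenamed(c, c + '_' + m)
    per_month.append(d)

# fold the per-month DataFrames into a single row per name
result = reduce(lambda a, b: a.join(b, 'name', 'outer'), per_month)
result.show()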

Afterwards, you can rename the columns as you like.
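For the two-month join of option 2, one way to do that (a sketch, not from the original answer; the _apr/_may suffixes are my own choice) is to alias every column before joining:

# suffix each side's columns so the joined result has unique names
df_4r = df_4.select([func.col(c).alias(c + '_apr') for c in df_4.columns])
df_5r = df_5.select([func.col(c).alias(c + '_may') for c in df_5.columns])

df_5r.join(df_4r, df_5r['name_may'] == df_4r['name_apr'], 'outer').show()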

Comments on the question:

- Hi, and welcome to the site! What have you tried so far? Have you looked at the documentation? Where exactly are you stuck?
- Any thoughts after this update?
- Thanks for the update. However, to maximize your chances of getting help from the community, I suggest following the how-to-ask guidance, and also consider editing the title to reflect what you are really after (I would help with it, but I am not sure what the goal is: are you trying to produce a separate set of columns for each month?)
- You can get something similar to what you are after by performing a left outer join of table_count_d with itself.
- Thanks for the idea, but it does not give the desired result. Thanks again for your effort! You may want to post and accept your own solution so that others can use your approach.

Following that last suggestion, this is the solution that produced the desired result:
import pyspark.sql.types as typ
import pyspark.sql.functions as fn
from pyspark.sql.functions import *

labels = [('name', typ.StringType()), ('month', typ.StringType()), ('degree', typ.FloatType())]

schema = typ.StructType([typ.StructField(e[0], e[1], True) for e in labels])

degree_df = spark.read.csv("file:///home/Ahmad/ahmad_tst/TEST.csv", header=False, schema=schema)

# min and max degree per (name, month) group
table_count_d = degree_df.groupBy("name", "month").agg(min("degree"), max("degree"))

# per-month row counts, one column per month
table_count_c = degree_df.stat.crosstab("name", "month").withColumnRenamed('name_month', 'name')

table1 = table_count_c.join(table_count_d, "name", 'left_outer')

# pivot on month; the crosstab count columns are listed in the groupby so that
# they survive the pivot, and first() picks the single min/max value per group
df1 = table1.groupby('name', 'April', 'May').pivot('month').agg(fn.first('min(degree)'), fn.first('max(degree)'))
df1.show()
+-----+-----+---+---------------------------------+---------------------------------+-------------------------------+-------------------------------+
| name|April|May|April_first(`min(degree)`, false)|April_first(`max(degree)`, false)|May_first(`min(degree)`, false)|May_first(`max(degree)`, false)|
+-----+-----+---+---------------------------------+---------------------------------+-------------------------------+-------------------------------+
|Ahmad|    2|  1|                             40.0|                             49.0|                           38.0|                           38.0|
| Emma|    0|  2|                             null|                             null|                           45.0|                           50.0|
+-----+-----+---+---------------------------------+---------------------------------+-------------------------------+-------------------------------+
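To clean up the auto-generated pivot column names, and to turn the nulls for missing name/month combinations into 0.0 as in the desired output above, something like the following should work (a sketch; the generated names are copied from the header above, and the April_min-style targets are my own choice):

# rename the generated pivot columns, then zero-fill missing combinations
renamed = df1
for old, new in [('April_first(`min(degree)`, false)', 'April_min'),
                 ('April_first(`max(degree)`, false)', 'April_max'),
                 ('May_first(`min(degree)`, false)', 'May_min'),
                 ('May_first(`max(degree)`, false)', 'May_max')]:
    renamed = renamed.withColumnRenamed(old, new)

renamed.na.fill(0.0).show()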