PySpark: sorting rows weighted by a target ratio


I want to reorder a set of scored students so that, if I admit the top N students from my list, I am guaranteed to get at least a certain fraction of a given category.

So given this DataFrame as input:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         A|female|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         E|female| 66.0|
|         F|female| 55.0|
|         G|female| 44.0|
|         H|female| 33.0|
|         I|  male| 22.0|
|         J|  male| 11.0|
+----------+------+-----+
with a target of keeping 0.2 (20%) of my population male at any time, I would reorder it like this:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         I|  male| 22.0|
|         A|female|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         J|  male| 11.0|
|         E|female| 66.0|
|         F|female| 55.0|
|         G|female| 44.0|
|         H|female| 33.0|
+----------+------+-----+
Now if I take my top 1, 2, 3, 4, 5 ... 10 students from the population, I am guaranteed to reach my target 0.2 male ratio, while the males are still considered in order from best to worst.

Even though the females have been pushed down a bit, I am still making sure they are considered in order from best to worst.
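To make that requirement concrete, here is a small plain-Python check (a hypothetical helper, not part of any Spark code) that reads the target as "every top-N prefix must contain at least floor(N * ratio) students of the target type" and verifies it against the reordering above:

def prefix_ratio_ok(rows, target_ratio, target_type='male'):
    # rows is a list of (student_id, type, score) tuples in the proposed order
    seen = 0
    for n, (_, student_type, _) in enumerate(rows, start=1):
        if student_type == target_type:
            seen += 1
        # the top-N prefix must hold at least floor(N * target_ratio) students of the target type
        if seen < int(n * target_ratio):
            return False
    return True

reordered = [('I', 'male', 22.0), ('A', 'female', 100.0), ('B', 'female', 99.0),
             ('C', 'female', 88.0), ('D', 'female', 77.0), ('J', 'male', 11.0),
             ('E', 'female', 66.0), ('F', 'female', 55.0), ('G', 'female', 44.0),
             ('H', 'female', 33.0)]
print(prefix_ratio_ok(reordered, 0.2))  # True: every prefix of the reordered list meets the 0.2 target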

Here are a couple more examples.


Input:

+----------+------+-----+          
|STUDENT_ID|  TYPE|SCORE|          
+----------+------+-----+          
|         A|female|100.0|          
|         B|female| 99.0|          
|         C|female| 88.0|          
|         D|female| 77.0|          
|         E|female| 66.0|          
|         F|female| 55.0|          
|         G|female| 44.0|          
|         H|female| 33.0|          
|         I|  male| 22.0|          
|         J|  male| 11.0|          
+----------+------+-----+         
With a target of 100% male, all of the males move to the top:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         I|  male| 22.0|
|         J|  male| 11.0|
|         A|female|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         E|female| 66.0|
|         F|female| 55.0|
|         G|female| 44.0|
|         H|female| 33.0|
+----------+------+-----+

Input: the same students, except that student A is male.

Expected output with 20% male: one male is already in place, so we only need to move one up:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         A|male  |100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         E|female| 66.0|
|         I|  male| 22.0|
|         F|female| 55.0|
|         G|female| 44.0|
|         H|female| 33.0|
|         J|  male| 11.0|
+----------+------+-----+

The code below works for some cases, but not for others.

It takes the input DataFrame, ranks it by score, ranks it again within each TYPE, and then adjusts the rank based on the desired ratio.

from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType, StringType
from pyspark.sql.window import Window
import pyspark.sql.functions as f

temp_struct = StructType([
    StructField('STUDENT_ID',  StringType()),
    StructField('TYPE',  StringType()),
    StructField('SCORE',  DoubleType())
])


temp_df = spark.createDataFrame([
    ['A',  'female', 100.0],
    ['B',  'female', 99.0],
    ['C',  'female', 88.0],
    ['D',  'female', 77.0],
    ['E',  'female', 66.0],
    ['F',  'female', 55.0],
    ['G',  'female', 44.0],
    ['H',  'female', 33.0],
    ['I',  'male', 22.0],
    ['J',  'male', 11.0]
], temp_struct)

print('Initial DF')
temp_df.show()

window_by_score_desc = Window.orderBy(f.col('SCORE').desc())
temp_df = temp_df.withColumn('RANK', f.row_number().over(window_by_score_desc)).orderBy(f.col('RANK').asc())
print('With RANK DF')
temp_df.show()

window_by_type_rank = Window.partitionBy(f.col('TYPE')).orderBy(f.col('RANK').asc())
temp_df = temp_df.withColumn('TYPE_RANK', f.row_number().over(window_by_type_rank)).orderBy(f.col('RANK').asc())
print('With TYPE RANK DF')
temp_df.show()
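
# The k-th student of the target type is pulled up to adjusted rank
# (k - 1) * (section_size - 1) + 0.5, i.e. just ahead of the row whose overall
# rank is (k - 1) * (section_size - 1) + 1, spreading the target type evenly through the list.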

def weight_for_type_and_ratio(input_df, student_type, student_ratio):
    section_size = float(1 / student_ratio)
    return input_df.withColumn('ADJUSTED_RANK', 
                               f.when(f.col('TYPE') == student_type, 
                                       (f.col('TYPE_RANK') - 1) * (section_size-1) + .5).otherwise(f.col('RANK')))


print('FINAL WITH ADJUSTED RANK DF')
weight_for_type_and_ratio(temp_df, 'male', .2).orderBy(f.col('ADJUSTED_RANK').asc()).show()
This code works for some cases. For the input defined in the code above, it gives the correctly adjusted, sorted output:

+----------+------+-----+----+---------+-------------+
|STUDENT_ID|  TYPE|SCORE|RANK|TYPE_RANK|ADJUSTED_RANK|
+----------+------+-----+----+---------+-------------+
|         I|  male| 22.0|   9|        1|          0.5|
|         A|female|100.0|   1|        1|          1.0|
|         B|female| 99.0|   2|        2|          2.0|
|         C|female| 88.0|   3|        3|          3.0|
|         D|female| 77.0|   4|        4|          4.0|
|         J|  male| 11.0|  10|        2|          4.5|
|         E|female| 66.0|   5|        5|          5.0|
|         F|female| 55.0|   6|        6|          6.0|
|         G|female| 44.0|   7|        7|          7.0|
|         H|female| 33.0|   8|        8|          8.0|
+----------+------+-----+----+---------+-------------+
But it does not work for other cases, specifically when some of the records are already in place and do not need to be adjusted.

Input DF:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         A|  male|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         E|female| 66.0|
|         F|female| 55.0|
|         G|female| 44.0|
|         H|female| 33.0|
|         I|  male| 22.0|
|         J|  male| 11.0|
+----------+------+-----+
It gives incorrect output:

+----------+------+-----+----+---------+-------------+
|STUDENT_ID|  TYPE|SCORE|RANK|TYPE_RANK|ADJUSTED_RANK|
+----------+------+-----+----+---------+-------------+
|         A|  male|100.0|   1|        1|          0.5|
|         B|female| 99.0|   2|        1|          2.0|
|         C|female| 88.0|   3|        2|          3.0|
|         D|female| 77.0|   4|        3|          4.0|
|         I|  male| 22.0|   9|        2|          4.5|
|         E|female| 66.0|   5|        4|          5.0|
|         F|female| 55.0|   6|        5|          6.0|
|         G|female| 44.0|   7|        6|          7.0|
|         H|female| 33.0|   8|        7|          8.0|
|         J|  male| 11.0|  10|        3|          8.5|
+----------+------+-----+----+---------+-------------+
Here male student I's adjusted rank is too high: a male (A) is already in first place, so I does not need to be promoted that far, yet the formula still pulls him up to 4.5, ahead of better-scoring females.
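One way to see the failure: the formula computes the promoted slot from TYPE_RANK alone, even when the student's natural RANK is already early enough. A minimal sketch of one possible tweak (my own illustration, not a full solution) is to cap the promotion with f.least, so a student is only pulled up when the ratio actually demands it; this reproduces the expected ordering for the failing example above, though other ratios (e.g. 1.0) would still need different spacing:

def weight_for_type_and_ratio_capped(input_df, student_type, student_ratio):
    section_size = float(1 / student_ratio)
    # hypothetical variant: reading the target as "the k-th student of the type only needs
    # to appear by slot (k - 1) * section_size + 1", never promote anyone past their natural rank
    target_slot = (f.col('TYPE_RANK') - 1) * section_size + 0.5
    return input_df.withColumn(
        'ADJUSTED_RANK',
        f.when(f.col('TYPE') == student_type,
               f.least(f.col('RANK').cast('double'), target_slot))
         .otherwise(f.col('RANK').cast('double')))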


Any thoughts on a different approach to this problem? I am not looking for big code changes, maybe just a different thought process.

If you only want to make sure that when you take N students you get a certain fraction of a given category, I think this is more easily done with filter, limit and union. Check the code below:

from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType, StringType
import pyspark.sql.functions as f

temp_struct = StructType([
    StructField('STUDENT_ID', StringType()),
    StructField('TYPE', StringType()),
    StructField('SCORE', DoubleType())
])

temp_df = spark.createDataFrame([
    ['A', 'female', 100.0],
    ['B', 'female', 99.0],
    ['C', 'female', 88.0],
    ['D', 'female', 77.0],
    ['E', 'female', 66.0],
    ['F', 'female', 55.0],
    ['G', 'female', 44.0],
    ['H', 'female', 33.0],
    ['I', 'male', 22.0],
    ['J', 'male', 11.0]
], temp_struct)

# total number of students you want to take
total = 5
# fraction for the category
fractionMale = 0.2

# simply filter and limit the rows for each category, then union them into a single dataframe
temp_df.filter(temp_df.TYPE == 'male').limit(int(total * fractionMale)) \
    .union(temp_df.filter(temp_df.TYPE == 'female').limit(int(total * (1 - fractionMale)))) \
    .show()
Output:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         I|  male| 22.0|
|         A|female|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
+----------+------+-----+
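
A caveat worth noting here (my addition, not part of the original answer): limit() without a preceding sort makes no promise about which rows are kept, so if the highest-scoring students per category are what you want, it seems safer to sort each category explicitly before limiting. A minimal sketch, reusing total and fractionMale from above:

n_male = int(total * fractionMale)
n_female = total - n_male

# sort each category by score before limiting, so the kept rows are the top scorers
top_students = (
    temp_df.filter(f.col('TYPE') == 'male')
           .orderBy(f.col('SCORE').desc())
           .limit(n_male)
    .union(
        temp_df.filter(f.col('TYPE') == 'female')
               .orderBy(f.col('SCORE').desc())
               .limit(n_female))
)
top_students.show()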
Unfortunately, we cannot use sampleBy for this in Spark and be sure of getting the exact expected count for each category. That is, the following will not always return 5 rows with the expected fractions:

total = 5
fractionMale = 0.2
countMale = temp_df.filter(temp_df.TYPE == 'male').count()
countFemale = temp_df.count() - countMale
sampleFractionMale = (total * fractionMale) / countMale
sampleFractionFemale = (total * (1 - fractionMale)) / countFemale
temp_df.sampleBy("TYPE", fractions={'male': sampleFractionMale, 'female': sampleFractionFemale}).show()
Output:

+----------+------+-----+
|STUDENT_ID|  TYPE|SCORE|
+----------+------+-----+
|         A|female|100.0|
|         B|female| 99.0|
|         C|female| 88.0|
|         D|female| 77.0|
|         F|female| 55.0|
|         I|  male| 22.0|
+----------+------+-----+
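
If exact per-category counts are required, one deterministic alternative (a sketch I am adding under the same assumptions as above, not part of the original answer) is to rank rows within each TYPE by score and keep the top rows per category, instead of sampling:

from pyspark.sql.window import Window

# per-category quotas derived from the target fraction
quota = (f.when(f.col('TYPE') == 'male', int(total * fractionMale))
          .otherwise(int(total * (1 - fractionMale))))

# rank students within each TYPE by score, then keep only as many as the quota allows
by_type_score = Window.partitionBy('TYPE').orderBy(f.col('SCORE').desc())
(temp_df
    .withColumn('TYPE_RANK', f.row_number().over(by_type_score))
    .filter(f.col('TYPE_RANK') <= quota)
    .drop('TYPE_RANK')
    .show())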
