
Python: converting a SAS rank with ties set to HIGH to PySpark


I am trying to replicate the following SAS code in PySpark:

PROC RANK DATA = aud_baskets OUT = aud_baskets_ranks GROUPS=10 TIES=HIGH;
BY customer_id;
VAR expenditure;
RANKS basket_rank;
RUN;
The intent is to rank all expenditures within each customer_id block. The data looks like this:

+-----------+--------------+-----------+
|customer_id|transaction_id|expenditure|
+-----------+--------------+-----------+
|          A|             1|         34|
|          A|             2|         90|
|          B|             1|         89|
|          A|             3|          6|
|          B|             2|          8|
|          B|             3|          7|
|          C|             1|         96|
|          C|             2|          9|
+-----------+--------------+-----------+
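
For reference, a quick hand-worked check of what the SAS step should give for customer A, assuming the PROC RANK GROUPS= formula documented by SAS, group = FLOOR(rank*k/(n+1)) applied to the TIES=HIGH rank; the values 6, 34 and 90 come from the table above, so treat the resulting bins as an assumption to verify against actual PROC RANK output:

from math import floor

groups = 10
expenditures = [6, 34, 90]                                        # customer A's three transactions
n = len(expenditures)                                             # non-missing values in the BY group
for value in sorted(expenditures):
    tie_high_rank = sum(1 for v in expenditures if v <= value)    # TIES=HIGH rank
    print(value, floor(tie_high_rank * groups / (n + 1)))         # expected bins: 2, 5, 7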
In PySpark, I tried the following:

from pyspark.sql import Window
from pyspark.sql.functions import col, ntile

spendWindow = Window.partitionBy('customer_id').orderBy(col('expenditure').asc())
aud_baskets = (aud_baskets_ranks.withColumn('basket_rank', ntile(10).over(spendWindow)))
The problem is that PySpark does not (as far as I can tell) let the user change how ties are handled, the way SAS does. I need that behaviour in PySpark so that, whenever this edge case comes up, the tied value is bumped up into the next tier rather than dropped into the tier below.


Or is there a way to write a custom version of this ranking myself?
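
One way to hand-roll the TIES=HIGH behaviour is a minimal sketch like the one below, reusing the DataFrame and column names from the question: count, inside each customer block, the rows whose expenditure is less than or equal to the current row's. With a range frame ordered on expenditure, the frame ending at the current row includes its peers, so this count equals the highest rank within a tie group, which is what TIES=HIGH assigns.

from pyspark.sql import Window
from pyspark.sql import functions as F

# names (aud_baskets, customer_id, expenditure) come from the question
tieHighWindow = (Window.partitionBy('customer_id')
                       .orderBy('expenditure')
                       .rangeBetween(Window.unboundedPreceding, Window.currentRow))

aud_baskets_ranks = aud_baskets.withColumn(
    'tie_high_rank', F.count('expenditure').over(tieHighWindow))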

Use dense_rank: it gives tied values the same rank and does not skip the next rank. The ntile function splits the records in each partition into n parts, which in your case is 10.

from pyspark.sql import Window
from pyspark.sql.functions import col, dense_rank

spendWindow = Window.partitionBy('customer_id').orderBy(col('expenditure').asc())
aud_baskets = aud_baskets_ranks.withColumn('basket_rank', dense_rank().over(spendWindow))
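
To see the difference on tied values, here is a small self-contained toy (the data and session setup are made up for illustration): rank() leaves a gap after a tie, dense_rank() does not, and ntile() can split tied rows across different tiles.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
toy = spark.createDataFrame(
    [('A', 6), ('A', 34), ('A', 34), ('A', 90)],   # illustrative data only
    ['customer_id', 'expenditure'])

w = Window.partitionBy('customer_id').orderBy('expenditure')
(toy.withColumn('rank', F.rank().over(w))               # 1, 2, 2, 4 -> gap after the tie
    .withColumn('dense_rank', F.dense_rank().over(w))   # 1, 2, 2, 3 -> no gap
    .withColumn('ntile_4', F.ntile(4).over(w))          # 1, 2, 3, 4 -> the tie is split
    .show())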

Please try the code below. It was generated by an automated tool called SPROCKET. It should take care of ties.

from pyspark.sql import Window
from pyspark.sql.functions import asc, expr, rank

df = aud_baskets
for (colToRank, rankedName) in zip(['expenditure'], ['basket_rank']):
    wA = Window.orderBy(asc(colToRank))
    df_w_rank = df.withColumn('raw_rank', rank().over(wA))
    # rows sharing a raw_rank are ties; bump them up to the top rank of their tie group
    ties = df_w_rank.groupBy('raw_rank').count().filter("""count > 1""")
    df_w_rank = (df_w_rank.join(ties, ['raw_rank'], 'left')
                 .withColumn(rankedName,
                             expr("""case when count is not null
                                          then (raw_rank + count - 1)
                                          else raw_rank end""")))
    rankedNameGroup = rankedName
    n = df_w_rank.count()
    # split the tie-adjusted ranks into 10 groups, SAS-style: FLOOR(rank*k/(n+1))
    df_with_rank_groups = df_w_rank.withColumn(
        rankedNameGroup,
        expr("""FLOOR({rankedName}*{k}/({n}+1))""".format(k=10, n=n, rankedName=rankedName)))
    df = df_with_rank_groups
aud_baskets_ranks = df_with_rank_groups.drop('raw_rank', 'count')

Dense rank will not work in this case, because it ranks each row by expenditure within each customer_id, whereas the SAS code buckets each customer_id's expenditures into 10 bins. The "ties" issue I am referring to is this: when a row's expenditure is 3 and there are two bins it could be sorted into, [2,3] and [3,4], PySpark kicks it into bin [2,3], while SAS, via the TIES=HIGH parameter, pushes the value up into the other bin. @罗宾金
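
Putting these pieces together, here is a hedged end-to-end sketch of the SAS step (GROUPS=10, TIES=HIGH, BY customer_id) in PySpark. The DataFrame and column names follow the question, and the binning formula FLOOR(rank*k/(n+1)) is taken from the SAS documentation for the GROUPS= option, so the exact bin edges are an assumption to verify against PROC RANK output.

from pyspark.sql import Window
from pyspark.sql import functions as F

groups = 10

# TIES=HIGH rank: count of rows with expenditure <= the current row's, inside the customer block
tieHighWindow = (Window.partitionBy('customer_id')
                       .orderBy('expenditure')
                       .rangeBetween(Window.unboundedPreceding, Window.currentRow))

# n: non-null expenditures per customer block, the GROUPS= denominator
blockWindow = Window.partitionBy('customer_id')

aud_baskets_ranks = (
    aud_baskets
    .withColumn('tie_high_rank', F.count('expenditure').over(tieHighWindow))
    .withColumn('n_in_block', F.count('expenditure').over(blockWindow))
    # FLOOR(rank*k/(n+1)) follows the SAS docs for GROUPS=; verify against PROC RANK output
    .withColumn('basket_rank',
                F.floor(F.col('tie_high_rank') * groups / (F.col('n_in_block') + 1)))
    .drop('tie_high_rank', 'n_in_block')
)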