
Does a PySpark DataFrame orderBy sort at the partition level or across the whole DataFrame?


When I perform an orderBy on a PySpark DataFrame, does it sort the data across all partitions (i.e. the entire result), or only at the partition level? If it is the latter, can anyone suggest how to perform an orderBy across the whole DataFrame? I have an orderBy at the end.

My current code:

# Assumed imports (not shown in the original):
#   from pyspark.sql import Window
#   from pyspark.sql import functions as F
#   from pyspark.sql.functions import desc
def extract_work(self, days_to_extract):
    source_folders = self.work_folder_provider.get_work_folders(s3_source_folder=self.work_source,
                                                                warehouse_ids=self.warehouse_ids,
                                                                days_to_extract=days_to_extract)
    source_df = self._load_from_s3(source_folders)

    # Partition and de-dupe the data-frame, retaining the latest record
    source_df = self.data_frame_manager.partition_and_dedupe_data_frame(source_df,
                                                                        partition_columns=['binScannableId', 'warehouseId'],
                                                                        sort_key='cameraCaptureTimestampUtc',
                                                                        desc=True)
    # Filter out anything that does not qualify for virtual count.
    source_df = self._virtual_count_filter(source_df)

    history_folders = self.work_folder_provider.get_history_folders(s3_history_folder=self.history_source,
                                                                    days_to_extract=days_to_extract)
    history_df = self._load_from_s3(history_folders)

    # Filter out historical items
    if history_df:
        source_df = source_df.join(history_df, 'binScannableId', 'leftanti')
    else:
        self.logger.error("No History was found")

    # Sort by defectProbability
    source_df = source_df.orderBy(desc('defectProbability'))

    return source_df

def partition_and_dedupe_data_frame(data_frame, partition_columns, sort_key, desc):
    if desc:
        window = Window.partitionBy(partition_columns).orderBy(F.desc(sort_key))
    else:
        window = Window.partitionBy(partition_columns).orderBy(F.asc(sort_key))

    # Keep only the top-ranked row for each partition key
    data_frame = data_frame.withColumn('rank', F.rank().over(window)).filter(F.col('rank') == 1).drop('rank')
    return data_frame

def _virtual_count_filter(self, source_df):
    df = self._create_data_frame()
    # Union the rows whose defectProbability exceeds the per-quantity threshold
    for key in self.virtual_count_thresholds.keys():
        temp_df = source_df.filter((source_df['expectedQuantity'] == key) &
                                   (source_df['defectProbability'] > self.virtual_count_thresholds[key]))
        df = df.union(temp_df)
    return df
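The window-plus-rank dedupe above keeps only the newest record per (binScannableId, warehouseId) key. As a plain-Python illustration of that idea (toy dicts standing in for rows, no Spark involved), the same "keep the latest row per key" logic can be sketched as:

```python
# Pure-Python sketch of the window/rank dedupe: for each
# (binScannableId, warehouseId) key, keep only the row with the
# latest cameraCaptureTimestampUtc. Toy data for illustration only.
rows = [
    {"binScannableId": "b1", "warehouseId": "w1", "cameraCaptureTimestampUtc": 2, "defectProbability": 0.5},
    {"binScannableId": "b1", "warehouseId": "w1", "cameraCaptureTimestampUtc": 5, "defectProbability": 0.9},
    {"binScannableId": "b2", "warehouseId": "w1", "cameraCaptureTimestampUtc": 1, "defectProbability": 0.7},
]

def dedupe_keep_latest(rows, partition_columns, sort_key):
    latest = {}
    for row in rows:
        key = tuple(row[c] for c in partition_columns)
        # Overwrite only when this row is newer than the stored one
        if key not in latest or row[sort_key] > latest[key][sort_key]:
            latest[key] = row
    return list(latest.values())

deduped = dedupe_keep_latest(rows, ["binScannableId", "warehouseId"], "cameraCaptureTimestampUtc")
# deduped keeps the timestamp-5 row for (b1, w1) and the single (b2, w1) row
```

Note that F.rank() in the Spark version can keep more than one row when timestamps tie exactly; F.row_number() would guarantee a single row per key.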
When I run df.explain(), I get the following:

== Physical Plan ==
*Sort [defectProbability#2 DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(defectProbability#2 DESC NULLS LAST, 25)
   +- *Project [expectedQuantity#0, cameraCaptureTimestampUtc#1, defectProbability#2, binScannableId#3, warehouseId#4, defectResult#5]
      +- *Filter ((isnotnull(rank#35) && (rank#35 = 1)) && (((((((expectedQuantity#0 = 0) && (defectProbability#2 > 0.99)) || ((expectedQuantity#0 = 1) && (defectProbability#2 > 0.98))) || ((expectedQuantity#0 = 2) && (defectProbability#2 > 0.99))) || ((expectedQuantity#0 = 3) && (defectProbability#2 > 0.99))) || ((expectedQuantity#0 = 4) && (defectProbability#2 > 0.99))) || ((expectedQuantity#0 = 5) && (defectProbability#2 > 0.99))))
         +- Window [rank(cameraCaptureTimestampUtc#1) windowspecdefinition(binScannableId#3, warehouseId#4, cameraCaptureTimestampUtc#1 DESC NULLS LAST, ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rank#35], [binScannableId#3, warehouseId#4], [cameraCaptureTimestampUtc#1 DESC NULLS LAST]
            +- *Sort [binScannableId#3 ASC NULLS FIRST, warehouseId#4 ASC NULLS FIRST, cameraCaptureTimestampUtc#1 DESC NULLS LAST], false, 0
               +- Exchange hashpartitioning(binScannableId#3, warehouseId#4, 25)
                  +- Union
                     :- Scan ExistingRDD[expectedQuantity#0,cameraCaptureTimestampUtc#1,defectProbability#2,binScannableId#3,warehouseId#4,defectResult#5]
                     +- *FileScan json [expectedQuantity#13,cameraCaptureTimestampUtc#14,defectProbability#15,binScannableId#16,warehouseId#17,defectResult#18] Batched: false, Format: JSON, Location: InMemoryFileIndex[s3://vbi-autocount-chunking-prod-nafulfillment2/TPA1/2019/04/25/12/vbi-ac-chunk..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<expectedQuantity:int,cameraCaptureTimestampUtc:string,defectProbability:double,binScannabl...
orderBy() is a "wide transformation", which means Spark needs to trigger a "shuffle" and a "stage split (1 partition to many output partitions)", thereby retrieving all of the partition splits distributed across the cluster in order to perform the orderBy().

If you look at the explain plan, it has a repartitioning indicator with the default of 200 output partitions (the spark.sql.shuffle.partitions configuration), which are written to disk after execution. This tells you that a "wide transformation", i.e. a "shuffle", will occur when a Spark "action" is executed.
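That shuffle-partition count is tunable when 200 is too high or too low for the data volume. A minimal sketch, assuming an already-created SparkSession bound to the name `spark`:

```python
# Set the number of output partitions used by wide transformations
# such as orderBy(); the default is 200.
spark.conf.set("spark.sql.shuffle.partitions", "25")
```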

Other "wide transformations" include:
distinct(), groupBy(), and join() => *sometimes*

For example:

from pyspark.sql.functions import desc
df = spark.range(10).orderBy(desc("id"))
df.show()
df.explain()

+---+
| id|
+---+
|  9|
|  8|
|  7|
|  6|
|  5|
|  4|
|  3|
|  2|
|  1|
|  0|
+---+

== Physical Plan ==
*(2) Sort [id#6L DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(id#6L DESC NULLS LAST, 200)
   +- *(1) Range (0, 10, step=1, splits=8)


Thanks for the reply. I explicitly partition the DataFrame by two columns to remove duplicates, and then try to sort it by a third column. I found the data is not sorted across the whole DataFrame. Do you need a code snippet?
Sure... please provide more code. You could also use .dropDuplicates() to remove duplicate rows... not sure why you want to partition here.
I have a very customized way of removing duplicates; see this question. I wanted to add that in my code I partition the DataFrame by two columns and remove duplicates before sorting it by a third column. I wrote a test that verifies whether the items in the DataFrame are sorted, and they are not. I initially thought something was wrong with the test, but it seems fine.
I added your code from the comments to the answer below. In your code you explicitly partition by some columns and order the data, which only results in partition-level sorting.
Actually, I do another sort after that partitioning step.
Please add that to the question as well (click the edit button). Done. Thanks for taking a look.
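To make the partition-level vs. global distinction concrete, here is a pure-Python simulation (plain lists standing in for partitions, not actual Spark): a sortWithinPartitions-style sort orders each partition independently, while an orderBy-style sort range-shuffles the rows first so that the concatenated result is globally ordered.

```python
# Simulate three partitions as plain lists (illustration only, not Spark).
partitions = [[7, 1, 9], [4, 8, 2], [6, 3, 5]]

# sortWithinPartitions-style: each partition is sorted independently,
# so concatenating the partitions is NOT globally sorted.
within = [sorted(p) for p in partitions]
flat_within = [x for p in within for x in p]

# orderBy-style: a range shuffle first moves rows into disjoint key
# ranges (one range per output partition), then sorts each partition,
# so the concatenation IS globally sorted.
all_rows = sorted(x for p in partitions for x in p)
ranges = [all_rows[0:3], all_rows[3:6], all_rows[6:9]]  # disjoint ranges
flat_global = [x for p in ranges for x in p]

print(flat_within)  # [1, 7, 9, 2, 4, 8, 3, 5, 6] -- not globally sorted
print(flat_global)  # [1, 2, 3, 4, 5, 6, 7, 8, 9] -- globally sorted
```

This mirrors the `Exchange rangepartitioning` step visible in the explain plans above: the extra exchange is exactly what makes orderBy a total, cross-partition sort.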