Apache Spark: physical plan for Exchange partitioning shows false/true


This shows up in the physical plan:

repartitionedDF.explain

== Physical Plan ==
Exchange hashpartitioning(purchase_month#25, 10), false, [id=#6]
+- LocalTableScan [item#23, price#24, purchase_month#25]

I noticed the false; in some cases it can also be true. What does this mean? I knew it at one point but have since forgotten.

After some digging, I believe it refers to the noUserSpecifiedNumPartition variable. If you do a repartition, this boolean will be false, because you specified the number of partitions. Otherwise it is true. Try a simple orderBy and I think you should get true.
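For illustration, a minimal spark-shell sketch of that contrast (the CSV path, read option and column name are assumptions mirroring the FileScan in the plan further down, not taken verbatim from the answer):

// assumed input: /tmp/df.csv with columns series, timestamp, value
val df = spark.read.option("header", "true").csv("/tmp/df.csv")

// explicit repartition by a column: per the explanation above, this Exchange should show false
df.repartition('series).explain()

// plain orderBy: the Exchange it introduces (rangepartitioning) should show true
df.orderBy('series).explain()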
I found this out by doing experiments, inspired by […]. Its output, truncated to only the relevant part, is shown below; the true and false in it correspond nicely to the physical plan.
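Presumably the dump was taken from the executed plan's prettyJson, the call the comments further down refer to; a hedged sketch (reusing the df from the sketch above, not verbatim from the answer):

// JSON rendering of the executed physical plan
val json = df.repartition('series).orderBy('series).queryExecution.executedPlan.prettyJson
println(json)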

{
  "class" : "org.apache.spark.sql.execution.exchange.ShuffleExchangeExec",
  "num-children" : 1,
  "outputPartitioning" : [ {
    "class" : "org.apache.spark.sql.catalyst.plans.physical.RangePartitioning",
    "num-children" : 1,
    "ordering" : [ 0 ],
    "numPartitions" : 200
  }, {
    "class" : "org.apache.spark.sql.catalyst.expressions.SortOrder",
    "num-children" : 1,
    "child" : 0,
    "direction" : {
      "object" : "org.apache.spark.sql.catalyst.expressions.Ascending$"
    },
    "nullOrdering" : {
      "object" : "org.apache.spark.sql.catalyst.expressions.NullsFirst$"
    },
    "sameOrderExpressions" : {
      "object" : "scala.collection.immutable.Set$EmptySet$"
    }
  }, {
    "class" : "org.apache.spark.sql.catalyst.expressions.AttributeReference",
    "num-children" : 0,
    "name" : "series",
    "dataType" : "string",
    "nullable" : true,
    "metadata" : { },
    "exprId" : {
      "product-class" : "org.apache.spark.sql.catalyst.expressions.ExprId",
      "id" : 16,
      "jvmId" : "35ee1aa5-f899-4fca-a8a6-a06c3eaabe5c"
    },
    "qualifier" : [ ]
  } ],
  "child" : 0,
  "noUserSpecifiedNumPartition" : true
}, {
  "class" : "org.apache.spark.sql.execution.exchange.ShuffleExchangeExec",
  "num-children" : 1,
  "outputPartitioning" : [ {
    "class" : "org.apache.spark.sql.catalyst.plans.physical.HashPartitioning",
    "num-children" : 1,
    "expressions" : [ 0 ],
    "numPartitions" : 200
  }, {
    "class" : "org.apache.spark.sql.catalyst.expressions.AttributeReference",
    "num-children" : 0,
    "name" : "series",
    "dataType" : "string",
    "nullable" : true,
    "metadata" : { },
    "exprId" : {
      "product-class" : "org.apache.spark.sql.catalyst.expressions.ExprId",
      "id" : 16,
      "jvmId" : "35ee1aa5-f899-4fca-a8a6-a06c3eaabe5c"
    },
    "qualifier" : [ ]
  } ],
  "child" : 0,
  "noUserSpecifiedNumPartition" : false
}
df.repartition('series).orderBy('series).explain
== Physical Plan ==
*(1) Sort [series#16 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(series#16 ASC NULLS FIRST, 200), true, [id=#192]
   +- Exchange hashpartitioning(series#16, 200), false, [id=#190]
      +- FileScan csv [series#16,timestamp#17,value#18] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex[file:/tmp/df.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<series:string,timestamp:string,value:string>

Just a quick question: do you know whether pyspark can use […]queryExecution.executedPlan.prettyJson?

@Kafels, you can access the Java object with df._jdf.queryExecution().executedPlan().prettyJson()