Apache Spark: persist() after partitionBy doesn't work


I'm trying to see whether calling persist() on an RDD after partitionBy saves subsequent operations from re-computing the shuffle, but the Spark UI seems to indicate that nothing is being reused.

If persist worked, I would expect stage 7 or 8 to be skipped.

(Either way, my test code may be wrong; if so, please let me know.)

Here is the code I'm using:

 from pyspark import SparkContext, SparkConf
 from pyspark.rdd import portable_hash
 from pyspark.sql import SparkSession, Row
 from pyspark.storagelevel import StorageLevel

 transactions = [                                                                                                                                                  
     {'name': 'Bob', 'amount': 100, 'country': 'United Kingdom'},                                                                                                  
     {'name': 'James', 'amount': 15, 'country': 'United Kingdom'},                                                                                                 
     {'name': 'Marek', 'amount': 51, 'country': 'Poland'},
     {'name': 'Johannes', 'amount': 200, 'country': 'Germany'},
     {'name': 'Paul', 'amount': 75, 'country': 'Poland'},
 ]

 conf = SparkConf().setAppName("word count4").setMaster("local[3]")
 sc = SparkContext(conf=conf)
 lines = sc.textFile("in/word_count.text")
 words = lines.flatMap(lambda line: line.split(" "))

 rdd = words.map(lambda word: (word, 1))

 rdd = rdd.partitionBy(4)                                                                                                                                      
 rdd = rdd.persist(StorageLevel.MEMORY_ONLY)                                                                                                                   
 rdd = rdd.reduceByKey(lambda x, y: x+y)

 for word, count in rdd.collect():
     print("{} : {}".format(word, count))

 rdd = rdd.sortByKey(ascending=False)

 for word, count in rdd.collect():
     print("{} : {}".format(word, count))

Your expectation is incorrect. If you check the DAG:

(4) PythonRDD[28] at collect at <ipython-input-15-a9f47c6b3258>:3 []
 |  MapPartitionsRDD[27] at mapPartitions at PythonRDD.scala:133 []
 |  ShuffledRDD[26] at partitionBy at NativeMethodAccessorImpl.java:0 []
 +-(4) PairwiseRDD[25] at sortByKey at <ipython-input-15-a9f47c6b3258>:1 []
    |  PythonRDD[24] at sortByKey at <ipython-input-15-a9f47c6b3258>:1 []
    |  MapPartitionsRDD[20] at mapPartitions at PythonRDD.scala:133 []
    |      CachedPartitions: 4; MemorySize: 6.6 KB; ExternalBlockStoreSize: 0.0 B; DiskSize: 0.0 B
    |  ShuffledRDD[19] at partitionBy at NativeMethodAccessorImpl.java:0 []
    +-(1) PairwiseRDD[18] at partitionBy at <ipython-input-13-fff304ea68c9>:6 []
       |  PythonRDD[17] at partitionBy at <ipython-input-13-fff304ea68c9>:6 []
       |  in/word_count.text MapPartitionsRDD[16] at textFile at NativeMethodAccessorImpl.java:0 []
       |  in/word_count.text HadoopRDD[15] at textFile at NativeMethodAccessorImpl.java:0 []

you will see that the cached component is only one of many operations contributing to the stage in question. While the cached data is indeed reused, the remaining operations (preparing the shuffle for sortByKey) still have to be computed.
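To see why the shuffle for sortByKey cannot be skipped: partitionBy places keys by hash, while sortByKey needs the data laid out by key ranges so that partition i holds only keys smaller than those in partition i+1. Those two layouts generally disagree, so the data must move. A plain-Python sketch (no Spark required; hash_partition and range_partition are illustrative stand-ins, not Spark APIs):

```python
def hash_partition(key, num_partitions):
    # stand-in for the hash-based assignment partitionBy uses
    # (PySpark's default is pyspark.rdd.portable_hash)
    return hash(key) % num_partitions

def range_partition(key, bounds):
    # stand-in for a range partitioner: partition i holds keys <= bounds[i],
    # the last partition holds everything larger
    for i, b in enumerate(bounds):
        if key <= b:
            return i
    return len(bounds)

keys = ["apple", "banana", "cherry", "date"]

# where a hash partitioner puts each key (4 partitions)
hash_layout = {k: hash_partition(k, 4) for k in keys}

# where a range partitioner with bounds ["b", "c", "d"] puts each key
range_layout = {k: range_partition(k, ["b", "c", "d"]) for k in keys}

# The two layouts generally differ, so even hash-partitioned (and cached)
# data has to be shuffled again before a sort.
print(hash_layout)
print(range_layout)
```

The cache still helps: the map, flatMap, and partitionBy stages feeding the cached RDD are not recomputed. What it cannot do is eliminate the new shuffle that sortByKey's range partitioning requires.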

Hi, thanks for your answer. I think I have two questions. 1. Since I did partitionBy (which I believe partitions by the pair RDD's key by default), I thought sortByKey would not need a shuffle (the whole point of doing this was to reduce shuffles). 2. Looking at the output you posted, it is hard to see that the cached data is indeed reused. (How can I tell? Or is there a resource I can read to learn how to interpret this output?)
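Regarding question 2: in an RDD's toDebugString() lineage (like the DAG shown above), a "CachedPartitions: ..." line directly under an RDD marks the RDD whose partitions were read from the cache rather than recomputed. A plain-Python sketch of scanning a captured lineage for that marker (the cached_rdds helper is hypothetical, and sample_dag is abbreviated from the DAG in the answer):

```python
# Abbreviated lineage text, as produced by rdd.toDebugString() in the answer.
sample_dag = """\
(4) PythonRDD[28] at collect
 |  MapPartitionsRDD[27] at mapPartitions
 |  ShuffledRDD[26] at partitionBy
 +-(4) PairwiseRDD[25] at sortByKey
    |  MapPartitionsRDD[20] at mapPartitions
    |      CachedPartitions: 4; MemorySize: 6.6 KB; ExternalBlockStoreSize: 0.0 B; DiskSize: 0.0 B
    |  ShuffledRDD[19] at partitionBy
"""

def cached_rdds(debug_string):
    """Return the lines naming RDDs that are followed by a CachedPartitions marker."""
    lines = debug_string.splitlines()
    hits = []
    for i, line in enumerate(lines[:-1]):
        if "CachedPartitions:" in lines[i + 1]:
            # strip the tree-drawing characters to leave just the RDD description
            hits.append(line.strip(" |+-"))
    return hits

print(cached_rdds(sample_dag))
# Here the marker sits under MapPartitionsRDD[20]: its 4 partitions (6.6 KB)
# came from memory, so everything below it in the tree was not recomputed.
```

The Spark UI shows the same information differently: a stage that reads cached data displays a green dot on the cached RDD in the DAG visualization, and skipped stages are greyed out.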