How to zip two RDDs in PySpark?


I have been trying to zip the two RDDs below, averagePoints1 and kpoints2. It keeps throwing this error:

ValueError: Can not deserialize RDD with different number of items in pair: (2, 1)
I have tried many things, but I cannot get the two RDDs to match up with the same number of partitions. My next step is to apply a Euclidean distance function to the two lists to measure the difference, so if anyone knows how to resolve this error, or has a different approach, I would really appreciate it.

Thanks in advance.

averagePoints1 = averagePoints.map(lambda x: x[1])
averagePoints1.collect()
Out[15]:
[[34.48939954847243, -118.17286894440112],
 [41.028994230117945, -120.46279399895184],
 [37.41157578999635, -121.60431843383599],
 [34.42627845075509, -113.87191272382309],
 [39.00897622397381, -122.63680410846844]]

kpoints2 = sc.parallelize(kpoints, 4)
In [17]:
kpoints2.collect()
Out[17]:
[[34.0830381107, -117.960562808],
 [38.8057258629, -120.990763316],
 [38.0822414157, -121.956922473],
 [33.4516748053, -116.592291648],
 [38.1808762414, -122.246825578]]
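
For context, RDD.zip requires both RDDs to have the same number of partitions and the same number of elements in each partition, which is exactly what the error above is complaining about. A minimal sketch (not from the original post) of a partition-safe alternative, pairing the two RDDs by position with zipWithIndex and a join:

# zipWithIndex gives (item, position); swap to (position, item) and join,
# which works even when the two RDDs are partitioned differently
indexedA = averagePoints1.zipWithIndex().map(lambda pair: (pair[1], pair[0]))
indexedB = kpoints2.zipWithIndex().map(lambda pair: (pair[1], pair[0]))
paired = indexedA.join(indexedB).values()  # RDD of (averagePoint, kpoint) pairs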
Check this answer.


For future searchers, this is the solution I followed in the end:

kpoints2 is a sample of the RDD, and averagePoints are the RDD's average points. I am going to write a while loop that runs until convergence, so this solution does not help. Do you have any other ideas?
a = [[34.48939954847243, -118.17286894440112],
     [41.028994230117945, -120.46279399895184],
     [37.41157578999635, -121.60431843383599],
     [34.42627845075509, -113.87191272382309],
     [39.00897622397381, -122.63680410846844]]
b = [[34.0830381107, -117.960562808],
     [38.8057258629, -120.990763316],
     [38.0822414157, -121.956922473],
     [33.4516748053, -116.592291648],
     [38.1808762414, -122.246825578]]

rdda = sc.parallelize(a)
rddb = sc.parallelize(b)
c = rdda.zip(rddb)
print(c.collect())
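
This works because a and b have the same length and both RDDs are created by sc.parallelize with the same default number of slices, so their partitions line up element for element, which is exactly the contract zip requires.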
newSample = newCenters.collect()          # new centers as a list
samples = list(zip(newSample, sample))    # sample => old centers
samples1 = sc.parallelize(samples)
# Python 3 lambdas cannot unpack tuples, so index into the (new, old) pair
totalDistance = samples1.map(lambda pair: distanceSquared(pair[0][1], pair[1]))
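
Since the comment above mentions looping until convergence, here is a minimal sketch of how that distance computation could drive the loop. It assumes a hypothetical distanceSquared helper and that newCenters is recomputed each pass as an RDD of (index, point) pairs, which is what the x[1] indexing in the snippet implies; convergeDist is an illustrative threshold.

# Sketch only: distanceSquared and convergeDist are illustrative, and the
# recomputation of newCenters is elided because it is not shown in the post.
def distanceSquared(p, q):
    # squared Euclidean distance between two [lat, lon] points
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

convergeDist = 0.01                  # stop once the centers barely move
totalDistance = float("inf")
sample = kpoints                     # old centers, a plain Python list

while totalDistance > convergeDist:
    # ... recompute newCenters here as an RDD of (index, point) pairs ...
    newSample = newCenters.collect()              # new centers as a list
    pairs = list(zip(newSample, sample))          # (new, old) center pairs
    totalDistance = (sc.parallelize(pairs)
                     .map(lambda pair: distanceSquared(pair[0][1], pair[1]))
                     .sum())
    sample = [new[1] for new in newSample]        # new centers become old

For a handful of centers, summing the distances on the driver with a plain Python sum would avoid launching a Spark job per iteration; the parallelize call is kept here only to mirror the original snippet.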