K-means in Python

I am trying to run Spark's k-means algorithm with Python, using the example code from the library. The code runs, but in order to plot the results I need to work with the object returned by sameModel = KMeansModel.load(sc, "KMeansModel2"), and I don't know how to do that. Should I load it into a CSV file? Help!

  from __future__ import print_function
  # $example on$
  from numpy import array
  from math import sqrt
  # $example off$
  from pyspark import SparkContext
  # $example on$
  from pyspark.mllib.clustering import KMeans, KMeansModel
  # $example off$
  if __name__ == "__main__":
    sc = SparkContext(appName="KMeansExample")  # SparkContext
    # $example on$
    # Load and parse the data
    data = sc.textFile("kmeans_data2.txt")
    parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))
    # Build the model (cluster the data)
    clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")
    # Evaluate clustering by computing Within Set Sum of Squared Errors
    def error(point):
        center = clusters.centers[clusters.predict(point)]
        return sqrt(sum([x**2 for x in (point - center)]))
    WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))
    # Save and load model
    clusters.save(sc, "KMeansModel2")
    sameModel = KMeansModel.load(sc, "KMeansModel2")
    print (sameModel)
    # $example off$
    sc.stop()
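
For plotting there is no need to write anything to a CSV file: sameModel.clusterCenters returns the cluster centers as a list of numpy arrays, and sameModel.predict assigns each point to a cluster. Below is a minimal sketch, assuming the points are 2-dimensional and matplotlib is installed; the names points, labels, and centers are illustrative, not part of the Spark example:

  import numpy as np
  import matplotlib.pyplot as plt  # assumed to be installed; not part of the Spark example

  # Collect the parsed points and their cluster assignments to the driver
  # (fine for a small demo dataset).
  points = np.array(parsedData.collect())
  labels = sameModel.predict(parsedData).collect()

  # clusterCenters is a list of numpy arrays, one center per cluster.
  centers = np.array(sameModel.clusterCenters)

  # Scatter the points coloured by cluster, with the centers marked as crosses.
  plt.scatter(points[:, 0], points[:, 1], c=labels)
  plt.scatter(centers[:, 0], centers[:, 1], marker='x', s=100, c='red')
  plt.show()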

If you want to use the model's predict method, you need to do it within a running Spark context; for example, you can do something like the sketch below.
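
A sketch of loading the saved model and calling predict inside an active SparkContext; the sample point values below are made up for illustration:

  from numpy import array
  from pyspark import SparkContext
  from pyspark.mllib.clustering import KMeansModel

  sc = SparkContext(appName="KMeansPredict")
  sameModel = KMeansModel.load(sc, "KMeansModel2")

  # Predict the cluster index of a single point...
  print(sameModel.predict(array([0.1, 0.1])))

  # ...or of an RDD of points.
  rdd = sc.parallelize([array([0.1, 0.1]), array([9.0, 9.0])])
  print(sameModel.predict(rdd).collect())

  sc.stop()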