Spark MLlib (Python): resolving an error when saving a model


I am getting an error. Below is a linear regression example. I am on Spark 1.6.1 and Python 3.5.1. What do I need to change?

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD, LinearRegressionModel

# Load and parse the data
def parsePoint(line):
    values = [float(x) for x in line.replace(',', ' ').split(' ')]
    return LabeledPoint(values[0], values[1:])

data = sc.textFile("data/mllib/ridge-data/lpsa.data")
parsedData = data.map(parsePoint)

# Build the model
model = LinearRegressionWithSGD.train(parsedData, iterations=100, step=0.00000001)

# Evaluate the model on training data
valuesAndPreds = parsedData.map(lambda p: (p.label, model.predict(p.features)))
MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))

# Save and load model
>>> model.save(sc, "myModelPath")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\mllib\regression.py", line 185, in save
    java_model.save(sc._jsc.sc(), path)
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\sql\utils.py", line 45, in deco
    return f(*a, **kw)

Just make sure a model/directory with the same name does not already exist. The code above works fine if you rename myModelPath to something else, or delete the myModelPath folder.
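A minimal sketch of that workaround, assuming the model is written to the local filesystem (for HDFS you would remove the directory with the Hadoop tooling instead); myModelPath is the path from the question:

import os
import shutil

from pyspark.mllib.regression import LinearRegressionModel

path = "myModelPath"

# save() fails if the target directory already exists (the cause
# identified above), so clear out leftovers from a previous run
# first. This only applies to a local filesystem path.
if os.path.exists(path):
    shutil.rmtree(path)

model.save(sc, path)
sameModel = LinearRegressionModel.load(sc, path)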

Not reproducible as far as I can tell. The only change needed is removing the tuple parameter unpacking, i.e. turning

lambda (v, p): (v - p)**2

into

MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()

I copied the entire code from the page. You should be able to copy the code… I did exactly the same thing :) I even downloaded fresh binaries instead of the incremental build I use daily. Maybe it is Windows-specific :/
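For background (my addition, not from the thread): tuple parameter unpacking in function and lambda signatures was removed in Python 3 by PEP 3113, which is why the first form is a SyntaxError on Python 3.5 while the second runs fine:

# Python 2 only -- tuple parameter unpacking; a SyntaxError on Python 3:
#     valuesAndPreds.map(lambda (v, p): (v - p) ** 2)

# Python 3 compatible -- take the pair as one argument and index into it:
squaredErrors = valuesAndPreds.map(lambda vp: (vp[0] - vp[1]) ** 2)
MSE = squaredErrors.reduce(lambda x, y: x + y) / valuesAndPreds.count()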