Apache Spark MLlib linear regression (linear least squares) gives random results


I'm new to Spark and machine learning. I have successfully worked through some MLlib tutorials, but I can't get this one to work properly.

I found the example code here:

(the Linear Regression with SGD part)

The code is as follows:

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.regression.LinearRegressionModel
import org.apache.spark.mllib.regression.LinearRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors

// Load and parse the data
val data = sc.textFile("data/mllib/ridge-data/lpsa.data")
val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
}.cache()

// Building the model
val numIterations = 100
val model = LinearRegressionWithSGD.train(parsedData, numIterations)

// Evaluate model on training examples and compute training error
val valuesAndPreds = parsedData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
val MSE = valuesAndPreds.map{case(v, p) => math.pow((v - p), 2)}.mean()
println("training Mean Squared Error = " + MSE)

// Save and load model
model.save(sc, "myModelPath")
val sameModel = LinearRegressionModel.load(sc, "myModelPath")
(This is exactly what is on the site.)
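As a side note, the parsing step in the `map` above can be sanity-checked without a Spark shell. This is a minimal plain-Python sketch of the same logic (`parse_line` is a hypothetical helper, not part of the original example); each line of lpsa.data has the form "label,feat1 feat2 feat3 ...":

```python
# Plain-Python sketch (no Spark) of the parsing logic in the map step above.
def parse_line(line):
    label_str, features_str = line.split(',')   # label before the comma
    return float(label_str), [float(x) for x in features_str.split(' ')]

label, features = parse_line("-0.4307829,-1.63735562648104 -2.00621178480549")
print(label)          # -0.4307829
print(len(features))  # 2
```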

The result is:

training Mean Squared Error = 6.2087803138063045

and valuesAndPreds.collect gives:

Array[(Double, Double)] = Array((-0.4307829,-1.8383286021929077),
(-0.1625189,-1.4955700806407322), (-0.1625189,-1.118820892849544),
(-0.1625189,-1.6134108278724875), (0.3715636,-0.45171266551058276),
(0.7654678,-1.861316066986158), (0.8544153,-0.3588282725617985),
(1.2669476,-0.5036812148225209), (1.2669476,-1.1534698170911792),
(1.2669476,-0.3561392231695041), (1.3480731,-0.7347031705813306),
(1.446919,-0.08564658011814863), (1.4701758,-0.656725375080344),
(1.4929041,-0.14020483324910105), (1.5581446,-1.9438858658143454),
(1.5993876,-0.02181165554398845), (1.6389967,-0.3778677315868635),
(1.6956156,-1.1710092824030043), (1.7137979,0.27583044213064634),
(1.8000583,0.7812664902440078), (1.8484548,0.94605507153074),
(1.8946169,-0.7217282082851512), (1.9242487,-0.24422843221437684),...

My problem is that the predictions look completely random (and wrong), even though this is an exact copy of the site's example using the same input data (the training set). I don't know where to look; am I missing something?

Please give me some advice or pointers on where to search, so I can read and experiment.

Thanks.


Linear regression here is SGD-based and requires tuning the step size; for more details, see:

In your example, if you set the step size to 0.1 you get a much better result (MSE = 0.5):

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.regression.LinearRegressionModel
import org.apache.spark.mllib.regression.LinearRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors

// Load and parse the data
val data = sc.textFile("data/mllib/ridge-data/lpsa.data")
val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
}.cache()

// Build the model
var regression = new LinearRegressionWithSGD().setIntercept(true)
regression.optimizer.setStepSize(0.1)
val model = regression.run(parsedData)

// Evaluate model on training examples and compute training error
val valuesAndPreds = parsedData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
val MSE = valuesAndPreds.map{case(v, p) => math.pow((v - p), 2)}.mean()
println("training Mean Squared Error = " + MSE)

For another example with a more realistic dataset, see:


As zero323 explained, setting the intercept to true solves the problem. If it is not set to true, the regression line is forced through the origin, which is not appropriate in this case. (I am not sure why this is not included in the sample code.)

So, to fix your problem, change the following line in your (PySpark) code:

model = LinearRegressionWithSGD.train(parsedData, numIterations)

to:

model = LinearRegressionWithSGD.train(parsedData, numIterations, intercept=True)

Although it is not explicitly mentioned, this is also why the code by "selvinsource" in the question above works. In that example, changing the step size alone does not help much.


I am using Python and I tried your suggestion, adding step=0.001, but I still get very strange, random-looking weight and intercept values, like (weights=[-1.1598652153e+75], intercept=-6.02077624272919e+69), so my predictions are also very wrong. My detailed post is here:

I am not familiar with the Python API; my only suggestion is to try various step sizes and to make sure the training data is compatible with a linear algorithm.
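To see why both answers work, here is a minimal plain-Python gradient-descent sketch. This is not Spark's implementation; it is full-batch gradient descent on the MSE of 1-D toy data y = 2x + 3, with the helper `gd` and all values chosen for illustration. With a too-large step the weights blow up to huge, random-looking values like those reported in the question; without an intercept the fitted slope is biased, because the line is forced through the origin:

```python
# Sketch only (assumptions: full-batch gradient descent on MSE, toy 1-D data).
def gd(xs, ys, step, iters, fit_intercept=True):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        # Gradient of (1/n) * sum((w*x + b - y)^2) w.r.t. w and b.
        grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w -= step * grad_w
        if fit_intercept:
            b -= step * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 3 for x in xs]  # true model: slope 2, intercept 3

print(gd(xs, ys, step=0.1, iters=500))  # converges near (2.0, 3.0)
print(gd(xs, ys, step=1.0, iters=20))   # weights explode: the step is too large
print(gd(xs, ys, step=0.1, iters=500, fit_intercept=False))  # slope near 3.0, not 2.0
```

In MLlib terms, `regression.optimizer.setStepSize(0.1)` plays the role of `step` here, and `setIntercept(true)` (or `intercept=True` in PySpark) the role of `fit_intercept`.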