Scala: how to extract the variable weights from a Spark pipeline logistic model?
I am currently trying to learn Spark pipelines (Spark 1.6.0). I imported my datasets (training and test) as oas.sql.DataFrame objects. After executing the code below, the resulting model is an oas.ml.tuning.CrossValidatorModel.

You can use model.transform(test) to make predictions on the test data in Spark. However, I would like to compare the weights the model uses for prediction with the weights obtained from R. How can I extract the weights of the predictors and the intercept (if any) from the model? The Scala code is:
import sqlContext.implicits._
import org.apache.spark.mllib.linalg.{Vectors, Vector}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.{LogisticRegression, LogisticRegressionModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}
val conTrain = sc.textFile("AbsolutePath2Train.txt")
val conTest = sc.textFile("AbsolutePath2Test.txt")
// parse text and convert to sql.DataFrame
val train = conTrain.map { line =>
val parts = line.split(",")
LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(" +").map(_.toDouble)))
}.toDF()
val test = conTest.map { line =>
val parts = line.split(",")
LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(" +").map(_.toDouble)))
}.toDF()
// set parameter space and evaluation method
val lr = new LogisticRegression().setMaxIter(400)
val pipeline = new Pipeline().setStages(Array(lr))
val paramGrid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.1, 0.01)).addGrid(lr.fitIntercept).addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0)).build()
val cv = new CrossValidator().setEstimator(pipeline).setEvaluator(new BinaryClassificationEvaluator).setEstimatorParamMaps(paramGrid).setNumFolds(2)
// fit logistic model
val model = cv.fit(train)
// If you want to predict with test
val pred = model.transform(test)
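As a side note, the per-line parsing in the map above (the label before the comma, space-separated features after it) can be sanity-checked in plain Scala, without a Spark environment. The values below are made-up sample input, not from the actual dataset:

```scala
// Plain-Scala check of the parsing logic used above: each input line is
// "label,f1 f2 f3 ..." with one or more spaces between features.
val line = "1.0,2.0 3.5  4.0"
val parts = line.split(",")
val label = parts(0).toDouble
// split(" +") is a regex split, so runs of spaces collapse into one separator
val features = parts(1).split(" +").map(_.toDouble)
println(label)                     // 1.0
println(features.mkString(", "))   // 2.0, 3.5, 4.0
```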
My Spark environment is not accessible right now, so this code was retyped and rechecked; I hope it is correct. So far I have tried searching online and asking others. Suggestions and criticism of my code are welcome. [Answer:] I was looking for the same thing. You may already have an answer, but here it is anyway:
import org.apache.spark.ml.classification.LogisticRegressionModel
val lrmodel = model.bestModel.asInstanceOf[LogisticRegressionModel]
println(lrmodel.weights, lrmodel.intercept)
I am still not sure how to extract the weights from the "model" above. However, by restructuring the process, the following works in Spark 1.6.0:
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, TrainValidationSplit}
val lr = new LogisticRegression().setMaxIter(400)
val paramGrid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.1, 0.01)).addGrid(lr.fitIntercept).addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0)).build()
val trainValidationSplit = new TrainValidationSplit().setEstimator(lr).setEvaluator(new BinaryClassificationEvaluator).setEstimatorParamMaps(paramGrid).setTrainRatio(0.8)
val restructuredModel = trainValidationSplit.fit(train)
val lrmodel = restructuredModel.bestModel.asInstanceOf[LogisticRegressionModel]
lrmodel.weights
lrmodel.intercept
I noticed the difference between "lrmodel" here and the "model" generated above:

model.bestModel --> gives oas.ml.Model[_] = pipeline_****
restructuredModel.bestModel --> gives oas.ml.Model[_] = logreg_****

That is why we can cast restructuredModel.bestModel to LogisticRegressionModel but not model.bestModel. I will add more when I understand the reason for the difference.
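The difference can be illustrated without a Spark cluster. The classes below are hypothetical stand-ins, not the real Spark API, but they mirror the runtime types involved: the CrossValidator was given a Pipeline, so its bestModel is a PipelineModel wrapping the stage, while the TrainValidationSplit was given the LogisticRegression directly, so its bestModel is the LogisticRegressionModel itself:

```scala
// Hypothetical stand-in hierarchy mirroring the runtime types in question.
abstract class Model
class PipelineModel(val stages: Array[Model]) extends Model
class LogisticRegressionModel(val weights: Array[Double]) extends Model

val lrm = new LogisticRegressionModel(Array(0.5, -1.2))
val fromTvs: Model = lrm                          // TrainValidationSplit(lr) result
val fromCv: Model = new PipelineModel(Array(lrm)) // CrossValidator(pipeline) result

// Direct cast succeeds only in the TrainValidationSplit case:
println(fromTvs.asInstanceOf[LogisticRegressionModel].weights.mkString(", "))

// The CrossValidator result must be unwrapped via its pipeline stage first;
// casting fromCv straight to LogisticRegressionModel would throw ClassCastException.
val stage0 = fromCv.asInstanceOf[PipelineModel].stages(0)
println(stage0.asInstanceOf[LogisticRegressionModel].weights.mkString(", "))
```

Both prints show the same weights; the only difference is the extra unwrapping step needed for the CrossValidator result.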
// set parameter space and evaluation method
val lr = new LogisticRegression().setMaxIter(400)
val pipeline = new Pipeline().setStages(Array(lr))
val paramGrid = new ParamGridBuilder().addGrid(lr.regParam, Array(0.1, 0.01)).addGrid(lr.fitIntercept).addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0)).build()
val cv = new CrossValidator().setEstimator(pipeline).setEvaluator(new BinaryClassificationEvaluator).setEstimatorParamMaps(paramGrid).setNumFolds(2)
// you can print lr model coefficients as below
import org.apache.spark.ml.PipelineModel
val cvModel = cv.fit(train)
val model = cvModel.bestModel.asInstanceOf[PipelineModel]
val lrModel = model.stages(0).asInstanceOf[LogisticRegressionModel]
println(s"LR Model coefficients:\n${lrModel.coefficients.toArray.mkString("\n")}")
It takes two steps: first cast bestModel to a PipelineModel, then cast stage 0 of that pipeline to a LogisticRegressionModel.
I tried this on Spark 1.6.0, but it produced the error "oas.ml.PipelineModel cannot be cast to oas.ml.classification.LogisticRegressionModel". I added an answer on how to achieve this in a similar way. Thanks! Sorry, I was on Spark 1.5.2.