How does Apache Spark compute the log loss of a trained model?


I am building an ML pipeline for logistic regression:

val lr = new LogisticRegression()
lr.setMaxIter(100).setRegParam(0.001)

val pipeline = new Pipeline().setStages(Array(geoDimEncoder,clientTypeEncoder,
               devTypeDimIdEncoder,pubClientIdEncoder,tmpltIdEncoder,
               hourEncoder,assembler,lr))

val model = pipeline.fit(trainingDF)
Now, once the model is trained, I want to see the probabilities for the training set and compute certain validation metrics such as log loss. However, I cannot find these on `model`.

The only thing I can find anywhere is:

model.transform(testDF).select(....)

How do I obtain metrics on the training set, for training-set validation?
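For reference, the trained `PipelineModel` can be applied back to the training set to expose per-row probabilities, and the underlying `LogisticRegressionModel` (which carries the training summary) can be pulled out of the pipeline's stages. This is a sketch, not a definitive answer: it assumes the logistic-regression stage is the last stage, and that the default column names `label` and `probability` are in use.

```scala
import org.apache.spark.ml.classification.LogisticRegressionModel

// Assumes `model` is the PipelineModel returned by pipeline.fit(trainingDF)
// and that the LogisticRegression stage is the last stage of the pipeline.
val lrModel = model.stages.last.asInstanceOf[LogisticRegressionModel]

// Training metrics (objective history per iteration, etc.) via the summary:
val trainingSummary = lrModel.summary

// Per-row probabilities on the training set; "label" and "probability"
// are the default column names and may differ in your setup.
model.transform(trainingDF).select("label", "probability").show(5)
```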

Please try the following approach, which should work for you:

val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.3)
  .setElasticNetParam(0.8)

val lrModel = lr.fit(data)

val trainingSummary = lrModel.summary

// Obtain the objective per iteration.
val objectiveHistory = trainingSummary.objectiveHistory
println("objectiveHistory:")
objectiveHistory.foreach(loss => println(loss))
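The objective history above is the (regularized) training loss per iteration. If you want the log loss itself, it can also be computed directly from the predicted probabilities. Below is a minimal self-contained sketch of the binary log-loss formula; the plain Scala arrays stand in for a collected `probability` column, and the clipping constant `eps` is an assumption to guard against log(0):

```scala
// Binary log loss (cross-entropy) over predicted positive-class probabilities.
object LogLoss {
  def logLoss(probs: Array[Double], labels: Array[Double], eps: Double = 1e-15): Double = {
    val total = probs.zip(labels).map { case (p, y) =>
      // Clip p into (eps, 1 - eps) so log() never sees 0 or 1 exactly.
      val pc = math.min(math.max(p, eps), 1.0 - eps)
      -(y * math.log(pc) + (1.0 - y) * math.log(1.0 - pc))
    }.sum
    total / probs.length
  }

  def main(args: Array[String]): Unit = {
    val probs  = Array(0.9, 0.1, 0.8)   // predicted P(label = 1)
    val labels = Array(1.0, 0.0, 1.0)   // true labels
    println(f"log loss = ${logLoss(probs, labels)}%.4f") // log loss ≈ 0.1446
  }
}
```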