Apache Spark linear regression scala.MatchError

When using ParamGridBuilder in Spark 1.6.1 and 2.0, I run into a scala.MatchError:

val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .addGrid(lr.fitIntercept)
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
  .build()
The error is:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 57.0 failed 1 times, most recent failure: Lost task 0.0 in stage 57.0 (TID 257, localhost): 
scala.MatchError: [280000,1.0,[2400.0,9373.0,3.0,1.0,1.0,0.0,0.0,0.0]] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)


The question is: how should I use ParamGridBuilder in this case?

The problem here is the input schema, not the ParamGridBuilder. The Price column is loaded as an integer, while LinearRegression expects a double. You can fix it by explicitly casting the column to the required type:

val houses = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load(...)
  .withColumn("price", $"price".cast("double"))

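To see why the failure surfaces as a scala.MatchError rather than a type error: Spark ML internally pattern-matches on the label value and expects a Double, so an Integer-typed column falls through every case. A minimal pure-Scala illustration of that mechanism (not Spark code; the `describe` function is hypothetical):

```scala
// scala.MatchError is thrown at runtime when a pattern match receives a
// value whose type none of the cases cover. Here only Double and Float
// labels are handled, so a boxed Integer falls through.
def describe(label: Any): String = label match {
  case d: Double => s"double label: $d"
  case f: Float  => s"float label: $f"
}

describe(280000.0)  // matches the Double case
// describe(280000) throws scala.MatchError: 280000 (of class java.lang.Integer)
```

This is the same shape as the error above: the row's label `280000` is an Integer, so the match inside LinearRegression fails mid-job.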
Thanks, I missed that in the original example since there was no comment on why Double was chosen. — You're welcome. It should really be validated against the schema instead of throwing an exception in the middle of a job; unfortunately, ML is full of small glitches like this. — Seems to work.
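Along the lines of the schema-validation point in the comments, a caller-side check can fail fast before fitting instead of deep inside a running stage. A sketch, assuming a DataFrame named `houses` with a `price` label column (both names are from this example, not a general API):

```scala
import org.apache.spark.sql.types.DoubleType

// Fail fast if the label column is not a Double, rather than hitting a
// scala.MatchError partway through the job.
val labelCol = "price"
val actualType = houses.schema(labelCol).dataType
require(
  actualType == DoubleType,
  s"Column '$labelCol' must be DoubleType for LinearRegression, got $actualType"
)
```

If the check fails, cast the column as shown in the answer above before building the pipeline.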