
Scala: Error executing an Apache Spark ML pipeline


We are using Apache Spark 1.6, Scala 2.10.5, and SBT 0.13.9.

When executing this simple pipeline:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

def buildPipeline(): Pipeline = {
    val tokenizer = new Tokenizer()
    tokenizer.setInputCol("Summary")
    tokenizer.setOutputCol("LemmatizedWords")
    val hashingTF = new HashingTF()
    hashingTF.setInputCol(tokenizer.getOutputCol)
    hashingTF.setOutputCol("RawFeatures")

    val pipeline = new Pipeline()
    pipeline.setStages(Array(tokenizer, hashingTF))
    pipeline
}
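For context, a minimal driver that would exercise this pipeline might look like the sketch below. The `Summary` column name comes from the snippet above; the DataFrame construction and app name are illustrative, not from the original post.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Spark 1.6-style setup (pre-SparkSession).
val conf = new SparkConf().setAppName("pipeline-repro").setMaster("local[2]")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// A toy DataFrame with the "Summary" column the Tokenizer expects.
val df = sc.parallelize(Seq("spark ml pipelines", "tokenize then hash"))
  .toDF("Summary")

// fit runs Tokenizer and HashingTF; the reported failure surfaces
// inside HashingTF.transform during Scala reflection.
val model = buildPipeline().fit(df)
model.transform(df).show()
```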
When executing the ML pipeline's fit method, we get the following error. Any comments on what might be happening would be helpful.

**java.lang.RuntimeException: error reading Scala signature of org.apache.spark.mllib.linalg.Vector: value linalg is not a package**

[error] org.apache.spark.ml.feature.HashingTF$$typecreator1$1.apply(HashingTF.scala:66)
[error] org.apache.spark.sql.catalyst.ScalaReflection$class.localTypeOf(ScalaReflection.scala:642)

[error] org.apache.spark.sql.catalyst.ScalaReflection$.localTypeOf(ScalaReflection.scala:30)
[error] org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:630)
[error] org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:30)
[error] org.apache.spark.sql.functions$.udf(functions.scala:2576)
[error] org.apache.spark.ml.feature.HashingTF.transform(HashingTF.scala:66)
[error] org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:297)
[error] org.apache.spark.ml.PipelineModel$$anonfun$transform$1.apply(Pipeline.scala:297)
[error] org.apache.spark.ml.PipelineModel.transform(Pipeline.scala:297)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
build.sbt

scalaVersion in ThisBuild := "2.10.5"
scalacOptions := Seq("-unchecked", "-deprecation", "-encoding", "utf8")  

val sparkV = "1.6.0"
val sprayV = "1.3.2"
val specs2V = "2.3.11"
val slf4jV = "1.7.5"
val grizzledslf4jV = "1.0.2"
val akkaV = "2.3.14"

libraryDependencies in ThisBuild ++= { Seq(
  ("org.apache.spark" %% "spark-mllib" % sparkV) % Provided,  
  ("org.apache.spark" %% "spark-core" % sparkV) % Provided, 
  "com.typesafe.akka" %% "akka-actor" % akkaV,
  "io.spray" %% "spray-can" % sprayV,
  "io.spray" %% "spray-routing" % sprayV,
  "io.spray" %% "spray-json" % sprayV, 
  "io.spray" %% "spray-testkit" % "1.3.1" % "test", 
  "org.specs2" %% "specs2-core" % specs2V % "test",
  "org.specs2" %% "specs2-mock" % specs2V % "test",
  "org.specs2" %% "specs2-junit" % specs2V % "test",
  "org.slf4j" % "slf4j-api" % slf4jV,
  "org.clapper" %% "grizzled-slf4j" % grizzledslf4jV
) }
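Note that because spark-core and spark-mllib are marked `Provided`, they are absent from the runtime classpath used by `sbt run` and `sbt console`. One common workaround (a sketch for sbt 0.13, following the pattern recommended by the sbt-assembly documentation for Spark apps) is to put provided dependencies back on the run classpath:

```scala
// build.sbt addition (sketch): include Provided deps when using `sbt run`,
// since fullClasspath in Compile still contains them.
run in Compile <<= Defaults.runTask(
  fullClasspath in Compile,
  mainClass in (Compile, run),
  runner in (Compile, run)
)
```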
You should try using

org.apache.spark.ml.linalg.Vector and

org.apache.spark.ml.linalg.Vectors

in place of what you are using now, i.e.

org.apache.spark.mllib.linalg.Vector and org.apache.spark.mllib.linalg.Vectors.

Hope this solves your problem.

Thanks for taking the time to look into this. Adding spark-sql made no difference. On the other hand, the problem does not seem to occur if the pipeline fit runs outside the context of a Future. Any thoughts on why that might be?

I don't think the Future itself is the problem; more likely it is an issue with the execution context.

Could you explain how to use this? An MCVE, perhaps? How do you run the example — could this be inside the sbt console? `Provided` dependencies are not included there.
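Regarding the observation that the failure only appears inside a Future: a plausible (unconfirmed) explanation is a context-classloader mismatch. Spark's reflection (the `ScalaReflection` frames in the stack trace) resolves types via the current thread's context classloader, and threads created by a default Future ExecutionContext may carry a different loader, which can yield errors like "error reading Scala signature ... value linalg is not a package". Below is a minimal sketch of pinning the classloader on the Future's threads; the names `pinnedEc` and `trainingData` are illustrative.

```scala
import java.util.concurrent.{Executors, ThreadFactory}
import scala.concurrent.{ExecutionContext, Future}

// Capture a classloader that can see the Spark jars (e.g. the one that
// loaded the application classes).
val sparkClassLoader = getClass.getClassLoader

// An ExecutionContext whose threads all use that classloader, so Scala
// reflection inside HashingTF.transform resolves org.apache.spark.mllib
// the same way it does on the main thread.
implicit val pinnedEc: ExecutionContext = ExecutionContext.fromExecutor(
  Executors.newFixedThreadPool(4, new ThreadFactory {
    def newThread(r: Runnable): Thread = {
      val t = new Thread(r)
      t.setDaemon(true)
      t.setContextClassLoader(sparkClassLoader) // pin the loader
      t
    }
  })
)

// Hypothetical call site: fit now runs on a thread whose context
// classloader matches the one the Spark classes were loaded with.
// val modelF = Future { buildPipeline().fit(trainingData) }
```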