Trying to compile gensort.scala, getting: [error] cannot get artifacts. IvyNode = net.java.dev.jets3t#jets3t;0.6.1


I'm new to Scala and sbt and don't know how to proceed. Am I missing more dependencies?

Steps to reproduce:

Save the gensort.scala code in ~/spark-1.3.0/project/ and start the build:

    my-server$ ~/spark-1.3.0/project/sbt
    > run

Build definition file in ~/spark-1.3.0/project/build.sbt:

    lazy val root = (project in file(".")).
      settings(
        name := "gensort",
        version := "1.0",
        scalaVersion := "2.11.6"
      )

libraryDependencies ++= Seq(
     "org.apache.spark" % "spark-examples_2.10" % "1.1.1",
     "org.apache.spark" % "spark-core_2.11" % "1.3.0",
     "org.apache.spark" % "spark-streaming-mqtt_2.11" % "1.3.0",
     "org.apache.spark" % "spark-streaming_2.11" % "1.3.0",
     "org.apache.spark" % "spark-network-common_2.10" % "1.2.0",
     "org.apache.spark" % "spark-network-shuffle_2.10" % "1.3.0",
     "org.apache.hadoop" % "hadoop-core" % "1.2.1"
)

Any insight on how to move forward is greatly appreciated. Thanks - Dennis

You should not mix 2.10 and 2.11; they are not binary compatible. Your libraryDependencies should look like this:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-examples" % "1.1.1",
  "org.apache.spark" %% "spark-core" % "1.3.0",
  "org.apache.spark" %% "spark-streaming-mqtt" % "1.3.0",
  "org.apache.spark" %% "spark-streaming" % "1.3.0",
  "org.apache.spark" %% "spark-network-common" % "1.2.0",
  "org.apache.spark" %% "spark-network-shuffle" % "1.3.0",
  "org.apache.hadoop" % "hadoop-core" % "1.2.1"
)
%% means the Scala version is appended as a suffix to the library id. After making this change I got an error because one dependency could not be found; it is located here:
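To illustrate what %% does: with scalaVersion := "2.10.5", the two declarations below resolve to the same artifact, spark-core_2.10 (a sketch for illustration, not part of the original build file):

```scala
// Equivalent dependency declarations when scalaVersion := "2.10.5":
"org.apache.spark" %% "spark-core" % "1.3.0"        // %% appends the binary Scala version for you
"org.apache.spark" %  "spark-core_2.10" % "1.3.0"   // suffix written out by hand
```

Using %% lets you change scalaVersion in one place without touching every dependency line.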

resolvers += "poho" at "https://repo.eclipse.org/content/repositories/paho-releases"
Nevertheless, spark-examples does not seem to be available for Scala 2.11. Changing the scalaVersion to

    scalaVersion := "2.10.5"

resolved all dependency issues and the compilation succeeded.

Worked perfectly, thanks for your expertise - Dennis

@DennisIgnacio If an answer solved your question, you should accept it, to show others that you got the answer you were looking for.
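Putting the answer's fixes together, the complete build.sbt would look roughly like this (a sketch: it keeps the versions from the question, including spark-examples 1.1.1, and assumes the Paho resolver shown above):

```scala
// ~/spark-1.3.0/project/build.sbt
// Combined sketch: Scala 2.10.5, %% dependencies, and the resolver
// needed for the spark-streaming-mqtt (Paho) dependency.
lazy val root = (project in file(".")).
  settings(
    name := "gensort",
    version := "1.0",
    scalaVersion := "2.10.5"
  )

resolvers += "poho" at "https://repo.eclipse.org/content/repositories/paho-releases"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-examples" % "1.1.1",
  "org.apache.spark" %% "spark-core" % "1.3.0",
  "org.apache.spark" %% "spark-streaming-mqtt" % "1.3.0",
  "org.apache.spark" %% "spark-streaming" % "1.3.0",
  "org.apache.spark" %% "spark-network-common" % "1.2.0",
  "org.apache.spark" %% "spark-network-shuffle" % "1.3.0",
  "org.apache.hadoop" % "hadoop-core" % "1.2.1"
)
```

With %% everywhere, all Spark artifacts now resolve against the same Scala 2.10 binary version, which avoids the binary-incompatibility mix that caused the original error.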