Scala: error when running sbt assembly: sbt deduplicate errors
I'm facing the problem described in the post below, and the suggested answers did not help. My build.sbt file contains:
name := "Simple"
version := "0.1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
"org.twitter4j" % "twitter4j-stream" % "3.0.3"
)
//libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.2"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "1.0.2"
libraryDependencies += "org.apache.spark" %% "spark-streaming-twitter" % "1.0.2"
libraryDependencies += "com.github.nscala-time" %% "nscala-time" % "0.4.2"
libraryDependencies ++= Seq(
("org.apache.spark"%%"spark-core"%"1.0.2").
exclude("org.eclipse.jetty.orbit", "javax.servlet").
exclude("org.eclipse.jetty.orbit", "javax.transaction").
exclude("org.eclipse.jetty.orbit", "javax.mail").
exclude("org.eclipse.jetty.orbit", "javax.activation").
exclude("commons-beanutils", "commons-beanutils-core").
exclude("commons-collections", "commons-collections").
exclude("com.esotericsoftware.minlog", "minlog")
)
resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
{
case PathList("javax", "servlet", xs @ _*) => MergeStrategy.first
case PathList("javax", "transaction", xs @ _*) => MergeStrategy.first
case PathList("javax", "mail", xs @ _*) => MergeStrategy.first
case PathList("javax", "activation", xs @ _*) => MergeStrategy.first
case PathList(ps @ _*) if ps.last endsWith ".html" => MergeStrategy.first
case "application.conf" => MergeStrategy.concat
case "unwanted.txt" => MergeStrategy.discard
case x => old(x)
}
}
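As a side note, the `mergeStrategy in assembly <<= ...` form above is the old sbt 0.13-era syntax. On later sbt-assembly releases (roughly 0.14+), the same settings would look something like the sketch below; this is a hedged rewrite, assuming the newer `assemblyMergeStrategy` key and the `:=`/`.value` style, not something from the original post:

```scala
// Equivalent merge strategy on newer sbt-assembly, where the
// assemblyMergeStrategy key replaces `mergeStrategy in assembly`.
assemblyMergeStrategy in assembly := {
  case PathList("javax", "servlet", xs @ _*)          => MergeStrategy.first
  case PathList("javax", "transaction", xs @ _*)      => MergeStrategy.first
  case PathList("javax", "mail", xs @ _*)             => MergeStrategy.first
  case PathList("javax", "activation", xs @ _*)       => MergeStrategy.first
  case PathList(ps @ _*) if ps.last.endsWith(".html") => MergeStrategy.first
  case "application.conf" => MergeStrategy.concat
  case "unwanted.txt"     => MergeStrategy.discard
  case x =>
    // fall back to the plugin's default strategy for everything else
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
```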
Any suggestions on how to resolve the above?

If you plan to run your program on Spark, then I strongly recommend adding all the Spark dependencies as provided, so that they are excluded from the assembly task:
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.0.2" % "provided",
"org.apache.spark" %% "spark-streaming" % "1.0.2" % "provided",
"org.apache.spark" %% "spark-streaming-twitter" % "1.0.2" % "provided")
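One caveat with provided scope: those jars also drop off the classpath for `sbt run`, so the application will no longer launch locally. A common workaround (a sketch, assuming a reasonably recent sbt; older versions used `<<=` instead of `:=`/`.evaluated`) is to put the full compile classpath back for the run task only:

```scala
// Re-include "provided" dependencies on the classpath for `sbt run`;
// the assembly task still excludes them from the fat jar.
run in Compile := Defaults.runTask(
  fullClasspath in Compile,
  mainClass in (Compile, run),
  runner in (Compile, run)
).evaluated
```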
Otherwise, you need to remove those jars from the classpath, or add an appropriate line to your mergeStrategy; in your case it would probably be something like:
case PathList("META-INF", "ECLIPSEF.RSA") => MergeStrategy.first
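Signed-jar metadata under META-INF (`.RSA`, `.SF`, `.DSA` files) cannot be merged meaningfully, and a fat jar cannot satisfy the original signatures anyway, so discarding them is arguably safer than keeping an arbitrary first copy. A sketch of how those cases could slot into the question's existing setting (old `<<=` syntax kept to match; the `discard` choice is my variation, not the answerer's):

```scala
// Drop jar-signing metadata rather than keep one arbitrary copy.
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case PathList("META-INF", "ECLIPSEF.RSA") => MergeStrategy.discard
    case PathList(ps @ _*) if ps.last.endsWith(".SF") || ps.last.endsWith(".DSA") =>
      MergeStrategy.discard
    case x => old(x)
  }
}
```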
If you still want assembly to handle the Spark dependencies, the plugin should help. Note also that the other Spark dependencies, such as spark-streaming and spark-streaming-twitter, will probably need exclude directives as well.

So, to get rid of the annoying deduplicate messages, I didn't bother with excludes; they didn't seem to help me. I copied and pasted defaultMergeStrategy from the sbt-assembly code and just changed the lines that said deduplicate to first. I also had to add a catch-all at the end that falls back to first. Honestly, I have no idea what this implies or why it makes the annoying messages go away... I don't have time to get a PhD in sbt; I just want my code to build! So the merge strategy becomes:
mergeStrategy in assembly <<= (mergeStrategy in assembly) ((old) => {
case x if Assembly.isConfigFile(x) =>
MergeStrategy.concat
case PathList(ps @ _*) if Assembly.isReadme(ps.last) || Assembly.isLicenseFile(ps.last) =>
MergeStrategy.rename
case PathList("META-INF", xs @ _*) =>
(xs map {_.toLowerCase}) match {
case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) =>
MergeStrategy.discard
case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") =>
MergeStrategy.discard
case "plexus" :: xs =>
MergeStrategy.discard
case "services" :: xs =>
MergeStrategy.filterDistinctLines
case ("spring.schemas" :: Nil) | ("spring.handlers" :: Nil) =>
MergeStrategy.filterDistinctLines
case _ => MergeStrategy.first // Changed deduplicate to first
}
case PathList(_*) => MergeStrategy.first // added this line
})
Can you elaborate on "if you plan to run your program on Spark, then I strongly recommend adding all the Spark dependencies as provided"? How do I add them?

@Siva It means that if Spark is already running, those jars are available when you deploy your job, so there is no need to ship them with your application. See my updated answer.

When I try to add the merge strategy above, I get another error:
[error] C:\Users\xxx\.ivy2\cache\com.esotericsoftware.kryo\kryo\bundles\kryo-2.21.jar:com/esotericsoftware/minlog/Log$Logger.class
[error] C:\Users\xxx\.ivy2\cache\com.esotericsoftware.minlog\minlog\jars\minlog-1.2.jar:com/esotericsoftware/minlog/Log$Logger.class

The problem is that this defeats the whole point of assembly... namely, building a fat jar so that you don't have to worry about the classpath. Does this actually work? I tried it, and the fat jar came out suspiciously small.
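Regarding the kryo/minlog collision in that last error: kryo bundles its own copy of the minlog classes, so they clash with the standalone minlog jar during assembly. Following the first answer's note that the other Spark artifacts may need the same treatment, one option is to repeat the minlog exclude on the streaming dependencies too. A hedged sketch, untested against these exact versions, and meant to replace the plain `libraryDependencies +=` declarations of these artifacts rather than sit alongside them:

```scala
// Apply the same exclude used for spark-core to the streaming artifacts,
// so only one copy of the conflicting minlog classes remains.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-streaming" % "1.0.2").
    exclude("com.esotericsoftware.minlog", "minlog"),
  ("org.apache.spark" %% "spark-streaming-twitter" % "1.0.2").
    exclude("com.esotericsoftware.minlog", "minlog")
)
```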