
Error when saving data from Spark saveToEs to Elasticsearch


I am trying to save the output of an RDD into Elasticsearch, but when I try to send it I get an error, even though I have included several elasticsearch-spark libraries. I am new to Elasticsearch, so any help would be appreciated. Thanks.

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object ElasticSpark {

def main(args: Array[String]) {

val logfile = "/Users/folder/Desktop/logfile.rtf";
val conf = new SparkConf().setMaster("local[1]").setAppName("RddTest");   // set master can be given any cpu cores as local[*], spark clustr, mesos,
conf.set("es.index.auto.create", "true")
val sc = new SparkContext(conf);

val logdata = sc.textFile(logfile); // number of partitions
val NumA = logdata.filter(line=>line.contains("a")).count();
val wordcount = logdata.flatMap(line=>line.split(" ")).map(word=>(word,1)).reduceByKey((a, b)=> a+ b);

println(wordcount.collect()); // doubt
wordcount.saveAsTextFile("/Users/folder/Desktop/sample") // success
wordcount.saveToEs("spark/docs")

}
}
Error:
ES support is not part of the Spark distribution; it comes from elasticsearch-hadoop, so you need to provide that dependency yourself. If you use Maven, add the following to pom.xml:

<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch-hadoop</artifactId>
  <version>2.2.0</version>
</dependency>
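
Once the connector is on the classpath, the write from the question should go through. As a point of reference, here is a minimal, self-contained sketch; the es.nodes and es.port settings are assumptions for a local Elasticsearch node on the default HTTP port, and converting each pair to a Map is just one way to get named fields instead of _1/_2 in the documents:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object ElasticSparkSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[1]")
      .setAppName("EsWriteSketch")
      .set("es.index.auto.create", "true")
      .set("es.nodes", "127.0.0.1") // assumption: Elasticsearch runs locally
      .set("es.port", "9200")       // assumption: default HTTP port
    val sc = new SparkContext(conf)

    val wordcount = sc.textFile("/Users/folder/Desktop/logfile.rtf")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)

    // Write each (word, count) pair as a document with named fields.
    wordcount
      .map { case (word, count) => Map("word" -> word, "count" -> count) }
      .saveToEs("spark/docs")

    sc.stop()
  }
}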

When I include it in my sbt build, I get this error while importing the project:
...
[error] unresolved dependency: clj-stacktrace#clj-stacktrace;0.2.2: not found
[error] unresolved dependency: ring#ring-jetty-adapter;0.3.11: not found
[error] unresolved dependency: ring#ring-servlet;0.3.11: not found
[error] unresolved dependency: com.twitter#carbonite;1.4.0: not found
[error] unresolved dependency: cascading#cascading-hadoop;2.6.3: not found
[error] unresolved dependency: org.pentaho#pentaho-aggdesigner-algorithm;5.1.5-jhyde: not found

You are right, extra resolvers have to be added in build.sbt. I have edited my answer. By the way, I did not find this mentioned in the Elastic documentation, so I simply took the repositories from the elasticsearch-hadoop Gradle project. It builds now, tested.
If you build with sbt instead, add the dependency and the extra resolvers to build.sbt:
libraryDependencies += "org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0" % "compile"
resolvers ++= Seq("clojars" at "https://clojars.org/repo",
                  "conjars" at "http://conjars.org/repo",
                  "plugins" at "http://repo.spring.io/plugins-release",
                  "sonatype" at "http://oss.sonatype.org/content/groups/public/")