
Apache Spark to AWS Elasticsearch Service


I'm running Spark on my local machine, and I have Elasticsearch up and running in the AWS Elasticsearch Service. I'm trying to follow the documentation specified here:

The elasticsearch-spark version I'm using is

        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch-spark-20_2.10</artifactId>
            <version>7.5.0</version>
        </dependency>
I tried to debug what exactly the problem is. This is what I found at line 745 of
org.elasticsearch.hadoop.rest.RestClient.java:

Map<String, Object> result = get("", null);

I'm not sure why they set the URI in the get method to an empty string. Any help would be appreciated.

I had the same problem, and your post helped me solve mine. One difference in my case was the "es.nodes" field; adding "https://" in front of the endpoint worked for me.
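A minimal sketch of that fix, assuming a placeholder endpoint. The keys are the standard es-hadoop connector settings (`es.nodes`, `es.port`, `es.nodes.wan.only`), which are typically passed to the SparkConf via `conf.set(key, value)` before calling saveToEs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EsConfigSketch {

    // Prepend "https://" when the endpoint lacks a scheme, as described above.
    static String withScheme(String endpoint) {
        return endpoint.matches("^https?://.*") ? endpoint : "https://" + endpoint;
    }

    // es-hadoop settings for a hosted cluster; the endpoint here is a placeholder.
    static Map<String, String> esSettings(String endpoint) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("es.nodes", withScheme(endpoint));
        conf.put("es.port", "443");
        // Required when targeting a WAN/cloud instance such as the
        // AWS Elasticsearch Service, per the error in the stack trace.
        conf.put("es.nodes.wan.only", "true");
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(esSettings("search-example.us-west-1.es.amazonaws.com"));
    }
}
```

With `es.nodes.wan.only` enabled, the connector talks only to the declared endpoint instead of trying to discover and reach individual data nodes, which a hosted cluster does not expose.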
import static org.elasticsearch.spark.streaming.api.java.JavaEsSparkStreaming.saveToEs;

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Map;

import org.apache.spark.streaming.api.java.JavaDStream;

public class ElasticSearchManager {

    public static void sendToEs(JavaDStream<Map<String, Object>> javaDStream) {
        ZonedDateTime dateTime = LocalDateTime.now().atZone(ZoneId.systemDefault());
        // "yyyy" is the calendar year; "YYYY" is the week-based year
        // and produces a wrong daily index name around New Year.
        saveToEs(javaDStream,
                dateTime.format(DateTimeFormatter.ofPattern("yyyy-MM-dd")));
    }
}
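A detail worth noting for date-based index names like the one above: in DateTimeFormatter, "YYYY" is the week-based year, not the calendar year ("yyyy"), and the two diverge around New Year. A quick check:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class YearPatternDemo {
    public static void main(String[] args) {
        // Dec 31, 2019 already belongs to week 1 of week-based year 2020.
        LocalDate d = LocalDate.of(2019, 12, 31);
        System.out.println(d.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"))); // 2019-12-31
        System.out.println(d.format(DateTimeFormatter.ofPattern("YYYY-MM-dd"))); // 2020-12-31
    }
}
```

With "YYYY" the document would land in a 2020-12-31 index even though the event date is in 2019, so "yyyy" is the safe choice for index names.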

org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:340)
    at org.elasticsearch.spark.rdd.EsSpark$.doSaveToEs(EsSpark.scala:104)
    at org.elasticsearch.spark.streaming.EsSparkStreaming$$anonfun$doSaveToEs$1.apply(EsSparkStreaming.scala:71)
    at org.elasticsearch.spark.streaming.EsSparkStreaming$$anonfun$doSaveToEs$1.apply(EsSparkStreaming.scala:71)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2(DStream.scala:628)
    at org.apache.spark.streaming.dstream.DStream.$anonfun$foreachRDD$2$adapted(DStream.scala:628)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$2(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream.$anonfun$generateJob$1(ForEachDStream.scala:51)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.Try$.apply(Try.scala:213)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.$anonfun$run$1(JobScheduler.scala:257)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:257)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: [GET] on [] failed; server[search-************.us-west-1.es.amazonaws.com:443] returned [400|Bad Request:]
    at org.elasticsearch.hadoop.rest.RestClient.checkResponse(RestClient.java:477)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:434)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:428)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:388)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:392)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:168)
    at org.elasticsearch.hadoop.rest.RestClient.mainInfo(RestClient.java:745)
    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverClusterInfo(InitializationUtils.java:330)
    ... 19 more