
Java ElasticSearch docker "HTTP/1.1 429 Too Many Requests" after 360k documents


I am trying out ElasticSearch 7.9 and want to run a benchmark on one million documents. I am using the "single-node" docker image.

I index the documents with the high-level Java client, using BulkRequest. After 360k requests I always get a Too Many Requests exception, even though I added a sleep(1000) after every 10k documents.

I tried increasing the memory in jvm.options from 1G to 8G, but that made no difference.
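
(For reference, with the official image the heap is normally set through the ES_JAVA_OPTS environment variable rather than by editing jvm.options inside the container, so an in-container file edit may silently not take effect. A minimal sketch of starting the single-node image with an 8G heap; the image tag is an assumption:

docker run -d -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms8g -Xmx8g" \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.0)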

Is there an option to increase the number of requests that are accepted?
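
(For reference, the limit in the error below, max_coordinating_and_primary_bytes=107374182 bytes, is exactly 10% of a 1G heap, which is the default of the indexing_pressure.memory.limit node setting introduced in 7.9. It can in principle be raised, e.g. in elasticsearch.yml, though as the answer below shows that would only treat the symptom here:

indexing_pressure.memory.limit: 20%)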

My laptop has 4 cores and 16GB of RAM, and docker is running without any resource limits.

Error details:

{"error":{"root_cause":[{"type":"es_rejected_execution_exception","reason":"rejected execution of coordinating operation [coordinating_and_primary_bytes=0, replica_bytes=0, all_bytes=0, coordinating_operation_bytes=108400734, max_coordinating_and_primary_bytes=107374182]"}],"type":"es_rejected_execution_exception","reason":"rejected execution of coordinating operation [coordinating_and_primary_bytes=0, replica_bytes=0, all_bytes=0, coordinating_operation_bytes=108400734, max_coordinating_and_primary_bytes=107374182]"},"status":429}
Indexing code:
CreateIndexRequest createIndexRequest = new CreateIndexRequest(index);
createIndexRequest.mapping(
        "{\n" +
        "  \"properties\": {\n" +
        "    \"category\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    },\n" +
        "    \"title\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    },\n" +
        "    \"naam\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    }\n" +
        "  }\n" +
        "}",
        XContentType.JSON);
CreateIndexResponse createIndexResponse = client.indices().create(createIndexRequest, RequestOptions.DEFAULT);

for (int b = 0; b < 100; b++) {
    List<Book> bookList = new ArrayList<>();
    for (int i = 0; i < 10_000; i++) {
        int item = b * 100_000 + i;
        bookList.add(new Book("" + item,
                item % 2 == 0 ? "aap" : "banaan",
                item % 4 == 0 ? "naam1" : "naam2",
                "Rob" + item,
                "The great start" + item / 100,
                item));
    }
    bookList.forEach(book -> {
        IndexRequest indexRequest = new IndexRequest()
                .source(objectMapper.convertValue(book, Map.class)).index(index).id(book.id());
        bulkRequest.add(indexRequest);
    });
    System.out.println("Ok, batch: " + b);
    bulkRequest.timeout(TimeValue.timeValueSeconds(20));
    try {
        Thread.sleep(1_000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    try {
        client.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println("Ok2");
    } catch (IOException e) {
        e.printStackTrace();
        // System.out.println(objectMapper.convertValue(book, Map.class));
    }
}

OK, I found it. I just kept adding requests to the BulkRequest instead of clearing it, so every bulk call re-sent all the earlier batches and the request body kept growing until it hit the limit in the error above.

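
For completeness, a minimal sketch of the fix that diagnosis implies: create a fresh BulkRequest for each batch so the request body stops growing. client, objectMapper, index and Book are the names from the snippet above, and the surrounding method is assumed to declare throws IOException.

for (int b = 0; b < 100; b++) {
    // New request per batch; before, one instance kept accumulating every earlier batch.
    BulkRequest bulkRequest = new BulkRequest();
    bulkRequest.timeout(TimeValue.timeValueSeconds(20));
    for (int i = 0; i < 10_000; i++) {
        int item = b * 100_000 + i;
        Book book = new Book("" + item,
                item % 2 == 0 ? "aap" : "banaan",
                item % 4 == 0 ? "naam1" : "naam2",
                "Rob" + item,
                "The great start" + item / 100,
                item);
        bulkRequest.add(new IndexRequest()
                .source(objectMapper.convertValue(book, Map.class))
                .index(index)
                .id(book.id()));
    }
    BulkResponse response = client.bulk(bulkRequest, RequestOptions.DEFAULT);
    System.out.println("Batch " + b + " done, failures: " + response.hasFailures());
}

With one request per 10k documents the sleep(1000) calls should no longer be needed; the high-level client also provides a BulkProcessor that batches, flushes and retries with backoff automatically.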