
Performance: Elasticsearch indexing with BulkRequestBuilder slows down


Hi, Elasticsearch masters,

I have millions of records to index through the Elasticsearch Java API. The cluster has three nodes (1 master node + 2 data nodes).

Here is my code snippet:

Settings settings = ImmutableSettings.settingsBuilder()
     .put("cluster.name", "MyClusterName").build();

TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300; 
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));

BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";

while((readLine = br.readLine()) != null){

    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id)
        .setSource(json));
    bulkBuilderLength++;
    if(bulkBuilderLength % 1000 == 0){
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if(bulkRes.hasFailures()){
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
    }
}

br.close();

if(bulkBuilder.numberOfActions() > 0){
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if(bulkRes.hasFailures()){
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
    bulkBuilder = client.prepareBulk();
}
It works fine at first, but after a few thousand documents the performance drops sharply.

I have already tried setting refresh_interval to -1 and number_of_replicas to 0, but the performance degradation is exactly the same.
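For reference, those two settings can be applied per index through the update-settings REST endpoint (host and index name below are the placeholders from the snippet above); this is only a config change, and refresh/replicas should be restored once the bulk load is finished:

```shell
# Disable refresh and replicas for the duration of the bulk load
curl -XPUT 'http://myhost:9200/my_index_name/_settings' -d '{
  "index": { "refresh_interval": "-1", "number_of_replicas": 0 }
}'

# Restore them once indexing is finished
curl -XPUT 'http://myhost:9200/my_index_name/_settings' -d '{
  "index": { "refresh_interval": "1s", "number_of_replicas": 1 }
}'
```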

When I monitor the cluster with bigdesk, the GC count reaches 1 per second, as shown in the screenshot below.

Can anyone help me?

Thanks in advance.

=====================================================================================

Finally, I solved this problem. (See the answer.)

The cause was that I did not re-create a new BulkRequestBuilder after each bulk execution. After changing the code snippet as below, the performance never degraded.

Thank you all.

Settings settings = ImmutableSettings.settingsBuilder()
     .put("cluster.name", "MyClusterName").build();

TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300; 
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));

BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";

while((readLine = br.readLine()) != null){

    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id)
        .setSource(json));
    bulkBuilderLength++;
    if(bulkBuilderLength % 1000 == 0){
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if(bulkRes.hasFailures()){
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
        bulkBuilder = client.prepareBulk();  // This line is my mistake and the solution !!!
    }
}

br.close();

if(bulkBuilder.numberOfActions() > 0){
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if(bulkRes.hasFailures()){
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
    bulkBuilder = client.prepareBulk();
}

The problem here is that after the bulk executes, a new bulk request is never re-created.

That means you keep re-indexing the same first documents over and over again.
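The growth is easy to see with a stand-in for the builder. Purely for illustration (a plain list replaces BulkRequestBuilder here), if the accumulated actions are never cleared after a flush, every "bulk execute" resends everything collected so far:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a plain list stands in for BulkRequestBuilder to show
// how actions pile up when the builder is never re-created after execute().
public class BulkGrowthDemo {

    // Returns the number of actions sent by each "bulk execute" when the
    // builder is NOT reset between flushes.
    static List<Integer> flushedSizes(int totalDocs, int batchSize) {
        List<String> bulkBuilder = new ArrayList<>();   // stand-in for client.prepareBulk()
        List<Integer> flushed = new ArrayList<>();
        for (int doc = 1; doc <= totalDocs; doc++) {
            bulkBuilder.add("doc-" + doc);              // stand-in for bulkBuilder.add(...)
            if (doc % batchSize == 0) {
                flushed.add(bulkBuilder.size());        // execute() sends EVERY action still held
                // Missing reset -- the fix is the equivalent of:
                // bulkBuilder = new ArrayList<>();     // i.e. client.prepareBulk()
            }
        }
        return flushed;
    }

    public static void main(String[] args) {
        // Each flush re-sends all earlier documents plus the new batch:
        System.out.println(flushedSizes(5000, 1000));   // [1000, 2000, 3000, 4000, 5000]
    }
}
```

So the payload of each bulk request grows linearly (1000, 2000, 3000, ...), which matches both the slowdown and the mounting GC pressure observed in the question.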


By the way, have a look at the BulkProcessor class. It is definitely better to use.

Have a look at how to use BulkProcessor.

Could you add some text explaining why BulkProcessor is definitely better to use?

Basically because it automates everything for you; as a developer you write less code.

Thanks. Do you know whether one of them runs faster than the other?

BulkProcessor is a layer on top of the bulk API, so the throughput is the same.
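For completeness, a hedged sketch of how the same loop could look with BulkProcessor (Elasticsearch 1.x transport-client API; `client`, `logger`, `br`, and `somefunction` come from the snippets above, and this fragment needs a live cluster, so it is not runnable standalone):

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;

// BulkProcessor batches, flushes, and re-creates the bulk request internally,
// so the manual counter and reset logic from the question disappear.
BulkProcessor bulkProcessor = BulkProcessor.builder(client,
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) { }

            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                if (response.hasFailures()) {
                    logger.error("##### Bulk Request failure with error: "
                        + response.buildFailureMessage());
                }
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                logger.error("##### Bulk Request failure", failure);
            }
        })
        .setBulkActions(1000)   // flush every 1000 actions, like the manual loop
        .build();

while ((readLine = br.readLine()) != null) {
    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkProcessor.add(new IndexRequest(index, type, id).source(json));
}
br.close();
bulkProcessor.close();  // flushes any remaining actions
```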