
Elasticsearch indexing is very slow


No matter what I do, I cannot get the indexing rate above 10,000 events per second. Each Logstash instance receives 13,000 events per second from Kafka. I run 3 Logstash instances on different machines, all reading from the same Kafka topic.

I have set up an ELK cluster in which 3 Logstash instances read from Kafka and send the data to my Elastic cluster.

My cluster consists of 3 Logstash instances, 3 Elastic master nodes, 3 Elastic client nodes, and 50 Elastic data nodes.

Logstash 2.0.4
Elastic Search 5.0.2
Kibana 5.0.2
All are Citrix VMs with the same configuration:

Red Hat Linux 7
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 6 cores
32 GB RAM
2 TB spinning disk

Logstash configuration file:

 output {
    elasticsearch {
      hosts => ["dataNode1:9200","dataNode2:9200","dataNode3:9200", ... ,"dataNode50:9200"]
      index => "logstash-applogs-%{+YYYY.MM.dd}-1"
      workers => 6
      user => "uname"
      password => "pwd"
    }
}
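The host list above is abbreviated; it spans all 50 data nodes. A minimal sketch of generating such a list programmatically (the `dataNode` naming pattern is taken from the config above; actual hostnames may differ):

```python
# Expand the abbreviated host list ("dataNode1:9200" up to "dataNode50:9200").
# Hostname pattern assumed from the config snippet above.
hosts = ["dataNode%d:9200" % i for i in range(1, 51)]
print(len(hosts))   # 50
print(hosts[0])     # dataNode1:9200
print(hosts[-1])    # dataNode50:9200
```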
The elasticsearch.yml file on the Elasticsearch data nodes:

 cluster.name: my-cluster-name
 node.name: node46-data-46
 node.master: false
 node.data: true
 bootstrap.memory_lock: true
 path.data: /apps/dataES1/data
 path.logs: /apps/dataES1/logs
 discovery.zen.ping.unicast.hosts: ["master1","master2","master3"]
 network.host: hostname
 http.port: 9200

The only change that I made in my **jvm.options** file is

-Xms15g
-Xmx15g
The system configuration changes I made are as follows:

vm.max_map_count=262144

In /etc/security/limits.conf, I added:

elastic       soft    nofile          65536
elastic       hard    nofile          65536
elastic       soft    memlock         unlimited
elastic       hard    memlock         unlimited
elastic       soft    nproc     65536
elastic       hard    nproc     unlimited
Indexing rate:

On one of the active data nodes:

$ sudo iotop -o

Index details:

index                         shard prirep state       docs  store
logstash-applogs-2017.01.23-3 11    r      STARTED 30528186   35gb
logstash-applogs-2017.01.23-3 11    p      STARTED 30528186 30.3gb
logstash-applogs-2017.01.23-3 9     p      STARTED 30530585 35.2gb
logstash-applogs-2017.01.23-3 9     r      STARTED 30530585 30.5gb
logstash-applogs-2017.01.23-3 1     r      STARTED 30526639 30.4gb
logstash-applogs-2017.01.23-3 1     p      STARTED 30526668 30.5gb
logstash-applogs-2017.01.23-3 14    p      STARTED 30539209 35.5gb
logstash-applogs-2017.01.23-3 14    r      STARTED 30539209   35gb
logstash-applogs-2017.01.23-3 12    p      STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 12    r      STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 15    p      STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 15    r      STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 19    r      STARTED 30533725 35.3gb
logstash-applogs-2017.01.23-3 19    p      STARTED 30533725 36.4gb
logstash-applogs-2017.01.23-3 18    r      STARTED 30525190 30.2gb
logstash-applogs-2017.01.23-3 18    p      STARTED 30525190 30.3gb
logstash-applogs-2017.01.23-3 8     p      STARTED 30526785 35.8gb
logstash-applogs-2017.01.23-3 8     r      STARTED 30526785 35.3gb
logstash-applogs-2017.01.23-3 3     p      STARTED 30526960 30.4gb
logstash-applogs-2017.01.23-3 3     r      STARTED 30526960 30.2gb
logstash-applogs-2017.01.23-3 5     p      STARTED 30522469 35.3gb
logstash-applogs-2017.01.23-3 5     r      STARTED 30522469 30.8gb
logstash-applogs-2017.01.23-3 6     p      STARTED 30539580 30.9gb
logstash-applogs-2017.01.23-3 6     r      STARTED 30539580 30.3gb
logstash-applogs-2017.01.23-3 7     p      STARTED 30535488 30.3gb
logstash-applogs-2017.01.23-3 7     r      STARTED 30535488 30.4gb
logstash-applogs-2017.01.23-3 2     p      STARTED 30524575 35.2gb
logstash-applogs-2017.01.23-3 2     r      STARTED 30524575 35.3gb
logstash-applogs-2017.01.23-3 10    p      STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 10    r      STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 16    p      STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 16    r      STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 4     r      STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 4     p      STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 17    r      STARTED 30528132 30.2gb
logstash-applogs-2017.01.23-3 17    p      STARTED 30528132 30.4gb
logstash-applogs-2017.01.23-3 13    r      STARTED 30521873 30.3gb
logstash-applogs-2017.01.23-3 13    p      STARTED 30521873 30.4gb
logstash-applogs-2017.01.23-3 0     r      STARTED 30520172 30.4gb
logstash-applogs-2017.01.23-3 0     p      STARTED 30520172 30.5gb
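A rough aggregation of the shard listing above: the index has 20 primary shards (numbered 0-19), each holding roughly 30.5 million documents and 30-36 GB on disk. A sketch of the totals (shard averages approximated from the listing):

```python
# Rough totals from the _cat/shards listing above.
# Per-shard figures are approximate averages read off the table.
primaries = 20
docs_per_shard = 30_528_000   # ~30.5M docs per primary shard
gb_per_shard = 32             # ~30-36 GB store per shard
total_docs = primaries * docs_per_shard
total_gb = primaries * gb_per_shard
print(total_docs)   # ~610 million docs in this one daily index
print(total_gb)     # ~640 GB of primary storage (replicas roughly double it)
```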
I tested the incoming data rate in Logstash by dumping the events to a file: in 30 seconds I got a 290 MB file containing 377,822 lines. So Kafka is not the problem, because at any given time my 3 Logstash servers together receive about 35,000 events per second, yet my Elasticsearch cluster can index at most 10,000 events per second.
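The numbers in that test can be checked with a quick back-of-envelope calculation (using only the figures above: 377,822 lines, 290 MB, 30 seconds, 3 Logstash instances):

```python
# Sanity-check the file-dump measurement: 377,822 events / 290 MB in 30 s on one Logstash.
lines, seconds, megabytes = 377_822, 30, 290
per_logstash = lines / seconds                   # events/s on one instance
total = 3 * per_logstash                         # combined rate across 3 instances
avg_event_bytes = megabytes * 1024**2 / lines    # average event size
print(round(per_logstash))     # ~12594 events/s per Logstash
print(round(total))            # ~37782 events/s in total (the ~35k claimed above)
print(round(avg_event_bytes))  # ~805 bytes per event
```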

Can someone help me resolve this issue?


Edit: I tried sending bulk requests with sizes of 125 (the default), 500, 1000, and 10000, but still saw no improvement in indexing speed.

I improved the indexing speed by moving the data nodes to bigger machines.

Data nodes: VMware VMs with the following configuration:

14 CPU @ 2.60GHz
64 GB RAM, 31 GB dedicated to Elasticsearch.
The fastest disk available to me was SAN over Fibre Channel, since I could not get any SSDs or local disks.


With this setup I achieved a maximum indexing rate of 100,000 events per second. Each document is about 2 to 5 KB in size.
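That rate implies a substantial raw write volume, which is worth keeping in mind when sizing network and disks. A rough estimate, assuming only the 2-5 KB document sizes stated above:

```python
# Throughput implied by 100,000 events/s at 2-5 KB per document,
# before replication (replicas roughly double the write volume).
events = 100_000
for kb in (2, 5):
    mib_per_s = events * kb / 1024               # MiB/s
    gbit_per_s = events * kb * 1024 * 8 / 1e9    # Gbit/s over the wire
    print(kb, round(mib_per_s), round(gbit_per_s, 1))
```

At the low end this is roughly 195 MiB/s (~1.6 Gbit/s) and at the high end roughly 488 MiB/s (~4.1 Gbit/s), which would saturate a 1 Gbit NIC or a single spinning disk many times over.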

What is the network interface card speed on your Logstash machines and data nodes? Based on the numbers you provided, it looks like you max out at ~10 Mbps. Can you profile the network usage on your Logstash machines?

@Val I tested the bandwidth between a Logstash machine and one of the data nodes using iperf3, and it was 2.70 Gbits/sec. So I don't think the network is the bottleneck here.

Good, because that is often overlooked. Also note that if you are using Logstash 5, the `workers` setting in the `elasticsearch` output is deprecated and you should use pipeline workers instead (i.e. `-w 6` on the command line).

I am using Logstash 2.4 and Elastic 5.0.2.