
Poor Elasticsearch indexing performance


Currently I am using Elasticsearch to store and query some logs. We have set up a five-node Elasticsearch cluster: two indexing nodes and three query nodes. On the indexing side, Redis, Logstash, and Elasticsearch all run together on each of the two servers. Elasticsearch uses NFS storage as its data store. Our requirement is to index 300 log entries per second, but the best rate I can get out of Elasticsearch is only about 25 log entries per second! Elasticsearch's Xmx is 16 GB. Component versions:

Redis: 2.8.12
logstash: 1.4.2
elasticsearch: 1.5.0
Our current index settings look like this:

     {
      "userlog" : {
        "settings" : {
          "index" : {
            "index" : {
              "store" : {
                "type" : "mmapfs"
              },
              "translog" : {
                "flush_threshold_ops" : "50000"
              }
            },
            "number_of_replicas" : "1",
            "translog" : {
              "flush_threshold_size" : "1G",
              "durability" : "async"
            },
            "merge" : {
              "scheduler" : {
                "max_thread_count" : "1"
              }
            },
            "indexing" : {
              "slowlog" : {
                "threshold" : {
                  "index" : {
                    "trace" : "2s",
                    "info" : "5s"
                  }
                }
              }
            },
            "memory" : {
              "index_buffer_size" : "3G"
            },
            "refresh_interval" : "30s",
            "version" : {
              "created" : "1050099"
            },
            "creation_date" : "1447730702943",
            "search" : {
              "slowlog" : {
                "threshold" : {
                  "fetch" : {
                    "debug" : "500ms"
                  },
                  "query" : {
                    "warn" : "10s",
                    "trace" : "1s"
                  }
                }
              }
            },
            "indices" : {
              "memory" : {
                "index_buffer_size" : "30%"
              }
            },
            "uuid" : "E1ttme3fSxKVD5kRHEr_MA",
            "index_currency" : "32",
            "number_of_shards" : "5"
          }
        }
      }
    }
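Several of the settings above (replica count, refresh interval) can be changed at runtime through the index settings API. As a hedged aside, not the poster's confirmed fix: the standard guidance for a heavy indexing phase is to drop replicas and disable periodic refreshes while loading, then restore them afterwards. A minimal sketch of the request bodies, assuming the `userlog` index from the question:

```python
import json

# Sketch only: standard Elasticsearch bulk-loading guidance, applied
# via PUT /userlog/_settings. Values here mirror the settings shown
# in the question (1 replica, 30s refresh), not measured tuning.
bulk_load_settings = {
    "index": {
        "number_of_replicas": 0,    # skip replica indexing while loading
        "refresh_interval": "-1",   # disable periodic refreshes entirely
    }
}

restore_settings = {
    "index": {
        "number_of_replicas": 1,    # back to the original replica count
        "refresh_interval": "30s",  # the value from the settings above
    }
}

# The JSON body that would be sent with the PUT request.
body = json.dumps(bulk_load_settings)
```

Since every document indexed with one replica is effectively written twice, dropping replicas during the load alone can roughly halve the write amplification, which matters even more on NFS-backed storage.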
Here is my Logstash configuration:

    input {
            redis {
                    host => "eanprduserreporedis01.eao.abn-iad.ea.com"
                    port => "6379"
                    type => "redis-input"
                    data_type => "list"
                    key => "userLog"
                    threads => 15
            }
        # Second redis block begin
            redis {
                    host => "eanprduserreporedis02.eao.abn-iad.ea.com"
                    port => "6379"
                    type => "redis-input"
                    data_type => "list"
                    key => "userLog"
                    threads => 15
            }
            # Second redis block end
    }

    output {
            elasticsearch {
                    cluster => "customizedlog_prod"
                    index => "userlog"
                    workers => 30
            }
           stdout{}
    }
One very strange thing is that even though the current indexing rate is only ~20/s, the IO wait is extremely high, almost 70%, and most of it is read traffic. According to nfsiostat, the current read throughput is around 200 Mbps! So essentially, to index each log entry it reads roughly 10 Mbit of data, which is crazy given that the average length of our log entries is less than 10 KB. I therefore took a jstack dump of Elasticsearch; here is the result for one RUNNABLE thread:

    "elasticsearch[somestupidhostname][bulk][T#3]" daemon prio=10 tid=0x00007f230c109800 nid=0x79f6 runnable [0x00007f1ba85f0000]
       java.lang.Thread.State: RUNNABLE
            at sun.nio.ch.FileDispatcherImpl.pread0(Native Method)
            at sun.nio.ch.FileDispatcherImpl.pread(FileDispatcherImpl.java:52)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:220)
            at sun.nio.ch.IOUtil.read(IOUtil.java:197)
            at sun.nio.ch.FileChannelImpl.readInternal(FileChannelImpl.java:730)
            at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:715)
            at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:179)
            at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:342)
            at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:54)
            at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122)
            at org.apache.lucene.store.BufferedIndexInput.readVInt(BufferedIndexInput.java:221)
            at org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame.loadBlock(SegmentTermsEnumFrame.java:152)
            at org.apache.lucene.codecs.blocktree.SegmentTermsEnum.seekExact(SegmentTermsEnum.java:506)
            at org.elasticsearch.common.lucene.uid.PerThreadIDAndVersionLookup.lookup(PerThreadIDAndVersionLookup.java:104)
            at org.elasticsearch.common.lucene.uid.Versions.loadDocIdAndVersion(Versions.java:150)
            at org.elasticsearch.common.lucene.uid.Versions.loadVersion(Versions.java:161)
            at org.elasticsearch.index.engine.InternalEngine.loadCurrentVersionFromIndex(InternalEngine.java:1002)
            at org.elasticsearch.index.engine.InternalEngine.innerCreate(InternalEngine.java:277)
            - locked <0x00000005fc76b938> (a java.lang.Object)
            at org.elasticsearch.index.engine.InternalEngine.create(InternalEngine.java:256)
            at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:455)
            at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:437)
            at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:149)
            at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:515)
            at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:422)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at java.lang.Thread.run(Thread.java:745)

Can anyone tell me what Elasticsearch is doing, and why indexing is so slow? Is there any way to improve it?
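The stack trace above spends its time in `loadCurrentVersionFromIndex` via `PerThreadIDAndVersionLookup`: for every create, Elasticsearch looks up whether a document with that `_id` already exists, and that uid/term lookup is what drives the `pread` calls against NFS. One common mitigation (an assumption here, not confirmed as the poster's situation) is to let Elasticsearch auto-generate document ids, which allows the engine to skip that existence check. A sketch of a `_bulk` body without client-supplied ids; the field names are illustrative, not the poster's schema:

```python
import json

def build_bulk_body(index, doc_type, docs):
    """Build an NDJSON _bulk body whose action lines carry no _id,
    so Elasticsearch assigns auto-generated ids."""
    lines = []
    for doc in docs:
        # No "_id" in the action metadata -> no per-document version lookup.
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

body = build_bulk_body("userlog", "logs", [
    {"message": "user logged in", "level": "INFO"},
    {"message": "user logged out", "level": "INFO"},
])
```

If the ids come from an upstream system and must be preserved, this option is off the table, but then the uid lookups shown in the trace are expected and fast local storage (rather than NFS) becomes far more important.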

This may not be the entire cause of your poor performance, but check out the batching options on the redis input. I'd bet it would work better if you pulled more than one document at a time from redis.
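To illustrate the suggestion of pulling multiple documents at a time: in the Logstash redis input this corresponds to the `batch_count` setting (check that your plugin version supports it; the config above leaves it at its default). A conceptual sketch of why it helps, using a plain deque to stand in for the Redis list, since the point is the access pattern rather than the client library:

```python
from collections import deque

def drain_batch(queue, batch_count):
    """Pop up to batch_count entries in one logical operation.

    With a batch size of 1, each log line costs a full network round
    trip to Redis; draining many entries per call amortizes that cost.
    """
    batch = []
    while queue and len(batch) < batch_count:
        batch.append(queue.popleft())
    return batch

# Ten queued log lines; a batch size of 4 fetches them in 3 calls
# instead of 10.
queue = deque(f"log-{i}" for i in range(10))
first = drain_batch(queue, 4)
```

The same reasoning applies on the output side: the elasticsearch output should be sending `_bulk` requests of hundreds of documents, not individual index calls.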


You have several components, any of which could be slow. Unless you set up an isolated test that bulk-inserts data directly into Elasticsearch, the culprit could just as well be Logstash, Redis, etc. Also, post your Logstash config; I have a theory. @AlainCollins I have added the config to the post. By the way, I don't think Redis or Logstash is the problem, because in iotop I can see that most of the IO is done by Elasticsearch.