
Elasticsearch logging in Docker - combining 2 events into 1 event


I'm running the Elastic Stack in Docker via their official images; however, when I try to use the Logstash aggregate plugin to combine events that share the same RequestId, I currently get the following error message:

Couldn't create pipeline {:reason=>"Couldn't find any filter plugin named 'aggregate'. Are you sure this is correct? Trying to load the aggregate filter plugin resulted in this error: Problems loading the requested plugin named aggregate of type filter. Error: NameError NameError"}
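This NameError is what Logstash reports when a pipeline references a filter plugin that isn't installed, and the aggregate filter is not bundled with the official Logstash Docker image in the 5.x line. One way to fix it (a sketch, assuming the 5.6.3 image used in the compose file below) is to build a small custom image that installs the plugin:

```dockerfile
# Hypothetical Dockerfile: extends the official Logstash image and installs
# the aggregate filter so pipelines referencing it can load.
FROM docker.elastic.co/logstash/logstash:5.6.3
RUN bin/logstash-plugin install logstash-filter-aggregate
```

The `logstash` service in docker-compose.yml would then reference this via a `build:` directive instead of the stock `image:`.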

That said, I'm also not 100% sure how to use the Logstash aggregate plugin to combine sample events like the following into a single event:

{
    "@t": "2017-10-16T20:21:35.0531946Z",
    "@m": "HTTP GET Request: \"https://myapi.com/?format=json&trackid=385728443\"",
    "@i": "29b30dc6",
    "Url": "https://myapi.com/?format=json&trackid=385728443",
    "SourceContext": "OpenAPIClient.Client",
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568",
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)",
    "RequestId": "0HL8KO13F8US6:0000000E",
    "RequestPath": "/api/track/radiourl/385728443"
}
{
    "@t": "2017-10-16T20:21:35.0882617Z",
    "@m": "HTTP GET Response: LocationAPIResponse { Location: \"http://sample.com/file/385728443/\", Error: null, Success: True }",
    "@i": "84f6b72b",
    "Response":
    {
        "Location": "http://sample.com/file/385728443/",
        "Error": null,
        "Success": true,
        "$type": "LocationAPIResponse"
    },
    "SourceContext": "OpenAPIClient.Client",
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568",
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)",
    "RequestId": "0HL8KO13F8US6:0000000E",
    "RequestPath": "/api/track/radiourl/385728443"
}
Can someone show me how to combine these events correctly? And if aggregate is the right plugin, why doesn't this seemingly built-in plugin appear to be part of the Logstash Docker image?

docker-compose.yml contents:

 version: '3'
 services:
   elasticsearch:
     image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
     container_name: elasticsearch
     environment:
       - discovery.type=single-node
       - xpack.security.enabled=false
     ports:
       - 9200:9200
     restart: always
   logstash:
     image: docker.elastic.co/logstash/logstash:5.6.3
     container_name: logstash
     environment:
       - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
     depends_on:
       - elasticsearch
     ports:
       - 10000:10000
     restart: always
     volumes:
       - ./logstash/pipeline/:/usr/share/logstash/pipeline/
   kibana:
     image: docker.elastic.co/kibana/kibana:5.6.3
     container_name: kibana
     environment:
       - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
     depends_on:
       - elasticsearch
     ports:
       - 5601:5601
     restart: always
logstash/pipeline/empstore.conf contents:

 input {
     http {
         id => "empstore_http"
         port => 10000
         codec => "json"
     }
 }

 output {
     elasticsearch {
         hosts => [ "elasticsearch:9200" ]
         id => "empstore_elasticsearch"
         index => "empstore-openapi"
     }
 }

 filter {
     mutate {
         rename => { "RequestId" => "RequestID" }
     }

    aggregate {
         task_id => "%{RequestID}"
         code => ""
     }
 }
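For reference, the empty `code => ""` in the filter above satisfies the plugin's required-setting check but performs no merging. A sketch of a filter that would actually fold the two sample events into one (field names taken from the sample events above; the 5-second timeout and the choice to drop the partial events with `event.cancel()` are assumptions):

```
 filter {
     mutate {
         rename => { "RequestId" => "RequestID" }
     }

     aggregate {
         task_id => "%{RequestID}"
         # Collect fields from whichever event arrives, in either order.
         code => "
             map['Url'] ||= event.get('Url')
             map['Response'] ||= event.get('Response')
             map['RequestPath'] ||= event.get('RequestPath')
             event.cancel()
         "
         # Emit one combined event per RequestID once no more events arrive.
         push_map_as_event_on_timeout => true
         timeout => 5
         timeout_task_id_field => "RequestID"
     }
 }
```

`push_map_as_event_on_timeout` avoids having to decide which event starts or ends the task, at the cost of delaying the combined event by the timeout.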

The `code` setting in the aggregate filter is required.

Code examples:

  • On request end:

    code => "map['sql_duration'] += event.get('duration')"

  • On request start:

    code => "map['sql_duration'] = 0"

  • On request:

    code => "map['sql_duration'] += event.get('duration')"


So there's no way to simply combine the existing fields of two events without the overhead of determining which one is the request and which one is the response? See here: code => "map['country_name'] = event.get('country_name'); map['towns'] ||= []; map['towns'] << event.get('town_name')" followed by event.cancel().

I eventually decided that modifying the application that emits the events so that it combines them itself would be far easier than figuring out how to properly install and use the aggregate plugin, especially since I couldn't guarantee that events wouldn't arrive interleaved once the application was deployed to multiple machines. As a workaround, that's not bad! Nice!
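The application-side merge the comment settles on can be sketched in a few lines (a hypothetical helper, assuming events are plain dicts keyed by RequestId as in the samples above):

```python
def merge_by_request_id(events):
    """Group events by RequestId and merge each group's fields into one dict.

    Later events fill in keys the earlier ones lack (e.g. the response's
    'Response' field joins the request's 'Url'), mirroring what the
    aggregate filter's map would accumulate.
    """
    merged = {}
    for event in events:
        key = event.get("RequestId")
        bucket = merged.setdefault(key, {})
        for field, value in event.items():
            bucket.setdefault(field, value)
    return list(merged.values())


request = {"RequestId": "0HL8KO13F8US6:0000000E",
           "Url": "https://myapi.com/?format=json&trackid=385728443"}
response = {"RequestId": "0HL8KO13F8US6:0000000E",
            "Response": {"Success": True}}

# One event carrying both the request's Url and the response's Response.
combined = merge_by_request_id([request, response])
```

Doing this in the emitting application sidesteps the interleaving concern only per process; with multiple machines, events for one RequestId still need to originate from the same emitter for this to hold.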