
geo_point in Elasticsearch

Tags: elasticsearch, logstash, elastic-stack, logstash-configuration


I'm trying to map a latitude and longitude to a geo_point in Elastic.

Here's a line from my log file:

13-01-2017 ORDER COMPLETE: £22.00 Glasgow, 55.856299, -4.258845
And here's my conf file:

input {
file {
  path => "/opt/logs/orders.log"
  start_position => "beginning"
 }
}

filter {
   grok {
       match => { "message" => "(?<date>[0-9-]+) (?<order_status>ORDER [a-zA-Z]+): (?<order_amount>£[0-9.]+) (?<order_location>[a-zA-Z ]+), (?<order_lat>[0-9.]+), (?<order_long>[-0-9.]+)"}
}

mutate {
       convert => { "order_amount" => "float" }
       convert => { "order_lat" => "float" }
       convert => { "order_long" => "float" }

       rename => {
                  "order_long" => "[location][lon]"
                  "order_lat" => "[location][lat]"
       }
 }
}

output {
      elasticsearch {
               hosts => "localhost"

               index => "sales"
               document_type => "order"

     }
    stdout {}
}
So, as you can see, the intention is for location to be treated as a geo_point. However, GET sales/_mapping returns the following:

"location": {
        "properties": {
          "lat": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "lon": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
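
For reference, that output comes from a mapping request against the index defined in the config above:

curl -XGET 'http://localhost:9200/sales/_mapping?pretty'
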
UPDATE: Each time I re-index I stop Logstash and then delete the .sincedb from /opt/logstash/data/plugins/inputs/file.... I also create a brand new log file, and I increment the index each time (I'm up to sales7).
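
In shell terms the cycle looks roughly like this (a sketch: the service name and the exact sincedb file name, which is elided above, are assumptions):

sudo systemctl stop logstash
rm /opt/logstash/data/plugins/inputs/file/.sincedb_*   # assumed file name pattern
# create a fresh orders.log and bump the index name in the conf (sales6 -> sales7, ...)
sudo systemctl start logstash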

Conf file:

input {
file {
  path => "/opt/logs/orders.log"
  start_position => "beginning"
 }
}

filter {
   grok {
       match => { "message" => "(?<date>[0-9-]+) (?<order_status>ORDER [a-zA-Z]+): (?<order_amount>£[0-9.]+) (?<order_location>[a-zA-Z ]+), (?<order_lat>[0-9.]+), (?<order_long>[-0-9.]+)"}
}

mutate {
       convert => { "order_amount" => "float" }
       convert => { "order_lat" => "float" }
       convert => { "order_long" => "float" }

       rename => {
                  "order_long" => "[location][lon]"
                  "order_lat" => "[location][lat]"
       }
 }
}

output {
      elasticsearch {
               hosts => "localhost"

               index => "sales"
               document_type => "order"

     }
    stdout {}
}

Interestingly, when the geo_point mapping doesn't work, i.e. lat and long are just floats, my data gets indexed (30 rows). But when location is correctly set up as a geo_point, none of my rows get indexed.
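
A quick way to confirm the difference is a count query against each index variant (sales5 and sales6 are the incremented indices referred to in the comments further down):

curl -XGET 'http://localhost:9200/sales5/_count?pretty'   # lat/long left as floats: 30 documents
curl -XGET 'http://localhost:9200/sales6/_count?pretty'   # location mapped as geo_point: 0 documents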

There are two ways to do this. The first is to create a template for your mapping, so that the correct mapping is created when the data is indexed. Elasticsearch doesn't know your data types; you have to tell it about them, as follows.

First, create a template.json file containing the mapping structure:

{
  "template": "sales*",
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "sales": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  },
  "aliases": {}
}
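
If you want to verify that the template is registered, or install it by hand instead of letting Logstash upload it, the template API can be used directly (myindex matches the template_name used in the config below):

curl -XPUT 'http://localhost:9200/_template/myindex' -H 'Content-Type: application/json' -d @template.json
curl -XGET 'http://localhost:9200/_template/myindex?pretty'
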
After that, change your Logstash configuration so that this template is applied to the index:

input {
file {
  path => "/opt/logs/orders.log"
  start_position => "beginning"
 }
}

filter {
   grok {
       match => { "message" => "(?<date>[0-9-]+) (?<order_status>ORDER [a-zA-Z]+): (?<order_amount>£[0-9.]+) (?<order_location>[a-zA-Z ]+), (?<order_lat>[0-9.]+), (?<order_long>[-0-9.]+)"}
}

mutate {
       convert => { "order_amount" => "float" }
       convert => { "order_lat" => "float" }
       convert => { "order_long" => "float" }

       rename => {
                  "order_long" => "[location][lon]"
                  "order_lat" => "[location][lat]"
       }
 }
}

output {
      elasticsearch {
                hosts => "localhost"

                index => "sales"
                document_type => "order"
                template_name => "myindex"
                template => "/etc/logstash/conf.d/template.json"
                template_overwrite => true


     }
    stdout {}
}
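
Once documents are indexed with location mapped as a geo_point, a geo query such as the following should match the sample order (a sketch: the coordinates are just the Glasgow values from the log line and the 10km radius is arbitrary):

curl -XGET 'http://localhost:9200/sales/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": {
    "bool": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location": { "lat": 55.856299, "lon": -4.258845 }
        }
      }
    }
  }
}'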

The second option is the Ingest Node feature. I will update this answer for that option, but for now you can take a look at it; in that case you parse the location data with an ingest node pipeline instead of a template.
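
The pipeline itself isn't shown here, but as a rough, hypothetical sketch (the pipeline id orders-geo and the field handling are assumptions, not the author's actual pipeline), it could combine the parsed fields into a geo_point-compatible value like this:

curl -XPUT 'http://localhost:9200/_ingest/pipeline/orders-geo' -H 'Content-Type: application/json' -d '
{
  "description": "combine order_lat and order_long into a single geo_point-compatible string",
  "processors": [
    { "set": { "field": "location", "value": "{{order_lat}},{{order_long}}" } }
  ]
}'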

Comments:

Thanks. This is strange, because I can now see that the structure is a geo_point, but it isn't indexing any data. GET sales5/_count, where long and lat were left as floats, shows 30 indexed documents. GET sales6/_count, where location is correctly picked up as a geo_point, shows 0 documents. I delete the sincedb each time and, as you can see, increment the index and adjust the JSON and conf files.

If you are using a dynamic: strict mapping, that could be the error. Otherwise, check the Logstash output on the console; it will report any indexing errors.

I've turned debug on and am now getting glob is []. I've recreated the log file, but it still isn't indexing correctly.

To get to the bottom of this, could you share a sample file? I'll try indexing it into Elasticsearch and put together a dockerised repo.

Ah, I've found the solution: you don't need to send lat and long separately, because a geo_point accepts them as a single comma-separated value.
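
For completeness, the change that last comment points at, i.e. sending location as a single comma-separated "lat,lon" string rather than as separate sub-fields, would look roughly like this in the filter block (a sketch under that assumption, not the poster's confirmed final config):

filter {
   grok {
       match => { "message" => "(?<date>[0-9-]+) (?<order_status>ORDER [a-zA-Z]+): (?<order_amount>£[0-9.]+) (?<order_location>[a-zA-Z ]+), (?<order_lat>[0-9.]+), (?<order_long>[-0-9.]+)"}
   }

   mutate {
       convert => { "order_amount" => "float" }
       # build the "lat,lon" string form that a geo_point field accepts
       add_field => { "location" => "%{order_lat},%{order_long}" }
   }

   mutate {
       remove_field => [ "order_lat", "order_long" ]
   }
}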