
Logstash with a JDBC pipeline deployed on a Docker stack keeps creating new containers


I have been teaching myself how to deploy the ELK stack on Docker on my local machine. The following problem has been occurring for a week, and I have not been able to find a solution online.

I run "docker stack deploy -c docker-compose.yml elk_stack" with the configuration below. The problem I am facing is that after the logstash container is created, the logs show that the pipeline configuration is picked up correctly and data flows through to the elasticsearch container. Then, once all the data has been moved, the logstash container destroys itself, and a new container is created that goes through the same steps as the previous one.

Why is this happening?

Below is my docker-compose.yml:

version: "3"
networks:
  elk_net:

services:
  db:
    image: mariadb:latest
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - 3306:3306
    volumes:
      - mysqldata:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - elk_net
    depends_on:
      - db
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    environment:
      discovery.type: single-node
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    networks:
      - elk_net
  logstash:
    image: logstash:custom
    stdin_open: true
    tty: true
    volumes: 
      - ./dependency:/usr/local/dependency/
      - ./logstash/pipeline/mysql:/usr/share/logstash/pipeline/
    networks:
      - elk_net
    depends_on:
      - db
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    ports:
      - 5601:5601
    networks:
      - elk_net
    depends_on:
      - elasticsearch

volumes:
  esdata01:
    driver: local
  mysqldata:
    driver: local
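Note that `depends_on` may only reference other services, and `docker stack deploy` ignores it entirely; what matters for the looping behaviour described above is Swarm's restart policy. By default, `docker stack deploy` uses `restart_policy.condition: any`, so a container that exits — even cleanly — is replaced. A minimal sketch of how this could be made explicit (only the logstash service is shown; the values are illustrative, not from the original file):

```yaml
services:
  logstash:
    image: logstash:custom
    deploy:
      restart_policy:
        # default is "any", which recreates even cleanly exited containers;
        # "on-failure" only restarts on a non-zero exit code
        condition: on-failure
        delay: 5s
        max_attempts: 3
```

This only treats the symptom, though; the more direct fix is to keep the Logstash process itself running, as discussed below.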
And here is my logstash pipeline configuration:

input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
        jdbc_user => "root"
        jdbc_password => "root"
        jdbc_driver_library => ""
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_paging_enabled => true
        tracking_column => "accounting_entry_id"
        tracking_column_type => "numeric"
        use_column_value => true
        statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
    }
}

output {
    stdout { codec => json_lines }
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "cdr_data"
    }
}
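A jdbc input with no `schedule` option runs its statement exactly once and then lets Logstash shut down. A hedged sketch of the same input kept alive, with an added schedule and a persisted tracking value (the `last_run_metadata_path` location is an assumption, not from the original config — it would need to point at a mounted volume to survive container replacement):

```
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://db:3306/sonar_data"
        jdbc_user => "root"
        jdbc_password => "root"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_paging_enabled => true
        tracking_column => "accounting_entry_id"
        tracking_column_type => "numeric"
        use_column_value => true
        # run the statement every minute instead of once, so the process never exits
        schedule => "* * * * *"
        # assumed path on the mounted pipeline volume, so :sql_last_value
        # persists across container restarts instead of resetting to 0
        last_run_metadata_path => "/usr/share/logstash/pipeline/.jdbc_last_run"
        statement => "SELECT * FROM call_detail_record WHERE accounting_entry_id > :sql_last_value ORDER BY accounting_entry_id ASC"
    }
}
```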
Sample docker logs:

ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$ sudo docker logs 2c89502d48b3 -f
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-09-17T08:06:56,317][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-09-17T08:06:56,339][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-09-17T08:06:56,968][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-09-17T08:06:57,002][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"7a2b2d2a-157e-42c3-bcde-a14dc773750f", :path=>"/usr/share/logstash/data/uuid"}
[2019-09-17T08:06:57,795][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2019-09-17T08:06:59,033][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:06:59,316][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:06:59,391][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2019-09-17T08:06:59,393][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:06:59,720][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-09-17T08:06:59,725][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-09-17T08:07:01,244][INFO ][org.reflections.Reflections] Reflections took 59 ms to scan 1 urls, producing 19 keys and 39 values 
[2019-09-17T08:07:01,818][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:01,842][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:01,860][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:01,868][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:01,930][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-09-17T08:07:02,138][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-09-17T08:07:02,328][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-09-17T08:07:02,332][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x2228b784 run>"}
[2019-09-17T08:07:02,439][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-09-17T08:07:02,947][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-17T08:07:03,178][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-17T08:07:04,327][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_151a6660-4b00-4b2c-8a78-3d93f5161cbe", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-09-17T08:07:04,499][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-09-17T08:07:04,529][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-09-17T08:07:04,550][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-17T08:07:04,560][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-17T08:07:04,596][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-09-17T08:07:04,637][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x736c74cd run>"}
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
[2019-09-17T08:07:04,892][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:04,920][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-09-17T08:07:05,660][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-17T08:07:06,850][INFO ][logstash.inputs.jdbc     ] (0.029802s) SELECT version()
[2019-09-17T08:07:07,038][INFO ][logstash.inputs.jdbc     ] (0.007399s) SELECT version()
[2019-09-17T08:07:07,393][INFO ][logstash.inputs.jdbc     ] (0.003612s) SELECT count(*) AS `count` FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 1
[2019-09-17T08:07:07,545][INFO ][logstash.inputs.jdbc     ] (0.041288s) SELECT * FROM (SELECT * FROM call_detail_record WHERE accounting_entry_id > 0 ORDER BY accounting_entry_id ASC) AS `t1` LIMIT 100000 OFFSET 0
************ A LOT OF RECORDS ARE PUSHED TO ELASTICSEARCH FROM MYSQL SUCCESSFULLY ************
....

[2019-09-17T08:07:13,148][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2019-09-17T08:07:13,633][INFO ][logstash.runner          ] Logstash shut down.
ravi@ravi-VirtualBox:~/Documents/git_personal/cdr-data-visualizer-elk$ 
The cause is visible at the end of the log: "Logstash shut down." A jdbc input without a schedule runs its statement once and then Logstash exits with a clean exit code. Because docker stack deploy applies a restart policy of "any" by default, Swarm treats the exited container as needing replacement and starts a new one, which repeats the same import. Adding a schedule to the jdbc input keeps the pipeline (and therefore the process) running:

        schedule => "*/10 * * * *"

With this cron expression the statement is re-run every 10 minutes instead of once, so Logstash never exits and the container is no longer recreated.