
Docker: why is my Elasticsearch losing data?


I suspect something is wrong with my setup.

My idea is to have a data directory on the host that is mapped to a directory inside the docker container (for the master node).

However, for the non-master nodes I use a named docker volume instead:

services:
 es-log-00:
  volumes:
   - /var/lib/elasticsearch:/usr/share/elasticsearch/data
 es-log-01:
  volumes:
   - data-log1:/usr/share/elasticsearch/data

volumes:
 data-log1:
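
As a quick sanity check on the named-volume side, you can ask Docker where it actually keeps that volume on the host and what the container really mounts. (The exact volume name depends on your compose project prefix, so the <project> placeholder below is only an assumption.)

 docker volume ls                                   # look for <project>_data-log1
 docker volume inspect <project>_data-log1          # "Mountpoint" shows the host path holding the data
 docker inspect -f '{{ json .Mounts }}' es-log-01   # shows what the running container actually mounts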
Here is my full docker-compose file:

version: '3'

services:
 es-log-00:
  build:
    context: ../
    dockerfile: ./compose/elasticsearch/Dockerfile
    args:
      - VERSION=${VERSION}
      - ELASTICSEARCH_NETWORK_HOST=${ELASTICSEARCH_NETWORK_HOST}
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT}
      - MEM=${MEM}
      - ENV=${ENV}
  container_name: es-log-00
  network_mode: host
  environment:
      - node.name=node-master
      - discovery.seed_hosts=node1,node2
      - cluster.initial_master_nodes=node-master,node1,node2
      - bootstrap.memory_lock=true
      - cluster.name=littlehome-log
      - network.publish_host=192.168.1.105
  volumes:
   - /etc/localtime:/etc/localtime:ro
   - /var/lib/elasticsearch:/usr/share/elasticsearch/data
   - /var/lib/elasticsearch-backup:/var/lib/elasticsearch-backup
   - /var/nfs/elasticsearch:/var/nfs/elasticsearch
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536

 es-log-01:
  restart: always
  build:
    context: ../
    dockerfile: ./compose/elasticsearch/Dockerfile
    args:
      - VERSION=${VERSION}
      - ELASTICSEARCH_NETWORK_HOST=${ELASTICSEARCH_NETWORK_HOST}
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH}
      - MEM=${MEM1}
      - ENV=${ENV}
  container_name: es-log-01
  network_mode: host
  environment:
      - node.name=node1
      - discovery.seed_hosts=node-master,node2
      - cluster.initial_master_nodes=node-master,node1,node2
      - bootstrap.memory_lock=true
      - cluster.name=littlehome-log
      - network.publish_host=192.168.1.100
  volumes:
   - /etc/localtime:/etc/localtime:ro
   - data-log1:/usr/share/elasticsearch/data
   - /var/nfs/elasticsearch:/var/nfs/elasticsearch
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536

 es-log-02:
  restart: always
  build:
    context: ../
    dockerfile: ./compose/elasticsearch/Dockerfile
    args:
      - VERSION=${VERSION}
      - ELASTICSEARCH_NETWORK_HOST=${ELASTICSEARCH_NETWORK_HOST}
      - ELASTICSEARCH_PORT=${ELASTICSEARCH_PORT_SEARCH}
      - MEM=${MEM2}
      - ENV=${ENV}
  container_name: es-log-02
  network_mode: host
  environment:
      - node.name=node2
      - discovery.seed_hosts=node-master,node1
      - cluster.initial_master_nodes=node-master,node1,node2
      - bootstrap.memory_lock=true
      - cluster.name=littlehome-log
      - network.publish_host=192.168.1.104
  volumes:
   - /etc/localtime:/etc/localtime:ro
   - data-log2:/usr/share/elasticsearch/data
   - /var/nfs/elasticsearch:/var/nfs/elasticsearch
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536


volumes:
  data-log1:
  data-log2:
Should I stop using the named data volumes and use host directories instead, as I do for es-log-00?


Or is there anything else I should check?
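
One thing worth checking, assuming the data disappears when the stack is recreated: named volumes survive a normal docker-compose down/up cycle, but they are deleted if the stack is ever brought down with the -v/--volumes flag, and changing the compose project name (or directory) creates brand-new empty volumes under a different prefix. A rough sketch of the difference:

 docker-compose down        # removes containers, keeps data-log1 / data-log2
 docker-compose down -v     # also removes the named volumes, so their data is gone
 docker volume ls           # afterwards, verify the data-log* volumes still exist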

I don't think the ELK stack above is running in cluster mode. Can you verify that it is actually running as a cluster? Since you are using network_mode: host, service-to-service communication by container name does not work, because the containers are not assigned their own IP addresses. Try running curl -X GET "localhost:9200/_cat/nodes?v&pretty": if it is running as a cluster it will return 3 nodes, otherwise 1, which may create data inconsistency.

I see 3 data nodes, so it is working. The IPs are the internal IPs of the machines (I can ssh into them).
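
For reference, this is roughly what the _cat/nodes check looks like when all three nodes have joined the same cluster. The output below is only illustrative (values are made up, and the exact columns and role letters depend on the Elasticsearch version); the node names and publish IPs are taken from the compose file above.

 curl -X GET "localhost:9200/_cat/nodes?v&pretty"
 ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
 192.168.1.105           45          78   3    0.52    0.48     0.45 dilm      *      node-master
 192.168.1.100           38          71   2    0.31    0.29     0.27 dilm      -      node1
 192.168.1.104           41          69   2    0.27    0.25     0.24 dilm      -      node2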