Docker: high disk watermark [90%] exceeded, shards will be relocated away from this node


I am trying to run a multi-node Elasticsearch cluster in Docker containers on my local machine. Below is my docker-compose file:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - C:/Docker/Elasticsearch/data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - C:/Docker/Elasticsearch/data02:/usr/share/elasticsearch/data
    networks:
      - esnet
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - C:/Docker/Elasticsearch/data03:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.0
    container_name: kibana
    environment:
      - "ELASTICSEARCH_HOSTS=http://es01:9200"
    ports:
      - '5601:5601'
    networks:
      - esnet

networks:
  esnet:
    driver: bridge
When I run the docker-compose command, I get the following error:

es01      | {"type": "server", "timestamp": "2021-05-29T12:08:16,409Z", "level": "WARN", "component": "o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "high disk watermark [90%] exceeded on [6UfoOc1-QaCrrRlhLngSkA][es02][/usr/share/elasticsearch/data/nodes/0] free: 17.3gb[7.3%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high 
disk watermark when these relocations are complete", "cluster.uuid": "dpml3lE2Q0i7NRxFaQcGkQ", "node.id": "F_delTaoRfCASAlmu_Yd-Q"  }
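
The numbers in that log line already tell the story: 17.3 GB free at 7.3% implies a disk of roughly 236 GB, of which about 92.7% is used, above the default 90% high watermark. A quick sketch of that arithmetic (the total size is inferred from the message, not measured):

```shell
# Numbers taken from the error message: free 17.3 GB, which is 7.3% of the disk,
# so the total is roughly 17.3 / 0.073 ≈ 236 GB.
free_gb=17.3
total_gb=236
used_pct=$(awk -v f="$free_gb" -v t="$total_gb" 'BEGIN { printf "%.1f", (1 - f / t) * 100 }')
echo "used: ${used_pct}%"          # ~92.7%, above the 90% default high watermark
awk -v u="$used_pct" 'BEGIN { exit !(u > 90) }' && echo "high watermark exceeded"
```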
Below is a screenshot of my Docker resources (screenshot not included in this copy):



However, my Kibana does not work :(

In the comments section I already mentioned where my problem was; for future reference, I'll explain it here:

  • "high disk watermark [90%] exceeded [...] shards will be relocated away from this node": this error shows up whenever your system runs low on free disk space. To get past it, I had to delete some data to free up more disk space; with my current 40 GB free, it works fine.
  • After fixing the disk space, I ran into another problem: "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]".
    So here we need to increase the virtual memory setting for Docker. On Windows, we first need to get into the Docker terminal before running the command that increases it.
  • If your Docker uses the WSL subsystem, then:

  • Open PowerShell
  • Run:
    wsl -d docker-desktop
    which takes you into the Docker terminal
  • Run:
    sysctl -w vm.max_map_count=262144
  • Restart your Docker, and everything is ready
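
Note that `sysctl -w` is not persistent: the value resets when the WSL VM restarts. One way to make it stick (an assumption on my part, requires WSL 2) is a `.wslconfig` file on the Windows side, followed by `wsl --shutdown` and a Docker Desktop restart:

```ini
# %USERPROFILE%\.wslconfig  (applies to all WSL 2 VMs, including docker-desktop)
[wsl2]
kernelCommandLine = sysctl.vm.max_map_count=262144
```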


    Note: the question about increasing virtual memory has already been answered here:

    In your docker-compose you mount the data locations to your local disk (
    C:/Docker/Elasticsearch/…
    ), so I suspect your disk is running out of space. Based on the error message (
    free: 17.3gb [7.3%]
    ), it looks like you are using a disk of about 236 GB and only have 17.3 GB free. Can you check that? @zsltg, yes, you are right; after I cleaned up some disk space I ran into another problem related to virtual memory, which I solved with the answer above. Querying the cluster root endpoint (http://localhost:9200) now returns:
    {
      "name" : "es01",
      "cluster_name" : "es-docker-cluster",
      "cluster_uuid" : "dpml3lE2Q0i7NRxFaQcGkQ",
      "version" : {
        "number" : "7.13.0",
        "build_flavor" : "default",
        "build_type" : "docker",
        "build_hash" : "5ca8591c6fcdb1260ce95b08a8e023559635c6f3",
        "build_date" : "2021-05-19T22:22:26.081971330Z",
        "build_snapshot" : false,
        "lucene_version" : "8.8.2",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
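
Freeing disk space is the real fix, but while cleaning up you can temporarily raise the watermarks so the node keeps allocating shards. A hedged sketch (the percentages are illustrative, and the endpoint assumes es01 published on localhost:9200 as in the compose file above):

```shell
# Illustrative transient settings; revert them (set to null) once disk space is freed.
cat > /tmp/watermarks.json <<'EOF'
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "92%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
EOF
# Apply it (requires the cluster from the compose file to be running):
# curl -s -X PUT -H 'Content-Type: application/json' \
#      --data @/tmp/watermarks.json http://localhost:9200/_cluster/settings
```

Transient settings do not survive a full cluster restart; use `"persistent"` instead if you want them to.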