
Elasticsearch 7.2 cluster has unassigned shards


I want to build a three-node Elasticsearch cluster on version 7.2, but something unexpected happened.

I have three virtual machines, 192.168.7.2, 192.168.7.3, and 192.168.7.4, and the main configuration of each lives in config/elasticsearch.yml (a reference sketch follows the list):

  • 192.168.7.2:
  • 192.168.7.3:
  • 192.168.7.4:
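The per-node configuration files were not included above, so purely as a reference sketch (cluster name and exact values here are assumptions, not the poster's actual settings), a three-node 7.x cluster is typically wired together like this, with node.name and network.host adjusted on each host:

# config/elasticsearch.yml on 192.168.7.2 (node-2)
cluster.name: my-es-cluster
node.name: node-2
network.host: 192.168.7.2
http.port: 9200
# 7.x discovery: list all nodes and the initial master-eligible set
discovery.seed_hosts: ["192.168.7.2", "192.168.7.3", "192.168.7.4"]
cluster.initial_master_nodes: ["node-2", "node-3", "node-4"]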
After starting each node, I created an index named moive with 3 primary shards and 0 replicas, wrote a few documents into it, and the cluster looked healthy:

PUT moive
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 0
  }
}


PUT moive/_doc/3
{
  "title":"title 3"
}
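As a quick sanity check (not part of the original post), the shard layout and index health can be inspected with the cat and health APIs; the requests below simply assume the moive index created above:

GET _cat/shards/moive?v

GET _cluster/health/moive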

Then I set number_of_replicas for moive to 1:

PUT moive/_settings
{
  "number_of_replicas": 1
}

Everything went smoothly, but when I set the moive replica count to 2:

PUT moive/_settings
{
  "number_of_replicas": 2
}

the new replica shards could not be allocated to node 2:


I don't know which step went wrong. Please help me figure it out.

First, use the cluster allocation explain API to find out why the shard cannot be allocated:


GET _cluster/allocation/explain?pretty



{
  "index" : "moive",
  "shard" : 2,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2019-07-19T06:47:29.704Z",
    "details" : "node_left [tIm8GrisRya8jl_n9lc3MQ]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "kQ0Noq8LSpyEcVDF1POfJw",
      "node_name" : "node-3",
      "transport_address" : "192.168.7.3:9300",
      "node_attributes" : {
        "ml.machine_memory" : "5033172992",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "matching_sync_id" : true
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[moive][2], node[kQ0Noq8LSpyEcVDF1POfJw], [R], s[STARTED], a[id=Ul73SPyaTSyGah7Yl3k2zA]]"
        }
      ]
    },
    {
      "node_id" : "mNpqD9WPRrKsyntk2GKHMQ",
      "node_name" : "node-4",
      "transport_address" : "192.168.7.4:9300",
      "node_attributes" : {
        "ml.machine_memory" : "5033172992",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "matching_sync_id" : true
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[moive][2], node[mNpqD9WPRrKsyntk2GKHMQ], [P], s[STARTED], a[id=yQo1HUqoSdecD-SZyYMYfg]]"
        }
      ]
    },
    {
      "node_id" : "tIm8GrisRya8jl_n9lc3MQ",
      "node_name" : "node-2",
      "transport_address" : "192.168.7.2:9300",
      "node_attributes" : {
        "ml.machine_memory" : "5033172992",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "deciders" : [
        {
          "decider" : "disk_threshold",
          "decision" : "NO",
          "explanation" : "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [2.2790256709451573E-4%]"
        }
      ]
    }
  ]
}

In the output, node-3 and node-4 reject the replica because they already hold a copy of shard 2 (the same_shard decider), so the only remaining candidate is node-2, and it is rejected by the disk_threshold decider: its disk usage is above the 85% low watermark. Checking node 2 with df -h confirms the disk is almost full:

[vagrant@node2 ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.4G  8.0G  480M  95% /
devtmpfs                 2.4G     0  2.4G   0% /dev
tmpfs                    2.4G     0  2.4G   0% /dev/shm
tmpfs                    2.4G  8.4M  2.4G   1% /run
tmpfs                    2.4G     0  2.4G   0% /sys/fs/cgroup
/dev/sda1                497M  118M  379M  24% /boot
none                     234G  149G   86G  64% /vagrant
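Disk usage as Elasticsearch itself sees it can also be read from the cluster, and if space cannot be freed right away the low watermark can be raised temporarily. This is only a sketch of those two options, not something done in this answer, and the 90% value is an arbitrary example:

GET _cat/allocation?v

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%"
  }
}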
I then freed up disk space on node 2 and re-applied the replica setting, and everything went back to normal:

PUT moive/_settings
{
  "number_of_replicas": 2
}
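To confirm the replicas are assigned after the cleanup, the explain call above can be re-run, or the index health can be polled until it turns green; a minimal check (not shown in the original answer) would be:

GET _cluster/health/moive?wait_for_status=green&timeout=30s

GET _cat/shards/moive?v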
