Kubernetes cluster master node shows NotReady, coredns & weave pods Pending

Tags: kubernetes, kubernetes-pod, centos8, coredns, weave

I have installed a Kubernetes cluster on CentOS 8, but the node status shows NotReady, the coredns pods in the kube-system namespace are stuck in Pending, and the weave-net pod is in CrashLoopBackOff. I reinstalled, but the result is the same, and the taint command does not work. How can I fix this?

# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
K8s-Master   NotReady   master   42m   v1.18.8

# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-5vtjf              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   coredns-66bff467f8-pr6pt              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   etcd-K8s-Master                       1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-apiserver-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-controller-manager-K8s-Master    1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-proxy-pw2bk                      1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-scheduler-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   weave-net-k4mdf                       1/2      CrashLoopBackOff   12         41m   90.91.92.93   K8s-Master        <none>           <none>

# kubectl describe pod coredns-66bff467f8-pr6pt --namespace=kube-system
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  70s (x33 over 43m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

# kubectl describe node | grep -i taint
Taints:             node.kubernetes.io/not-ready:NoExecute

# kubectl taint nodes --all node.kubernetes.io/not-ready:NoExecute
error: node K8s-Master already has node.kubernetes.io/not-ready taint(s) with same effect(s) and --overwrite is false

# kubectl describe pod weave-net-k4mdf --namespace=kube-system
Events:
  Type     Reason     Age                   From                  Message
  ----     ------     ----                  ----                  -------
  Normal   Scheduled  43m                   default-scheduler    Successfully assigned kube-system/weave-net-k4mdf to K8s-Master
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulled     43m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Pulled     42m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Started    42m                   kubelet, K8s-Master  Started container weave-npc
  Normal   Created    42m                   kubelet, K8s-Master  Created container weave-npc
  Normal   Started    42m (x4 over 43m)     kubelet, K8s-Master  Started container weave
  Normal   Created    42m (x4 over 43m)     kubelet, K8s-Master  Created container weave
  Normal   Pulled     42m (x3 over 42m)     kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Warning  BackOff    3m1s (x191 over 42m)  kubelet, K8s-Master  Back-off restarting failed container
  Normal   Pulled     33s (x4 over 118s)    kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Normal   Created    33s (x4 over 118s)    kubelet, K8s-Master  Created container weave
  Normal   Started    33s (x4 over 118s)    kubelet, K8s-Master  Started container weave
  Warning  BackOff    5s (x10 over 117s)    kubelet, K8s-Master  Back-off restarting failed container

# kubectl logs weave-net-k4mdf -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
The above error is caused by a race condition.

Quoting from the reference, you can edit the weave DaemonSet YAML to add the below as a workaround:

              command:
                - /bin/sh
                - -c
                - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
So the weave DaemonSet looks like:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-net
  annotations:
    cloud.weave.works/launcher-info: |-
      {
        "original-request": {
          "url": "/k8s/v1.13/net.yaml",
          "date": "Fri Aug 14 2020 07:36:34 GMT+0000 (UTC)"
        },
        "email-address": "support@weave.works"
      }
  labels:
    name: weave-net
  namespace: kube-system
spec:
  minReadySeconds: 5
  selector:
    matchLabels:
      name: weave-net
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      containers:
        - name: weave
          command:
            - /bin/sh
            - -c
            - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
...
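For illustration, here is what that sed edit does, run against a hypothetical two-line stand-in for /home/weave/launch.sh (the real script is longer; only the matching line matters):

```shell
# Hypothetical stand-in for the relevant part of /home/weave/launch.sh
printf 'modprobe br_netfilter\nipset destroy weave-kube-test\n' > launch-sample.sh

# GNU sed's "i" command inserts "sleep 1" before every line that ends in
# "ipset destroy weave-kube-test", giving the kernel a moment to release
# the set and sidestepping the race condition.
sed '/ipset destroy weave-kube-test$/ i sleep 1' launch-sample.sh > launch-patched.sh

cat launch-patched.sh
# modprobe br_netfilter
# sleep 1
# ipset destroy weave-kube-test
```

The container's command then pipes the patched script into /bin/sh instead of running launch.sh directly.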

Comments:

Edit the question to add logs from the weave pod that is in CrashLoopBackOff. @ArghyaSadhu, added the weave logs.

In the file /var/lib/kubelet/kubeadm-flags.env I removed --network-plugin=cni and restarted kubelet.service. Now coredns shows Running and my master status shows Ready, but weave-net-k4mdf still shows CrashLoopBackOff.

What is the error in the weave pod? The same as before, what you already posted? I installed weave with this command: kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"; so by default, in which path will it store the yaml file? The executable /opt/cni/bin/weave does not exist in that path.

Don't apply the yaml directly... download it locally via curl and edit it, then apply it... or edit the DaemonSet with kubectl edit ds weave-net -n kube-system.
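The kubelet flag change mentioned in the comments can be sketched as follows; the file contents here are a hypothetical local copy for illustration, so check your node's actual /var/lib/kubelet/kubeadm-flags.env before editing:

```shell
# Hypothetical local copy of /var/lib/kubelet/kubeadm-flags.env (illustration only)
printf 'KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"\n' > kubeadm-flags.env

# Strip the --network-plugin=cni flag, the step that got coredns to Running
# and the master to Ready in the comment above
sed -i 's/--network-plugin=cni //' kubeadm-flags.env

cat kubeadm-flags.env
# KUBELET_KUBEADM_ARGS="--pod-infra-container-image=k8s.gcr.io/pause:3.2"

# On the real node you would then restart the kubelet:
# systemctl restart kubelet.service
```

Note that this disables the CNI network plugin for the kubelet, so it is a diagnostic step rather than a general fix.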