nginx ingress controller CrashLoopBackOff - Kubernetes on Proxmox (KVM)


I am running a Kubernetes cluster inside 4 KVM hosts, managed by Proxmox. After installing the nginx ingress controller with

helm install nginx-ingress stable/nginx-ingress --set controller.publishService.enabled=true -n nginx-ingress
the controller keeps crashing (CrashLoopBackOff). The logs are not really helpful (or I don't know exactly where to look).

Thank you, Peter

Below are the cluster pods:

root@sedeka78:~# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                             READY   STATUS             RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-jv2mx                         1/1     Running            0          83m   10.244.0.9    sedeka78   <none>           <none>
kube-system            coredns-66bff467f8-vwrzb                         1/1     Running            0          83m   10.244.0.6    sedeka78   <none>           <none>
kube-system            etcd-sedeka78                                    1/1     Running            2          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-apiserver-sedeka78                          1/1     Running            2          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-controller-manager-sedeka78                 1/1     Running            4          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-flannel-ds-amd64-fxvfh                      1/1     Running            0          83m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-flannel-ds-amd64-h6btb                      1/1     Running            1          78m   10.10.10.79   sedeka79   <none>           <none>
kube-system            kube-flannel-ds-amd64-m6dw2                      1/1     Running            1          78m   10.10.10.80   sedeka80   <none>           <none>
kube-system            kube-flannel-ds-amd64-wgtqb                      1/1     Running            1          78m   10.10.10.81   sedeka81   <none>           <none>
kube-system            kube-proxy-5dvdg                                 1/1     Running            1          78m   10.10.10.80   sedeka80   <none>           <none>
kube-system            kube-proxy-89pf7                                 1/1     Running            0          83m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-proxy-hhgtf                                 1/1     Running            1          78m   10.10.10.79   sedeka79   <none>           <none>
kube-system            kube-proxy-kshnn                                 1/1     Running            1          78m   10.10.10.81   sedeka81   <none>           <none>
kube-system            kube-scheduler-sedeka78                          1/1     Running            5          84m   10.10.10.78   sedeka78   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-4trgg       1/1     Running            0          80m   10.244.0.8    sedeka78   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7bfbb48676-q6c2t            1/1     Running            0          80m   10.244.0.7    sedeka78   <none>           <none>
nginx-ingress          nginx-ingress-controller-57f4b84b5-ldkk5         0/1     CrashLoopBackOff   19         45m   10.244.1.2    sedeka81   <none>           <none>
nginx-ingress          nginx-ingress-default-backend-7c868597f4-8q9n7   1/1     Running            0          45m   10.244.4.2    sedeka80   <none>           <none>
root@sedeka78:~#

And here is the describe output for the pod:

root@sedeka78:~# kubectl describe pod nginx-ingress-controller-57f4b84b5-ldkk5 -n nginx-ingress
Name:         nginx-ingress-controller-57f4b84b5-ldkk5
Namespace:    nginx-ingress
Priority:     0
Node:         sedeka81/10.10.10.81
Start Time:   Sun, 05 Jul 2020 13:54:56 +0200
Labels:       app=nginx-ingress
              app.kubernetes.io/component=controller
              component=controller
              pod-template-hash=57f4b84b5
              release=nginx-ingress
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/nginx-ingress-controller-57f4b84b5
Containers:
  nginx-ingress-controller:
    Container ID:  docker://545ed277d1a039cd36b0d18a66d1f58c8b44f3fc5e4cacdcde84cc68e763b0e8
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=nginx-ingress/nginx-ingress-default-backend
      --publish-service=nginx-ingress/nginx-ingress-controller
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --configmap=nginx-ingress/nginx-ingress-controller
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Sun, 05 Jul 2020 14:33:33 +0200
      Finished:     Sun, 05 Jul 2020 14:34:03 +0200
    Ready:          False
    Restart Count:  17
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-57f4b84b5-ldkk5 (v1:metadata.name)
      POD_NAMESPACE:  nginx-ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-rmhf8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nginx-ingress-token-rmhf8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-token-rmhf8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  <unknown>            default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-controller-57f4b84b5-ldkk5 to sedeka81
  Normal   Pulling    41m                  kubelet, sedeka81  Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
  Normal   Pulled     41m                  kubelet, sedeka81  Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
  Normal   Created    40m (x3 over 41m)    kubelet, sedeka81  Created container nginx-ingress-controller
  Normal   Started    40m (x3 over 41m)    kubelet, sedeka81  Started container nginx-ingress-controller
  Normal   Killing    40m (x2 over 40m)    kubelet, sedeka81  Container nginx-ingress-controller failed liveness probe, will be restarted
  Normal   Pulled     40m (x2 over 40m)    kubelet, sedeka81  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0" already present on machine
  Warning  Unhealthy  40m (x6 over 41m)    kubelet, sedeka81  Readiness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused
  Warning  Unhealthy  21m (x33 over 41m)   kubelet, sedeka81  Liveness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused
  Warning  BackOff    97s (x148 over 38m)  kubelet, sedeka81  Back-off restarting failed container

I cannot pinpoint the exact problem, but the nginx-ingress-controller is in the CrashLoopBackOff state because it cannot reach the Kubernetes API server at https://10.96.0.1:443. There may be a network or connectivity problem between the nginx ingress controller pod and the Kubernetes API server.
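As a quick sanity check (10.96.0.1 is the usual ClusterIP when the default 10.96.0.0/12 service CIDR is used, but it is worth confirming on your cluster):

kubectl get svc kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   ...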


Try curl https://10.96.0.1:443 from another pod.
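One easy way to run that check is a throwaway pod. A minimal sketch (the pod name api-check and the curlimages/curl image are just examples; the --overrides part pins the pod to sedeka81, the node where the controller keeps crashing):

kubectl run api-check --rm -it --restart=Never \
  --image=curlimages/curl \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"sedeka81"}}' -- \
  curl -k -m 5 https://10.96.0.1:443/version
# any HTTP response (even 401/403) means the API server is reachable from that node;
# "connection refused" or a timeout points at a CNI/routing problem on sedeka81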

Regarding the certificate issue:

curl https://10.96.0.1:443
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
You have two options to get this working:

  • Use cURL with the -k option, which lets cURL make an insecure connection, i.e. cURL does not verify the certificate.

  • Add the root CA (the CA that signed the server certificate) to /etc/ssl/certs/ca-certificates.crt

  • I think you should go with option 2, since it makes sure you are connecting to a server with a trusted certificate (see the sketch after this list).
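For option 2, note that every pod already has the cluster CA mounted through its service account, so from inside a pod you can talk to the API server without -k. A minimal sketch using the standard in-pod paths (the /version endpoint is just a convenient target):

# run inside any pod with a mounted service account
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" https://10.96.0.1:443/version

# to make the CA trusted system-wide on a Debian/Ubuntu based image (what option 2 describes):
# cp "$CA" /usr/local/share/ca-certificates/kube-ca.crt && update-ca-certificates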

Regarding readiness and liveness:

The nginx-ingress-controller fails right away when CPU consumption on the node is at 100%, because it has no CPU request, so it takes too long to respond to http://:…/healthz (more than 1 second, if I remember correctly).

You should either give the nginx ingress controller a CPU request, or never let the pods on a node consume 100% of the CPU, which sounds impossible to guarantee.
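If you want to add that request without reinstalling, the stable/nginx-ingress chart exposes the controller resources as values, so something along these lines should work (the 100m/128Mi numbers are only a starting point, adjust to your nodes):

helm upgrade nginx-ingress stable/nginx-ingress -n nginx-ingress \
  --reuse-values \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=128Mi

If I remember the chart correctly, the probe settings are also exposed (controller.livenessProbe.* / controller.readinessProbe.*) in case the 1-second probe timeout turns out to be too tight.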

You could also check:
    
# Calico v3.2 hosted-install manifests (etcd, RBAC, calico):
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml

# switch iptables, ip6tables, arptables and ebtables to the nft backend:
sudo apt-get install -y iptables arptables ebtables

update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
update-alternatives --set arptables /usr/sbin/arptables-nft
update-alternatives --set ebtables /usr/sbin/ebtables-nft