Kubernetes Helm chart deployment: liveness and readiness probe failure
I have an OpenShift cluster. I have created a custom application and am trying to deploy it with a Helm chart. When I deploy it on OpenShift with "oc new-app", the deployment works fine, but when I deploy it with the Helm chart it does not. Below is the output of "oc get all":
As shown in the "oc get all" output, the "vjobs-npm" deployment is failing with "CrashLoopBackOff". The output of "oc describe pod" for that pod is also included below.
[root@worker2 ~]#
[root@worker2 ~]# oc get all
NAME READY STATUS RESTARTS AGE
pod/chart-acme-85648d4645-7msdl 1/1 Running 0 3d7h
pod/chart1-acme-f8b65b78d-k2fb6 1/1 Running 0 3d7h
pod/netshoot 1/1 Running 0 3d10h
pod/sample1-buildachart-5b5d9d8649-qqmsf 0/1 CrashLoopBackOff 672 2d9h
pod/sample2-686bb7f969-fx5bk 0/1 CrashLoopBackOff 674 2d9h
pod/vjobs-npm-96b65fcb-b2p27 0/1 CrashLoopBackOff 817 47h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chart-acme LoadBalancer 172.30.174.208 <pending> 80:30222/TCP 3d7h
service/chart1-acme LoadBalancer 172.30.153.36 <pending> 80:30383/TCP 3d7h
service/sample1-buildachart NodePort 172.30.29.124 <none> 80:32375/TCP 2d9h
service/sample2 NodePort 172.30.19.24 <none> 80:32647/TCP 2d9h
service/vjobs-npm NodePort 172.30.205.30 <none> 80:30476/TCP 47h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/chart-acme 1/1 1 1 3d7h
deployment.apps/chart1-acme 1/1 1 1 3d7h
deployment.apps/sample1-buildachart 0/1 1 0 2d9h
deployment.apps/sample2 0/1 1 0 2d9h
deployment.apps/vjobs-npm 0/1 1 0 47h
NAME DESIRED CURRENT READY AGE
replicaset.apps/chart-acme-85648d4645 1 1 1 3d7h
replicaset.apps/chart1-acme-f8b65b78d 1 1 1 3d7h
replicaset.apps/sample1-buildachart-5b5d9d8649 1 1 0 2d9h
replicaset.apps/sample2-686bb7f969 1 1 0 2d9h
replicaset.apps/vjobs-npm-96b65fcb 1 1 0 47h
[root@worker2 ~]#
[root@worker2 ~]# oc describe pod vjobs-npm-96b65fcb-b2p27
Name: vjobs-npm-96b65fcb-b2p27
Namespace: vjobs-testing
Priority: 0
Node: worker0/192.168.100.109
Start Time: Mon, 31 Aug 2020 09:30:28 -0400
Labels: app.kubernetes.io/instance=vjobs-npm
app.kubernetes.io/name=vjobs-npm
pod-template-hash=96b65fcb
Annotations: openshift.io/scc: restricted
Status: Running
IP: 10.131.0.107
IPs:
IP: 10.131.0.107
Controlled By: ReplicaSet/vjobs-npm-96b65fcb
Containers:
vjobs-npm:
Container ID: cri-o://c232849eb25bd96ae9343ac3ed1539d492985dd8cdf47a5a4df7d3cf776c4cf3
Image: quay.io/aditya7002/vjobs_local_build_new:latest
Image ID: quay.io/aditya7002/vjobs_local_build_new@sha256:87f18e3a24fc7043a43a143e96b0b069db418ace027d95a5427cf53de56feb4c
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 31 Aug 2020 09:31:23 -0400
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 31 Aug 2020 09:30:31 -0400
Finished: Mon, 31 Aug 2020 09:31:22 -0400
Ready: False
Restart Count: 1
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from vjobs-npm-token-vw6f7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
vjobs-npm-token-vw6f7:
Type: Secret (a volume populated by a Secret)
SecretName: vjobs-npm-token-vw6f7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned vjobs-testing/vjobs-npm-96b65fcb-b2p27 to worker0
Normal Killing 44s kubelet, worker0 Container vjobs-npm failed liveness probe, will be restarted
Normal Pulling 14s (x2 over 66s) kubelet, worker0 Pulling image "quay.io/aditya7002/vjobs_local_build_new:latest"
Normal Pulled 13s (x2 over 65s) kubelet, worker0 Successfully pulled image "quay.io/aditya7002/vjobs_local_build_new:latest"
Normal Created 13s (x2 over 65s) kubelet, worker0 Created container vjobs-npm
Normal Started 13s (x2 over 65s) kubelet, worker0 Started container vjobs-npm
Warning Unhealthy 4s (x4 over 64s) kubelet, worker0 Liveness probe failed: Get http://10.131.0.107:80/: dial tcp 10.131.0.107:80: connect: connection refused
Warning Unhealthy 1s (x7 over 61s) kubelet, worker0 Readiness probe failed: Get http://10.131.0.107:80/: dial tcp 10.131.0.107:80: connect: connection refused
[root@worker2 ~]#
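For reference, the probe settings reported by "oc describe pod" above (delay=0s timeout=1s period=10s #success=1 #failure=3, targeting the named port "http") correspond to a pod-spec fragment roughly like the following. The container name, image, and port are taken from the output; everything else is a sketch of what the Helm chart template presumably renders, not the actual chart:

```yaml
# Sketch of the probe configuration implied by the "oc describe pod" output.
# "delay=0s timeout=1s period=10s #success=1 #failure=3" maps to the fields below.
containers:
  - name: vjobs-npm
    image: quay.io/aditya7002/vjobs_local_build_new:latest
    ports:
      - name: http            # both probes target this named port
        containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: http
      initialDelaySeconds: 0  # delay=0s: probing starts immediately
      timeoutSeconds: 1
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /
        port: http
      initialDelaySeconds: 0
      timeoutSeconds: 1
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 3
```

With initialDelaySeconds set to 0, probing begins before a slow-starting application is listening on port 80, which is consistent with the "connection refused" errors and the liveness-triggered restarts in the events above.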