Kubernetes Calico node CrashLoopBackOff
Although there are existing questions similar to mine, the fixes suggested there did not work for me. I am setting up a Kubernetes cluster with the v1.9.3 binaries, using flannel and Calico. After applying the Calico YAML file, it gets stuck creating the second pod. What am I doing wrong? The logs do not clearly point to the problem.
kubectl get pods --all-namespaces
root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system
Error from server (BadRequest): a container name must be specified for pod
calico-node-n87l7, choose one of: [calico-node install-cni]
root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system install-cni
Installing any TLS assets from /calico-secrets
cp: can't stat '/calico-secrets/*': No such file or directory
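The `cp: can't stat '/calico-secrets/*'` message is the shell glob failing because the mounted secret directory is empty: with no matching files, the literal pattern `/calico-secrets/*` is passed to `cp`, which then cannot stat it. A minimal local reproduction (using a throwaway temp directory as a stand-in for `/calico-secrets`):

```shell
# When a glob like dir/* matches nothing, the pattern is passed to cp
# literally and cp fails with "can't stat" and a nonzero exit code.
dir=$(mktemp -d)   # empty stand-in for /calico-secrets
cp "$dir"/* /tmp/ 2>/dev/null && echo "copied" || echo "cp failed: glob matched nothing"
rmdir "$dir"
```

Depending on the `install-cni.sh` version, this failed copy can itself abort the script with exit code 1 and cause the CrashLoopBackOff seen below, so it is worth checking whether the `calico-etcd-secrets` Secret actually contains the expected keys.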
kubectl describe pod calico-node-n87l7
returns:
Name: calico-node-n87l7
Namespace: kube-system
Node: kube-master01/10.100.102.62
Start Time: Thu, 22 Feb 2018 15:21:38 +0100
Labels: controller-revision-hash=653023576
k8s-app=calico-node
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
scheduler.alpha.kubernetes.io/tolerations=[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status: Running
IP: 10.100.102.62
Controlled By: DaemonSet/calico-node
Containers:
calico-node:
Container ID: docker://6024188a667d98a209078b6a252505fa4db42124800baaf3a61e082ae2476147
Image: quay.io/calico/node:v3.0.1
Image ID: docker-pullable://quay.io/calico/node@sha256:e32b65742e372e2a4a06df759ee2466f4de1042e01588bea4d4df3f6d26d0581
Port: <none>
State: Running
Started: Thu, 22 Feb 2018 15:21:40 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 250m
Liveness: http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6
Readiness: http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
CALICO_DISABLE_FILE_LOGGING: true
CALICO_K8S_NODE_REF: (v1:spec.nodeName)
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
CALICO_IPV4POOL_CIDR: 10.244.0.0/16
CALICO_IPV4POOL_IPIP: Always
FELIX_IPV6SUPPORT: false
FELIX_LOGSEVERITYSCREEN: info
FELIX_IPINIPMTU: 1440
ETCD_CA_CERT_FILE: <set to the key 'etcd_ca' of config map 'calico-config'> Optional: false
ETCD_KEY_FILE: <set to the key 'etcd_key' of config map 'calico-config'> Optional: false
ETCD_CERT_FILE: <set to the key 'etcd_cert' of config map 'calico-config'> Optional: false
IP: autodetect
IP_AUTODETECTION_METHOD: can-reach=10.100.102.0
FELIX_HEALTHENABLED: true
Mounts:
/calico-secrets from etcd-certs (rw)
/lib/modules from lib-modules (ro)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
install-cni:
Container ID: docker://d9fd7a0f3fa9364c9a104c8482e3d86fc877e3f06f47570d28cd1b296303a960
Image: quay.io/calico/cni:v2.0.0
Image ID: docker-pullable://quay.io/calico/cni@sha256:ddb91b6fb7d8136d75e828e672123fdcfcf941aad61f94a089d10eff8cd95cd0
Port: <none>
Command:
/install-cni.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 22 Feb 2018 15:53:16 +0100
Finished: Thu, 22 Feb 2018 15:53:16 +0100
Ready: False
Restart Count: 11
Environment:
CNI_CONF_NAME: 10-calico.conflist
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
Mounts:
/calico-secrets from etcd-certs (rw)
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
etcd-certs:
Type: Secret (a volume populated by a Secret)
SecretName: calico-etcd-secrets
Optional: false
calico-node-token-p7d9n:
Type: Secret (a volume populated by a Secret)
SecretName: calico-node-token-p7d9n
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "cni-net-dir"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "var-run-calico"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "cni-bin-dir"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "lib-modules"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "calico-node-token-p7d9n"
Normal SuccessfulMountVolume 34m kubelet, kube-master01 MountVolume.SetUp succeeded for volume "etcd-certs"
Normal Created 34m kubelet, kube-master01 Created container
Normal Pulled 34m kubelet, kube-master01 Container image "quay.io/calico/node:v3.0.1" already present on machine
Normal Started 34m kubelet, kube-master01 Started container
Normal Started 34m (x3 over 34m) kubelet, kube-master01 Started container
Normal Pulled 33m (x4 over 34m) kubelet, kube-master01 Container image "quay.io/calico/cni:v2.0.0" already present on machine
Normal Created 33m (x4 over 34m) kubelet, kube-master01 Created container
Warning BackOff 4m (x139 over 34m) kubelet, kube-master01 Back-off restarting failed container
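One thing worth double-checking in the pod spec above: `CALICO_IPV4POOL_CIDR` is set to `10.244.0.0/16`, and that value has to agree with the pod network CIDR the cluster was initialised with. A hedged sketch (the CIDR here is just the value visible above, not a verified cluster setting):

```
# The pod-network CIDR given at cluster init must match the Calico pool
# (10.244.0.0/16 is the value shown in CALICO_IPV4POOL_CIDR above).
kubeadm init --pod-network-cidr=10.244.0.0/16

# Later, the effective cluster CIDR can be inspected on the master:
kubectl cluster-info dump | grep -m1 cluster-cidr
```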
Normal Started 8m (x3 over 8m) kubelet, worker-node2 Started container
Normal Created 8m (x3 over 8m) kubelet, worker-node2 Created container
Normal Pulled 8m (x2 over 8m) kubelet, worker-node2 Container image "quay.io/calico/node:v3.0.3" already present on machine
Warning Unhealthy 8m (x2 over 8m) kubelet, worker-node2 Readiness probe failed: Get http://10.0.1.102:9099/readiness: dial tcp 10.0.1.102:9099: getsockopt: connection refused
Warning BackOff 4m (x21 over 8m) kubelet, worker-node2 Back-off restarting failed container
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fe15:67e prefixlen 64 scopeid 0x20<link>
ether 08:00:27:15:06:7e txqueuelen 1000 (Ethernet)
RX packets 1506 bytes 495894 (495.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1112 bytes 128692 (128.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
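After applying the two manifests, the install can be watched until the DaemonSet converges; a minimal check, assuming the standard `calico-node` DaemonSet name and `k8s-app=calico-node` label from the manifest:

```
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
```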
/var/log/containers/
/var/log/pods/<failed_pod_id>/