Docker question: Upgrading Calico nodes in a kubeadm cluster
docker, kubernetes, kubeadm, project-calico

The directions are very clear (I will cordon each node and perform the steps for calico/cni and calico/node), but I'm not sure what this step means:

Update the image in your process management to reference the new version

Note: upgrade the calico/node container

Otherwise, I don't see any other problems with the directions. Our environment is a k8s kubeadm cluster.

I guess the real question is: where do I tell k8s to use the newer version of the calico/node image?

EDIT

To answer the above question: I just ran kubectl delete -f against calico.yaml and rbac-kdd.yaml, then ran kubectl create -f against the latest versions of those files.
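The delete-and-recreate flow from the edit can be sketched as a small helper. It only prints the kubectl commands rather than running them, so you can review (or pipe to sh) the sequence before touching the cluster; the manifest filenames match the ones mentioned above, and the ordering (RBAC before calico.yaml on create) is an assumption:

```shell
# upgrade_calico: emit the delete-then-recreate commands described in the edit.
# Printing instead of executing lets you audit before running for real.
upgrade_calico() {
  local manifest
  for manifest in calico.yaml rbac-kdd.yaml; do
    printf 'kubectl delete -f %s\n' "$manifest"   # remove old Calico resources
  done
  # ...refresh calico.yaml / rbac-kdd.yaml to the new release here, then:
  for manifest in rbac-kdd.yaml calico.yaml; do
    printf 'kubectl create -f %s\n' "$manifest"   # recreate from new manifests
  done
}
```

Running `upgrade_calico | sh` would execute the sequence as-is; note that deleting calico.yaml briefly removes pod networking on the cluster, which is why the official directions prefer a node-by-node cordon-and-upgrade.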
Now everything shows as version 3.3.2, but I'm now getting this error on all the calico-node pods:

Warning  Unhealthy  84s (x181 over 31m)  kubelet, thalia4  Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established

--network-plugin=cni specifies that we use the cni network plugin, with the actual CNI plugin binaries located in --cni-bin-dir (default /opt/cni/bin) and the CNI plugin configuration located in --cni-conf-dir (default /etc/cni/net.d).

For example:

--network-plugin=cni
--cni-bin-dir=/opt/cni/bin    # there may be multiple CNI binaries here, e.g. calico, weave, ...; you can run "/opt/cni/bin/calico -v" to show the Calico version
--cni-conf-dir=/etc/cni/net.d # defines the detailed CNI plugin configuration, like the following:
{
    "name": "calico-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "calico",
            "mtu": 8950,
            "policy": {
                "type": "k8s"
            },
            "ipam": {
                "type": "calico-ipam",
                "assign_ipv6": "false",
                "assign_ipv4": "true"
            },
            "etcd_endpoints": "https://172.16.1.5:2379,https://172.16.1.9:2379,https://172.16.1.15:2379",
            "etcd_key_file": "/etc/etcd/ssl/etcd-client-key.pem",
            "etcd_cert_file": "/etc/etcd/ssl/etcd-client.pem",
            "etcd_ca_cert_file": "/etc/etcd/ssl/ca.pem",
            "kubernetes": {
                "kubeconfig": "/etc/kubernetes/cluster-admin.kubeconfig"
            }
        }
    ]
}
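A quick way to sanity-check the two directories those kubelet flags point at is a small helper; the default paths below mirror the flag defaults mentioned above, and are only assumptions if your kubelet overrides them:

```shell
# check_cni_dirs: verify that the kubelet's CNI binary and config directories
# exist. Defaults match --cni-bin-dir and --cni-conf-dir; pass other paths if
# your kubelet is configured differently.
check_cni_dirs() {
  local bin_dir="${1:-/opt/cni/bin}" conf_dir="${2:-/etc/cni/net.d}"
  local d
  for d in "$bin_dir" "$conf_dir"; do
    if [ -d "$d" ]; then
      printf 'OK %s\n' "$d"       # directory present
    else
      printf 'MISSING %s\n' "$d"  # kubelet will fail to find plugins/config
    fi
  done
}
```

On a healthy node you would follow this up with `ls /opt/cni/bin` to confirm the calico binary is actually installed there.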
I resolved the issue. I had to add an explicit rule to iptables for the cali-failsafe-in chain on all nodes:

sudo iptables -A cali-failsafe-in -p tcp --match multiport --dports 179 -j ACCEPT

Now everything is up and working across all nodes:
IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+-------------+
| 134.xx.xx.163 | node-to-node mesh | up | 19:33:58 | Established |
| 134.xx.xx.164 | node-to-node mesh | up | 19:33:40 | Established |
| 134.xx.xx.165 | node-to-node mesh | up | 19:35:07 | Established |
| 134.xx.xx.168 | node-to-node mesh | up | 19:35:01 | Established |
+---------------+-------------------+-------+----------+-------------+
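Since that iptables command had to be repeated on every node, it helps to make it idempotent. This sketch only prints the commands (run the output with sudo on each node yourself); the `-C` check makes the append safe to rerun, and the chain name defaults to the cali-failsafe-in chain used above:

```shell
# bgp_firewall_rules: emit an idempotent rule to accept BGP (TCP 179) in the
# given chain. "iptables -C" tests for the rule; "-A" appends only if absent.
bgp_firewall_rules() {
  local chain="${1:-cali-failsafe-in}"
  printf 'iptables -C %s -p tcp -m multiport --dports 179 -j ACCEPT || ' "$chain"
  printf 'iptables -A %s -p tcp -m multiport --dports 179 -j ACCEPT\n' "$chain"
}
```

Note that rules added this way do not survive a reboot unless you persist them (e.g. with iptables-save or your distribution's firewall service).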
Thanks! I found the easiest thing was just to delete all the resources with kubectl and recreate them from the latest yaml files. However, I'm now getting an error on one of the nodes. See the edit above.
Can you paste the result of "kubectl describe node thalia4"?
[sudo] password for gms:
Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+---------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+---------+
| 134.xx.xx.162 | node-to-node mesh | start | 02:36:29 | Connect |
| 134.xx.xx.163 | node-to-node mesh | start | 02:36:29 | Connect |
| 134.xx.xx.164 | node-to-node mesh | start | 02:36:29 | Connect |
| 134.xx.xx.165 | node-to-node mesh | start | 02:36:29 | Connect |
+---------------+-------------------+-------+----------+---------+
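In the table above, a peer stuck in Connect means the TCP session to port 179 never completed (here, because iptables was dropping it), whereas a healthy session shows Established. A small filter over the node status output makes the broken peers easy to spot; it assumes the pipe-delimited table layout printed above:

```shell
# unestablished_peers: read 'calicoctl node status' style output on stdin and
# print "ADDRESS STATE" for each peer whose INFO column is not Established.
unestablished_peers() {
  awk -F'|' 'NF >= 6 && $2 !~ /PEER ADDRESS/ {
    gsub(/ /, "", $2); gsub(/ /, "", $6)   # strip padding around columns
    if ($6 != "Established") print $2, $6
  }'
}
```

Typical use would be `sudo calicoctl node status | unestablished_peers` on each node; empty output means all BGP sessions are up.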
Name: thalia4.domain
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
dns=dns4
kubernetes.io/hostname=thalia4
node_name=thalia4
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 134.xx.xx.168/26
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 03 Dec 2018 14:17:07 -0600
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk Unknown Fri, 21 Dec 2018 11:58:38 -0600 Sat, 12 Jan 2019 16:44:10 -0600 NodeStatusUnknown Kubelet stopped posting node status.
MemoryPressure False Mon, 21 Jan 2019 20:54:38 -0600 Sat, 12 Jan 2019 16:50:18 -0600 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 21 Jan 2019 20:54:38 -0600 Sat, 12 Jan 2019 16:50:18 -0600 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 21 Jan 2019 20:54:38 -0600 Sat, 12 Jan 2019 16:50:18 -0600 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 21 Jan 2019 20:54:38 -0600 Sun, 20 Jan 2019 20:27:10 -0600 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 134.xx.xx.168
Hostname: thalia4
Capacity:
cpu: 4
ephemeral-storage: 6878Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8009268Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 6490895145
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7906868Ki
pods: 110
System Info:
Machine ID: c011569a40b740a88a672a5cc526b3ba
System UUID: 42093037-F27E-CA90-01E1-3B253813B904
Boot ID: ffa5170e-da2b-4c09-bd8a-032ce9fca2ee
Kernel Version: 3.10.0-957.1.3.el7.x86_64
OS Image: Red Hat Enterprise Linux
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.13.1
Kube-Proxy Version: v1.13.1
PodCIDR: 192.168.4.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-8xqbs 250m (6%) 0 (0%) 0 (0%) 0 (0%) 24h
kube-system coredns-786f4c87c8-sbks2 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 47h
kube-system kube-proxy-zp4fk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 31d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 350m (8%) 0 (0%)
memory 70Mi (0%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>