How do I list the taints on Kubernetes nodes?
There are good explanations of how to set or remove a taint on a node. I can use

kubectl describe node <node-name>

to get a detailed description of one node, including its taints. But what if I have forgotten the name of the taint I created, or which nodes I set it on? Can I list all of my nodes, with any taints that exist on them?

You can use kubectl's go-template output option to help you:

kubectl get nodes -o go-template='{{range .items}}{{if $taints := index .metadata.annotations "scheduler.alpha.kubernetes.io/taints"}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
On my cluster, this prints out my masters, which are tainted:
kubemaster-1.example.net
kubemaster-2.example.net
kubemaster-3.example.net
In Kubernetes 1.6.x, node taints have moved into the spec, so jaxxstorm's answer above will not work. Instead, you can use the following template:
{{printf "%-50s %-12s\n" "Node" "Taint"}}
{{- range .items}}
    {{- if $taint := (index .spec "taints") }}
        {{- .metadata.name }}{{ "\t" }}
        {{- range $taint }}
            {{- .key }}={{ .value }}:{{ .effect }}{{ "\t" }}
        {{- end }}
        {{- "\n" }}
    {{- end}}
{{- end}}
I've saved this to a file and then reference it like so:
kubectl get nodes -o go-template-file="./nodes-taints.tmpl"
You will get output like this:
Node Taint
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=etcd:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=jenkins:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=etcd:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=containerlinux-canary-channel-workers:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=jenkins:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=etcd:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=etcd:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=etcd:NoSchedule
ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal dedicate=jenkins:NoSchedule
I'm not a huge go-template user, so I'm sure this could be done better, but there it is.
Same as above, but all on one line:
kubectl get nodes -o go-template='{{printf "%-50s %-12s\n" "Node" "Taint"}}{{- range .items}}{{- if $taint := (index .spec "taints") }}{{- .metadata.name }}{{ "\t" }}{{- range $taint }}{{- .key }}={{ .value }}:{{ .effect }}{{ "\t" }}{{- end }}{{- "\n" }}{{- end}}{{- end}}'
To find the taints of a node, just run:
kubectl describe nodes your-node-name
Output:
Name: your-node-name
...
Taints: node-role.kubernetes.io/master:NoSchedule
CreationTimestamp: Wed, 19 Jul 2017 06:00:23 +0800
I wanted to get a list of nodes with a specific taint. I only found this question, so in case anybody else is looking for this, here is the solution:
kubectl get nodes -o go-template='{{range $item := .items}}{{with $nodename := $item.metadata.name}}{{range $taint := $item.spec.taints}}{{if and (eq $taint.key "node-role.kubernetes.io/master") (eq $taint.effect "NoSchedule")}}{{printf "%s\n" $nodename}}{{end}}{{end}}{{end}}{{end}}'
On my cluster, the output is:
preprod-master
preprod-proxy
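If the go-template syntax is hard to remember, the same filter can be sketched by post-processing the JSON with python3 instead. The JSON below is a hypothetical, trimmed-down stand-in for `kubectl get nodes -o json` output; the node names are invented.

```shell
# Hypothetical sample of `kubectl get nodes -o json` (heavily trimmed)
nodes_json='{"items":[
  {"metadata":{"name":"preprod-master"},
   "spec":{"taints":[{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}]}},
  {"metadata":{"name":"preprod-worker"},"spec":{}}]}'

# Print the names of nodes carrying a specific taint key/effect pair
matches=$(printf '%s' "$nodes_json" | python3 -c '
import json, sys
for item in json.load(sys.stdin)["items"]:
    for t in item["spec"].get("taints", []):
        if t["key"] == "node-role.kubernetes.io/master" and t["effect"] == "NoSchedule":
            print(item["metadata"]["name"])
')
echo "$matches"
```

Against a real cluster you would pipe `kubectl get nodes -o json` into the same script instead of the inline sample.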
PowerShell:
PS C:\> kubectl describe nodes | findstr "Taints Hostname"
or
Bash:
# kubectl describe nodes | egrep -hi "Taints|Hostname"
This command is easy to remember.
The output looks like this:
Taints: <none>
Hostname: aks-agentpool-30208295-0
Taints: <none>
Hostname: aks-agentpool-30208295-1
...
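To see exactly what the egrep filter keeps, here is a self-contained sketch that runs the same filter against a hypothetical snippet of `kubectl describe nodes` output (the node names and the taint are invented):

```shell
# Hypothetical `kubectl describe nodes` output for two nodes
sample='Name:               aks-agentpool-30208295-0
Roles:              agent
Taints:             <none>
Hostname:           aks-agentpool-30208295-0
Name:               aks-agentpool-30208295-1
Roles:              agent
Taints:             dedicated=etcd:NoSchedule
Hostname:           aks-agentpool-30208295-1'

# Keep only the Taints and Hostname lines, as the answer above does
filtered=$(printf '%s\n' "$sample" | grep -E 'Taints|Hostname')
printf '%s\n' "$filtered"
```

Note that the Hostname line comes after the Taints line in describe output, so each taint is printed above the node it belongs to.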
kubectl get nodes -o json | jq '.items[].spec'
This gives the complete spec, along with the node name. Or:
kubectl get nodes -o json | jq '.items[].spec.taints'
will yield the list of taints for each node.

The kubectl command provides a jsonpath parameter to search and format the output of a get. You can check the documentation for details:
kubectl get node -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
For more information, you can inspect the FindResults method, which reflects how the source data is traversed.
kubectl describe nodes [node_name] | grep 'Taints'
kubectl get nodes -o json | jq '.items[].spec.taints'
--> the last one requires jq to be installed (sudo apt install jq).

Without using any extra tools like jq, the simplest way is the custom-columns output option:
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
Output:

master-11   [map[effect:PreferNoSchedule key:node-role.kubernetes.io/master]]
master-12   [map[effect:PreferNoSchedule key:node-role.kubernetes.io/master]]
master-13   [map[effect:PreferNoSchedule key:node-role.kubernetes.io/master]]
With something like taints, which is a map or a list, you want the output to look clean so it can be parsed by other tools. You can use something similar to Edwin Tai's answer, but with extra smarts to extract just the keys:
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
Output:

master-11   node-role.kubernetes.io/master
master-12   node-role.kubernetes.io/master
master-13   node-role.kubernetes.io/master
worker-21   thegoldfish.org/storage thegoldfish.org/compute
worker-22   thegoldfish.org/storage thegoldfish.org/compute
worker-23   thegoldfish.org/compute
worker-24   thegoldfish.org/storage thegoldfish.org/compute
Extra examples:

With this method, you can easily create your own custom output.

A quick overview of the nodes:
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture,KERNEL:.status.nodeInfo.kernelVersion,KUBLET:.status.nodeInfo.kubeletVersion,CPU:.status.capacity.cpu,RAM:.status.capacity.memory
Output:

NAME        ARCH    KERNEL                       KUBLET    CPU   RAM
master-11   amd64   3.10.0-1062.9.1.el7.x86_64   v1.17.0   6     7910096Ki
master-12   amd64   3.10.0-1062.9.1.el7.x86_64   v1.17.0   6     7910096Ki
master-13   amd64   3.10.0-1062.9.1.el7.x86_64   v1.17.0   6     7910096Ki
An overview of the pods and where to find them, sorted by creation time:
kubectl get pods -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,NODE:.spec.nodeName,HOSTIP:.status.hostIP,PHASE:.status.phase,START_TIME:.metadata.creationTimestamp --sort-by=.metadata.creationTimestamp
Output:

NAMESPACE              NAME                                         NODE        HOSTIP            PHASE     START_TIME
kube-system            kube-proxy-rhmrz                             master-11   192.168.121.108   Running   2019-12-26T14:22:03Z
kube-system            coredns-6955765f44-777v9                     master-11   192.168.121.108   Running   2019-12-26T14:22:03Z
kube-system            coredns-6955765f44-w7rch                     master-11   192.168.121.108   Running   2019-12-26T14:22:03Z
kube-system            kube-scheduler-master-11                     master-11   192.168.121.108   Running   2019-12-26T14:22:05Z
kube-system            kube-controller-manager-master-11            master-11   192.168.121.108   Running   2019-12-26T14:22:05Z
kube-system            etcd-master-11                               master-11   192.168.121.108   Running   2019-12-26T14:22:05Z
kube-system            kube-apiserver-master-11                     master-11   192.168.121.108   Running   2019-12-26T14:22:05Z
kube-system            calico-node-sxls8                            master-11   192.168.121.108   Running   2019-12-26T14:55:41Z
kube-system            calico-kube-controllers-6d85fdfbd8-dnpn4     master-11   192.168.121.108   Running   2019-12-26T14:55:41Z
kubernetes-dashboard   dashboard-metrics-scraper-76585494d8-jx9cg   master-11   192.168.121.108   Running   2019-12-26T16:10:16Z
kubernetes-dashboard   kubernetes-dashboard-5996555fd8-5z5p2        master-11   192.168.121.108   Running   2019-12-26T16:10:16Z
Try this:
kubectl get nodes -o=custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
Let me explain what the first one means, and then the rest should fall into place:
NodeName:.metadata.name

The format is ColumnName:JSONPath, where the JSONPath points to the attribute you're looking for. ColumnName can be anything you want.
Something like NodeName:.metadata.name is functionally the same as running $ kubectl get nodes -o=jsonpath='{.items[*].metadata.name}', but with the custom-columns flag you get the values in row and column format.
Note: you don't need to start with .items[*]; the custom-columns flag already handles that for you.
Now with all the columns explained:
NodeName:.metadata.name - grabs the node name and puts it under the NodeName column
TaintKey:.spec.taints[*].key - returns every key of every taint by looking into the taints map, and puts them under the TaintKey custom column
TaintValue:.spec.taints[*].value - same as the key, but returning the values from the taints map
TaintEffect:.spec.taints[*].effect - same as the key, but returning the effects from the taints map
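To see what the [*] flattening does per column, here is a small offline sketch that applies the same extraction to one hypothetical node object (the node name and taints are invented, and the values are joined with spaces for readability; the delimiter kubectl itself uses may differ):

```shell
# A hypothetical node object with two taints
node_json='{"metadata":{"name":"worker-21"},
 "spec":{"taints":[
   {"key":"thegoldfish.org/storage","value":"ssd","effect":"NoSchedule"},
   {"key":"thegoldfish.org/compute","value":"gpu","effect":"NoExecute"}]}}'

row=$(printf '%s' "$node_json" | python3 -c '
import json, sys
n = json.load(sys.stdin)
taints = n["spec"].get("taints", [])
# TaintKey:.spec.taints[*].key collects every key into one cell
keys = " ".join(t["key"] for t in taints)
# TaintValue / TaintEffect do the same for the other taint fields
values = " ".join(t["value"] for t in taints)
effects = " ".join(t["effect"] for t in taints)
print("\t".join([n["metadata"]["name"], keys, values, effects]))
')
printf '%s\n' "$row"
```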
You can set this up under an alias:
alias get-nodetaints="kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect"
and you have a nice little command of your own to get all the taints. Your output should look like this:
The following commands work for me. If you have the node IP, you can try:
kubectl get node $node_ip -o json | jq '.spec.taints'
Output:

[
  {
    "effect": "NoSchedule",
    "key": "dedicated"
  }
]
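If jq is not available, a rough python3 stand-in for `jq '.spec.taints'` can be sketched like this. The inline JSON is a hypothetical, trimmed-down single-node document, and the taint in it is made up:

```shell
# Hypothetical `kubectl get node <name> -o json`, trimmed to the taints
node_json='{"spec":{"taints":[{"key":"dedicated","value":"etcd","effect":"NoSchedule"}]}}'

taints=$(printf '%s' "$node_json" | python3 -c '
import json, sys
# Render each taint as key=value:effect, one per line
for t in json.load(sys.stdin)["spec"].get("taints", []):
    print("%s=%s:%s" % (t["key"], t.get("value", ""), t["effect"]))
')
echo "$taints"
```

The key=value:effect shape matches how kubectl taint expects taints to be written, which makes the output easy to reuse.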
(or)
kubectl describe node $node_ip | grep -i taint
Output:

Taints: dedicated:NoSchedule
This no longer works in 1.6.x; I'm preparing a more complete answer for 1.6.x. This should be a separate question. The syntax here seems incorrect - there is no line containing "Taints Hostname", so grep will find nothing. I think it should be kubectl describe nodes | egrep "Taint|Hostname".
Thanks, you're right - I only tested the PowerShell code and assumed Bash's grep would behave similarly, but didn't test it; I'll edit my answer. The version I use, with the name first: $ kubectl describe nodes | grep -E 'Name:|Taints'
或者如果您想要ip-xx.internal nodes名称,您可以这样做:kubectl get nodes-o json | jq“。items[]{name:.metadata.name,taints:.spec taints}”
This answer assumes jq is installed, and it only outputs a bunch of taints without the names of the nodes they belong to, which isn't very useful. You haven't convinced me that this can…