How to remove a label from a Kubernetes object using only "kubectl apply -f file.yaml"?
I am playing with GitOps and ArgoCD on Red Hat OpenShift. My goal is to turn a worker node into an infra node. I want to achieve this using descriptive YAML files, instead of doing it manually on the command line (which would be easy with kubectl label node ...). To make the node an infra node, I want to add the label "infra" and remove the label "worker" from it. Before the change, the object looks like this (irrelevant labels omitted):

apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/worker: ""
  name: node6.example.com
spec: {}

After applying the YAML file, it should look like this:

apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
  name: node6.example.com
spec: {}
If I put the latter configuration in a file and run "kubectl apply -f", the node ends up with both the infra and the worker label. So adding a label or changing a label's value is easy, but is there a way to remove a label from an object's metadata by applying a YAML file?

You can use
kubectl label node node6.example.com node-role.kubernetes.io/infra-
and then run kubectl apply again with the new labels, and you will be up and running. I would say it is not possible with kubectl apply alone; at least I have tried and could not find any information about it.
As @Petr Kotas mentioned, you can always use

kubectl label node node6.example.com node-role.kubernetes.io/infra-

But I see you are looking for something else:

"I wish to achieve this using descriptive YAML files, instead of manually doing it on the command line (with kubectl label nodes it would be easy ...)"

So the answer may be to use an API client instead. I found this example made by @Prafull Ladha. As mentioned earlier, the kubectl example for removing a label is correct, but it does not cover removing a label through an API client. If you want to remove a label via the API, you need to provide a new body with labelname: None and then patch that body onto the node or pod. I use the Kubernetes Python client API as an example.
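The patch body described above can be sketched in plain Python. The helper below only builds the patch body (setting a label's value to None/null is what tells the API server to delete that key); the actual cluster call is shown only in a comment, since it assumes the kubernetes package is installed and a kubeconfig is available, and the node name is illustrative:

```python
# Build a patch body that adds the "infra" role label and removes the
# "worker" role label in a single request. A label whose value is None
# is serialized as null, which deletes the key on the server side.
def build_label_patch(add=(), remove=()):
    labels = {key: "" for key in add}
    labels.update({key: None for key in remove})
    return {"metadata": {"labels": labels}}

body = build_label_patch(
    add=["node-role.kubernetes.io/infra"],
    remove=["node-role.kubernetes.io/worker"],
)
print(body)

# Hypothetical usage against a real cluster (requires the `kubernetes`
# package and a configured kubeconfig):
#
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.CoreV1Api().patch_node("node6.example.com", body)
```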
I have used kubectl replace and kubectl apply quite successfully to change a node label in my Kubernetes cluster (created with kubeadm).
Required: if your node configuration was changed manually with an imperative command such as kubectl label, you first need to fix the last-applied-configuration annotation with the following command (replace node2 with your node name):

kubectl get node node2 -o yaml | kubectl apply -f -

Note: it works the same way for all kinds of Kubernetes objects (with slightly different results; always check the outcome).
Note 2: the --export argument of kubectl get is deprecated. The command works fine without it, but with it the last-applied-configuration annotation is much shorter and easier to read.
Without that fix, the next kubectl apply command will ignore all fields that are not present in the last-applied-configuration annotation. The following example illustrates this. After fixing the annotation (by running kubectl get node node2 -o yaml | kubectl apply -f -), kubectl apply replaces and removes labels just fine:
kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | sed 's@node-role.kubernetes.io/santa: ""@@'| kubectl diff -f -
diff -u -N /tmp/LIVE-107488917/v1.Node..node2 /tmp/MERGED-924858096/v1.Node..node2
--- /tmp/LIVE-107488917/v1.Node..node2 2020-04-08 18:01:55.776699954 +0000
+++ /tmp/MERGED-924858096/v1.Node..node2 2020-04-08 18:01:55.792699954 +0000
@@ -18,8 +18,7 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
- node-role.kubernetes.io/santa: "" # <-- removed as desired
- node-role.kubernetes.io/worker: "" # <-- removed as desired, literally replaced with the following label
+ node-role.kubernetes.io/infra: "" # <-- created as desired
name: node2
resourceVersion: "60978298"
selfLink: /api/v1/nodes/node2
exit status 1
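The apply behavior shown in these diffs can be modeled in a few lines of Python: a field is deleted only if it appears in the last-applied-configuration annotation but not in the newly applied file, while fields the annotation does not know about (e.g. added imperatively with kubectl label) are left alone. This is a simplified sketch of the three-way merge, not kubectl's actual implementation:

```python
# Simplified model of how `kubectl apply` decides which labels to keep.
def apply_labels(live, last_applied, new_config):
    result = dict(live)
    for key in last_applied:
        if key not in new_config:
            result.pop(key, None)   # owned by apply, removed from file -> delete
    result.update(new_config)       # add/overwrite everything in the new file
    return result

live = {"worker": "", "santa": ""}   # current labels on the node
last_applied = {"worker": ""}        # santa was added with `kubectl label`
new_config = {"infra": ""}           # the file we are applying

# santa survives (unknown to last-applied), worker is removed, infra is added
print(apply_labels(live, last_applied, new_config))
```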
The first time you use kubectl apply -f on a resource that was created with an imperative command such as kubectl create or kubectl expose, you may see the following warning:

Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply

In this case the last-applied-configuration annotation is created from the content of the file used in the kubectl apply -f filename.yaml command, and it may not contain all the parameters and labels that are present on the live object.

Try setting the worker label to "false":
node-role.kubernetes.io/worker: "false"
This worked for me on OpenShift 4.4.
Edit:
This does not work. What happens is:
- the applied YAML file contains node-role.kubernetes.io/worker: "false"
- an automatic process then runs and removes the node-role.kubernetes.io/worker label from the node (and since the label is not specified in the YAML, it gets applied again automatically)
Interestingly, the automatic process does not remove the label if it is empty instead of set to "false".

I do not think changing the labels on an existing node is a good idea; for infra nodes, create a new MachineSet instead. On an OpenShift cluster it should work the same way whether you use kubectl or oc.
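The MachineSet approach mentioned above can be sketched roughly as follows. This is an abbreviated, hypothetical example: the name, namespace, and replica count are placeholders, and a real MachineSet also needs a platform-specific providerSpec:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: infra-machineset            # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: infra-machineset
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: infra-machineset
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""  # nodes from this set get the infra role
      # providerSpec: ...  (platform-specific, omitted here)
```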
For illustration, here is the state of node2 before the annotation is fixed. The first line matched by grep is the last-applied-configuration annotation (which contains only the worker label); the last two lines are the live labels (worker plus a santa label added imperatively):

kubectl get node node2 -o yaml | grep node-role
{"apiVersion":"v1","kind":"Node","metadata":{"annotations":{"flannel.alpha.coreos.com/backend-data":"{\"VtepMAC\":\"46:c6:d1:f0:6c:0a\"}","flannel.alpha.coreos.com/backend-type":"vxlan","flannel.alpha.coreos.com/kube-subnet-manager":"true","flannel.alpha.coreos.com/public-ip":"10.156.0.11","kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"creationTimestamp":null,
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"node2",
"kubernetes.io/os":"linux",
"node-role.kubernetes.io/worker":""}, # <--- important line: only worker label is present
"name":"node2","selfLink":"/api/v1/nodes/node2"},"spec":{"podCIDR":"10.244.2.0/24"},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"architecture":"","bootID":"","containerRuntimeVersion":"","kernelVersion":"","kubeProxyVersion":"","kubeletVersion":"","machineID":"","operatingSystem":"","osImage":"","systemUUID":""}}}
node-role.kubernetes.io/santa: ""
node-role.kubernetes.io/worker: ""
# kubectl diff is used to compare the current online configuration with the configuration as it would be if applied
kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | sed 's@node-role.kubernetes.io/santa: ""@@'| kubectl diff -f -
diff -u -N /tmp/LIVE-380689040/v1.Node..node2 /tmp/MERGED-682760879/v1.Node..node2
--- /tmp/LIVE-380689040/v1.Node..node2 2020-04-08 17:20:18.108809972 +0000
+++ /tmp/MERGED-682760879/v1.Node..node2 2020-04-08 17:20:18.120809972 +0000
@@ -18,8 +18,8 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
+ node-role.kubernetes.io/infra: "" # <-- created as desired
node-role.kubernetes.io/santa: "" # <-- ignored, because the label isn't present in the last-applied-configuration annotation
- node-role.kubernetes.io/worker: "" # <-- removed as desired
name: node2
resourceVersion: "60973814"
selfLink: /api/v1/nodes/node2
exit status 1
# Check the original label (the last filter removes the last-applied-configuration annotation line)
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# Replace the label "infra" with "worker" using kubectl replace syntax
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/infra: ""@node-role.kubernetes.io/worker: ""@' | kubectl replace -f -
node/node2 replaced
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/worker: ""
# label replaced -------^^^^^^
# Replace the label "worker" back to "infra" using kubectl apply syntax
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/worker: ""@node-role.kubernetes.io/infra: ""@' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# label replaced -------^^^^^
# Remove the label from the node ( for demonstration purpose)
$ kubectl get node node2 -o yaml | sed 's@node-role.kubernetes.io/infra: ""@@' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
# empty output
# label "infra" has been removed