Kubernetes - How to assign pods to nodes with a specific label


Suppose I have the following nodes, labeled with either env=staging or env=production:

NAME           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME   ENV
server0201     Ready    worker   79d   v1.18.2   10.2.2.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0202     Ready    worker   79d   v1.18.2   10.2.2.23     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0203     Ready    worker   35d   v1.18.3   10.2.2.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0301     Ready    worker   35d   v1.18.3   10.2.3.21     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0302     Ready    worker   35d   v1.18.3   10.2.3.29     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
server0303     Ready    worker   35d   v1.18.0   10.2.3.30     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     staging
server0304     Ready    worker   65d   v1.18.2   10.2.6.22     <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.5     production
There are no taints on the worker nodes:

kubectl get nodes -o json | jq '.items[].spec.taints'
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node-role.kubernetes.io/master"
  }
]
null
null
null
null
null
null
null
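(Side note: the same jq query can include the node name, which makes it easier to tell which of these entries belong to the masters; a small variation of the command above:)

kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, taints: .spec.taints}'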
Showing all labels:

NAME           STATUS   ROLES    AGE   VERSION   LABELS
server0201     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0202,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0202     Ready    worker   80d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0203,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0203     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0210,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0301     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0301,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0302     Ready    worker   35d   v1.18.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0309,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0303     Ready    worker   35d   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=staging,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0310,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
server0304     Ready    worker   65d   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=production,kubernetes.io/arch=amd64,kubernetes.io/hostname=eye0602,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,role=worker
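(For reference, an env label like this is normally applied with the plain kubectl label command; the node names below are simply taken from the listing above:)

kubectl label nodes server0203 env=staging
kubectl label nodes server0201 env=production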

After playing around a bit, I realized that nodeSelector and podAffinity are not the problem. In fact, by restricting my namespace with the node-selector annotation, I can even achieve exactly what my question is asking for:

apiVersion: v1
kind: Namespace
metadata:
  name: gab
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=production
spec: {}
status: {}    
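(As far as I understand, this annotation is honored by the PodNodeSelector admission plugin, so it only takes effect if that plugin is enabled on the kube-apiserver, for example via a flag along the lines of:

--enable-admission-plugins=NodeRestriction,PodNodeSelector

Presumably it is enabled in my cluster, since the restriction does take effect.)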
As long as my deployment is in that namespace, the node selector works:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10 # tells the deployment to run 10 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80
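For completeness, the same pinning could also be done without the namespace annotation by putting a plain nodeSelector into the pod template itself. A minimal sketch, reusing the env=production label and the deployment above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  namespace: gab
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 10
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      nodeSelector:      # schedule these pods only onto nodes carrying this label
        env: production
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest
        ports:
        - containerPort: 80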
Now, the reason it worked out the way it did for me at first is that the second of my staging-labelled nodes has slightly higher utilization than the node my pods kept landing on:

  Resource           Requests     Limits
  --------           --------     ------
  cpu                3370m (14%)  8600m (35%)
  memory             5350Mi (4%)  8600Mi (6%)
  ephemeral-storage  0 (0%)       0 (0%)
The node my pods keep landing on is:

  Resource           Requests    Limits
  --------           --------    ------
  cpu                1170m (4%)  500100m (2083%)
  memory             164Mi (0%)  100Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
When I tested and switched to production, the pods ended up spread across only a few of the nodes, since there are more of them.


So my conclusion is that the scheduler balances pods based on node load (I may be wrong) rather than trying to spread them evenly.

Can you check whether there are taints on the nodes the pods are supposed to be placed on?? To force pods onto specific nodes you have to use taints and tolerations (see the sketch at the end of this post). What labels do the nodes have?

kubectl get nodes --show-labels
@ArghyaSadhu please refer to the listing above. My initial output already shows the env label: it is the output of kubectl get nodes -L env, where I only display the env label key and its values, which is the label I am interested in.

The nodes the pods are not being scheduled on, do they have enough resources?
  Resource           Requests    Limits
  --------           --------    ------
  cpu                1170m (4%)  500100m (2083%)
  memory             164Mi (0%)  100Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
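Regarding the taints-and-tolerations suggestion in the comments, a rough sketch of that approach (the dedicated=production taint key and value are just made-up examples, nothing like this exists in my cluster yet) would be to taint the production nodes:

kubectl taint nodes server0201 dedicated=production:NoSchedule
# repeat for the other production nodes

and then add a matching toleration to the pod template:

    spec:
      tolerations:               # allow these pods onto the tainted nodes
      - key: "dedicated"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
      nodeSelector:              # still needed to force the pods onto those nodes;
        env: production          # the toleration alone only permits it, it does not require it
      containers:
      - name: helloworld
        image: karthequian/helloworld:latest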