Kubernetes: Is there a way to drain CloudWatch Container Insights nodes with the autoscaler on EKS?

Tags: kubernetes, amazon-eks, kubernetes-pod, aws-cloudwatch-log-insights, eksctl

Cluster spec:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mixedCluster
  region: ap-southeast-1

nodeGroups:
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["ap-southeast-1a", "ap-southeast-1b"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh:
      publicKeyName: newkeypairbro

availabilityZones: ["ap-southeast-1a", "ap-southeast-1b"]
The problem:

When I scale up my application (business pods), CloudWatch pods are automatically created on every node. But when I scale my business pods back down to zero, the cluster autoscaler does not drain or terminate the CloudWatch pods on some of the nodes, which leaves a dummy node behind in the cluster.

Based on the picture above, the last node is the dummy node, which contains the CloudWatch pods:
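To see which pods are keeping that node alive, something like the following can be used. This is only a sketch: Container Insights normally installs its agents as DaemonSets in the amazon-cloudwatch namespace, and the node name below is just an example placeholder.

# CloudWatch / Container Insights pods and the nodes they run on
kubectl get pods -n amazon-cloudwatch -o wide

# Everything scheduled on the suspect node (replace the example node name)
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=ip-192-168-0-1.ap-southeast-1.compute.internal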

Expected result:

How can I gracefully (automatically) drain the Amazon CloudWatch node after the business pods are terminated, so that it does not leave a dummy node behind?
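One knob that often comes up in this situation is the safe-to-evict annotation on the pods that appear to block the scale-down. This is only a sketch, not necessarily the fix (as it turned out below, the real cause was instance size), and it assumes the Container Insights agent runs as a DaemonSet named cloudwatch-agent in the amazon-cloudwatch namespace:

# Mark the CloudWatch agent pods as safe to evict during scale-down
kubectl -n amazon-cloudwatch patch daemonset cloudwatch-agent --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict":"true"}}}}}'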


Here is my autoscaler configuration:

Name:                   cluster-autoscaler
Namespace:              kube-system
CreationTimestamp:      Sun, 11 Apr 2021 20:44:28 +0700
Labels:                 app=cluster-autoscaler
Annotations:            cluster-autoscaler.kubernetes.io/safe-to-evict: false
                        deployment.kubernetes.io/revision: 2
Selector:               app=cluster-autoscaler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=cluster-autoscaler
  Annotations:      prometheus.io/port: 8085
                    prometheus.io/scrape: true
  Service Account:  cluster-autoscaler
  Containers:
   cluster-autoscaler:
    Image:      k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.3
    Port:       <none>
    Host Port:  <none>
    Command:
      ./cluster-autoscaler
      --v=4
      --stderrthreshold=info
      --cloud-provider=aws
      --skip-nodes-with-local-storage=false
      --expander=least-waste
      --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/mixedCluster
    Limits:
      cpu:     100m
      memory:  300Mi
    Requests:
      cpu:        100m
      memory:     300Mi
    Environment:  <none>
    Mounts:
      /etc/ssl/certs/ca-certificates.crt from ssl-certs (ro)
  Volumes:
   ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs/ca-bundle.crt
    HostPathType:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   cluster-autoscaler-54ccd944f6 (1/1 replicas created)
Events:          <none>
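When a node is not being removed, the autoscaler logs (running at --v=4 as configured above) usually state which pod or condition makes the node unremovable; a quick way to check:

kubectl -n kube-system logs deployment/cluster-autoscaler --tail=300 | grep -iE 'scale.?down|unremovable'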

Never mind, I have solved my own problem. Because my cluster was using t2.small and t3.small instances, there were too few resources on each node to trigger the autoscaler to scale the dummy node down. I tried larger instance types, t3a.medium and t3.medium, and it works fine.
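For completeness, the scale-down behaviour can also be tuned on the cluster-autoscaler itself instead of (or in addition to) changing instance types. These are standard flags (the defaults are roughly 50% utilization and 10 minutes of being unneeded); a sketch of values that could be added to the Deployment's command shown above:

--scale-down-utilization-threshold=0.7   # consider emptier nodes for removal sooner
--scale-down-unneeded-time=5m            # how long a node must stay unneeded before removal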