How to horizontally autoscale a Kubernetes Deployment

Tags: kubernetes, terraform, autoscaling

EDIT:

Solution: I forgot to add target_cpu_utilization_percentage to the autoscaler.tf file.
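
For reference, a minimal sketch of what the corrected autoscaler.tf could look like; the apps/v1 api_version is an assumption, the 80% CPU target matches the target shown in the kubectl get hpa output further down, and scale_target_ref is pointed at the rui-test Deployment from deployment.tf below:

resource "kubernetes_horizontal_pod_autoscaler" "test-rui" {
  metadata {
    name = "test-rui"
  }
  spec {
    max_replicas = 10
    min_replicas = 3

    # Without a CPU target the HPA has no metric to act on
    target_cpu_utilization_percentage = 80

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "rui-test" # must match the Deployment's metadata.name
    }
  }
}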


I want to have a Python (or other language) web service running on Kubernetes, but with autoscaling.

I created a Deployment and a Horizontal Pod Autoscaler, but it is not working.

I am using Terraform to configure Kubernetes.

I have the following files:

deployment.tf

resource "kubernetes_deployment" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    strategy = {
      type = "RollingUpdate"
      rolling_update = {
        max_unavailable = "26%" # This is not working
      }
    }
    selector = {
        match_labels = {
            app = "rui-test"
        }
    }
    template = {
      metadata = {
        labels = {
          app = "rui-test"
        }
      }
      spec = {
        container {
          name              = "python-test1"
          image             = "***************************"
        }
      }
    }
  }
}
resource "kubernetes_horizontal_pod_autoscaler" "test-rui" {    
  metadata {
    name = "test-rui"
  }
  spec {
    max_replicas = 10 # THIS IS NOT WORKING
    min_replicas = 3  # THIS IS NOT WORKING

    scale_target_ref {
      kind = "Deployment"
      name = "test-rui" # Name of deployment
    }
  }
}
resource "kubernetes_service" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    selector {
      app  = "rui-test"
    }
    type = "LoadBalancer"  # Use 'cluster_ip = "None"' or 'type = "LoadBalancer"'
    port {
      name = "http"
      port = 8080
    }
  }
}
autoscaler.tf

resource "kubernetes_deployment" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    strategy = {
      type = "RollingUpdate"
      rolling_update = {
        max_unavailable = "26%" # This is not working
      }
    }
    selector = {
        match_labels = {
            app = "rui-test"
        }
    }
    template = {
      metadata = {
        labels = {
          app = "rui-test"
        }
      }
      spec = {
        container {
          name              = "python-test1"
          image             = "***************************"
        }
      }
    }
  }
}
resource "kubernetes_horizontal_pod_autoscaler" "test-rui" {    
  metadata {
    name = "test-rui"
  }
  spec {
    max_replicas = 10 # THIS IS NOT WORKING
    min_replicas = 3  # THIS IS NOT WORKING

    scale_target_ref {
      kind = "Deployment"
      name = "test-rui" # Name of deployment
    }
  }
}
resource "kubernetes_service" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    selector {
      app  = "rui-test"
    }
    type = "LoadBalancer"  # Use 'cluster_ip = "None"' or 'type = "LoadBalancer"'
    port {
      name = "http"
      port = 8080
    }
  }
}
service.tf

resource "kubernetes_deployment" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    strategy = {
      type = "RollingUpdate"
      rolling_update = {
        max_unavailable = "26%" # This is not working
      }
    }
    selector = {
        match_labels = {
            app = "rui-test"
        }
    }
    template = {
      metadata = {
        labels = {
          app = "rui-test"
        }
      }
      spec = {
        container {
          name              = "python-test1"
          image             = "***************************"
        }
      }
    }
  }
}
resource "kubernetes_horizontal_pod_autoscaler" "test-rui" {    
  metadata {
    name = "test-rui"
  }
  spec {
    max_replicas = 10 # THIS IS NOT WORKING
    min_replicas = 3  # THIS IS NOT WORKING

    scale_target_ref {
      kind = "Deployment"
      name = "test-rui" # Name of deployment
    }
  }
}
resource "kubernetes_service" "rui-test" {
  metadata {
    name = "rui-test"
    labels {
      app  = "rui-test"
    }
  }
  spec {
    selector {
      app  = "rui-test"
    }
    type = "LoadBalancer"  # Use 'cluster_ip = "None"' or 'type = "LoadBalancer"'
    port {
      name = "http"
      port = 8080
    }
  }
}
When I run kubectl get hpa, I see:

NAME       REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
rui-test   Deployment/rui-test   <unknown>/80%   1         3         1          1h
Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "rui-test" already exists
This is what shows up in Kubernetes.


The Horizontal Pod Autoscaler depends on the resource limits configured for your Deployment.

From the documentation:

Please note that if some of the Pod's containers do not have the relevant resource request set, CPU utilization for the Pod will not be defined and the autoscaler will not take any action for that metric.

You are missing the metrics. Kubernetes needs to determine the current CPU/memory usage to be able to scale up and down automatically.

One way to check whether the metrics server is installed is to run:

$ kubectl top node
$ kubectl top pod

I ran those commands and they worked, so I guess Google Cloud Platform comes with a metrics server pre-installed; or does Kubernetes on Google Cloud Platform still not have it installed? I searched the documentation and I don't think I need to configure anything else; if I do, I don't know where I would need to change it...

No, these are not set by default. Kubernetes does not know how many resources your application uses, but for the Pod Autoscaler to work it needs to know when to scale, for example when 70% of a Pod's CPU limit is reached. To set these constraints, have a look at the following documentation: . Adding a resources section to your Deployment should be easy, as sketched below.
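
A rough sketch of such a resources section for the container in deployment.tf, using the 0.11-era block syntax from the question and illustrative request/limit values (newer provider versions expect maps instead, e.g. requests = { cpu = "250m" }):

container {
  name  = "python-test1"
  image = "***************************"

  # Requests give the HPA a baseline to compute CPU utilization against;
  # limits cap what the container may consume. Values are illustrative.
  resources {
    requests {
      cpu    = "250m"
      memory = "64Mi"
    }
    limits {
      cpu    = "500m"
      memory = "128Mi"
    }
  }
}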