
Kubernetes Horizontal Pod Autoscaler not creating replicas according to the replica count


Here I am trying to deploy a dockerized web service through a Helm chart into a custom Kubernetes cluster (created with kubeadm). When it autoscales, it does not create replicas according to the replica count.

Here is my deployment file:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:
Here is values.yaml:

replicaCount: 3

image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 8007

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: 
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi

nodeSelector: {}

tolerations: []

affinity: {}
Below are my running pods, which include heapster and the metrics server along with my web service.

Here is the HPA file:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: entitydetection
  namespace: kube-system
spec:
  maxReplicas: 20
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1beta2
    kind: Deployment
    name: entitydetection
  targetCPUUtilizationPercentage: 50
So I gave a replica count of 3 in the deployment, and set minReplicas to 5, maxReplicas to 20, and targetCPUUtilizationPercentage to 50% in the HPA. When CPU utilization crosses 50%, it creates replicas seemingly at random rather than according to the replica count.

So when the CPU exceeded 50%, it created the 2 replicas shown above, aged 36 seconds. Ideally it should have created 3 replicas. What is the problem?


Here is a quote from the HPA design documentation:

The autoscaler is implemented as a control loop. It periodically queries pods described by Status.PodSelector of the Scale subresource, and collects their CPU utilization.


Then, it compares the arithmetic mean of the pods' CPU utilization with the target defined in Spec.CPUUtilization, and adjusts the replicas of Scale if needed to match the target (preserving condition: MinReplicas <= Replicas <= MaxReplicas).

The target number of pods is calculated from the following formula:

TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)

Comments:

Can we decrease the upscale and downscale times?

You can check all the available flags in the documentation. Here is an excerpt: "The period of the autoscaler is controlled by the --horizontal-pod-autoscaler-sync-period flag of controller manager. The default value is 30 seconds."

I found the flags --horizontal-pod-autoscaler-downscale-delay and --horizontal-pod-autoscaler-upscale-delay. I need to change these values, but when I tried to add these flags in kube-controller-manager.conf my cluster stopped working properly.

Create a separate question for that; without logs it is hard to understand your situation.
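The TargetNumOfPods formula can be checked numerically with the values from the question (minReplicas: 5, maxReplicas: 20, target 50%). This is a minimal sketch, not the real controller code, and the per-pod CPU utilizations are assumed for illustration; it shows why the Deployment's replicaCount of 3 plays no role once the HPA is active:

```python
import math

def target_num_of_pods(utilizations, target, min_replicas, max_replicas):
    """Sketch of the HPA sizing rule, not the actual controller code.

    utilizations: per-pod CPU utilization as a percentage of the request.
    """
    # TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)
    desired = math.ceil(sum(utilizations) / target)
    # The result is kept within [minReplicas, maxReplicas].
    return max(min_replicas, min(max_replicas, desired))

# Hypothetical utilizations for the 3 pods started by replicaCount: 3.
# ceil((80 + 60 + 70) / 50) = 5, which is already within [5, 20].
print(target_num_of_pods([80, 60, 70], target=50, min_replicas=5, max_replicas=20))  # 5
```

Note that the count is driven entirely by measured utilization and the min/max bounds, so the replicas the HPA creates will not track the Deployment's replicaCount.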