Kubernetes: node-exporter pods stuck in Pending when installed via Helm on EKS

Tags: kubernetes, kubernetes-helm, amazon-eks

For troubleshooting, I decided to deploy a very plain Prometheus node-exporter via

helm install exporter stable/prometheus

but I cannot get the pods to start. I've looked everywhere and am not sure where else to look. I can install plenty of other applications on this cluster, just not this one. I've attached some troubleshooting output for reference. I suspect this may be related to tolerations, but I'm still digging into it.

The EKS cluster runs on 3 t2.large nodes; each node can host at most 35 pods, and I'm running 43 pods in total. Any other troubleshooting ideas would be much appreciated.
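As a sanity check on capacity (a sketch, not from the original post), each node's allocatable pod count can be listed with a jsonpath query; with 3 nodes at 35 pods each there is room for 105 pods, so 43 running pods should not be the limiting factor:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.pods}{"\n"}{end}'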

Output of kubectl get pods:

✗ kubectl get pods
NAME                                                              READY   STATUS             RESTARTS   AGE
exporter-prometheus-node-exporter-bcwc4                           0/1     Pending            0          15m
exporter-prometheus-node-exporter-kr7z7                           0/1     Pending            0          15m
exporter-prometheus-node-exporter-lw87g                           0/1     Pending            0          15m
Output of kubectl describe pod:

Name:           exporter-prometheus-node-exporter-bcwc4
Namespace:      monitoring
Priority:       0
Node:           <none>
Labels:         app=prometheus
                chart=prometheus-11.1.2
                component=node-exporter
                controller-revision-hash=668b4894bb
                heritage=Helm
                pod-template-generation=1
                release=exporter
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/exporter-prometheus-node-exporter
Containers:
  prometheus-node-exporter:
    Image:      prom/node-exporter:v0.18.1
    Port:       9100/TCP
    Host Port:  9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
    Environment:  <none>
    Mounts:
      /host/proc from proc (ro)
      /host/sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from exporter-prometheus-node-exporter-token-rl4fm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:
  exporter-prometheus-node-exporter-token-rl4fm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  exporter-prometheus-node-exporter-token-rl4fm
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  2s (x24 over 29m)  default-scheduler  0/3 nodes are available: 2 node(s) didn't match node selector, 3 node(s) didn't have free ports for the requested pod ports.
The key line: 3 node(s) didn't have free ports for the requested pod ports.


As the error says, the requested host port is already in use on the nodes. Defining hostPort 9100 limits the number of places the pod can be scheduled, because each combination of host IP, host port, and protocol must be unique. Ref:
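To see which pods are already claiming host port 9100, a query along these lines can help (a sketch, not part of the original answer; the jsonpath walks every container's declared ports):

kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep 9100

Alternatively, ss -tlnp | grep 9100 on a node shows the process actually holding the port.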

Comments:

Is anything running on host port 9100?

Correct! Another pod running in a different namespace was listening on port 9100. After I removed those pods, the Pending pods went to Running. What confuses me is that I had Prometheus running in two different namespaces, listening on the same port. Why is node-exporter the only one that gives me trouble when it listens on the same port?

Is the thing listening on port 9100 a Service of type NodePort? Can you share the configuration of the offending pod, or has the issue been resolved? If it has, please consider adding the solution as an answer :)

Another "instance" of the node-exporter DaemonSet, running in a different namespace, was causing the issue. After uninstalling one of them, the other pods came online. I'm not sure how to mark that comment as the answer, but I'll reply and add an answer based on that feedback :).
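To hunt down duplicate node-exporter instances across namespaces, something like the following should work (a sketch assuming Helm 3; the component=node-exporter label matches the labels in the manifest below, and the release and namespace names are placeholders):

kubectl get daemonsets --all-namespaces -l component=node-exporter
helm list --all-namespaces
helm uninstall <release-name> -n <namespace>

The DaemonSet manifest of the node-exporter in question: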
apiVersion: extensions/v1beta1                                                                                                                      
kind: DaemonSet
metadata:
  creationTimestamp: "2020-05-12T06:15:30Z"
  generation: 1
  labels:
    app: prometheus
    chart: prometheus-11.1.2
    component: node-exporter
    heritage: Helm
    release: exporter
  name: exporter-prometheus-node-exporter
  namespace: monitoring
  resourceVersion: "8131959"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring/daemonsets/exporter-prometheus-node-exporter
  uid: 5ede0739-cd05-4e3b-ace1-87fafb33314a
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
      release: exporter
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
        chart: prometheus-11.1.2
        component: node-exporter
        heritage: Helm
        release: exporter
    spec:
      containers:
      - args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        image: prom/node-exporter:v0.18.1
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: metrics
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /host/proc
          name: proc
          readOnly: true
        - mountPath: /host/sys
          name: sys
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      hostPID: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: exporter-prometheus-node-exporter
      serviceAccountName: exporter-prometheus-node-exporter
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /sys
          type: ""
        name: sys
  templateGeneration: 1
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberMisscheduled: 0
  numberReady: 0
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
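Why does node-exporter in particular collide? The manifest above uses hostNetwork: true with a fixed hostPort of 9100, so every instance competes for the same port on every node, unlike ordinary ClusterIP services that each get their own virtual IP. If two Prometheus releases really must coexist, one option is to move one node-exporter onto a different port. This is a sketch: the nodeExporter.service.hostPort and nodeExporter.service.servicePort value paths are my assumption about the prometheus-11.1.2 chart, so verify them with helm show values first.

helm show values stable/prometheus | grep -A 6 'nodeExporter:'
helm install exporter stable/prometheus \
  --namespace monitoring \
  --set nodeExporter.service.hostPort=9101 \
  --set nodeExporter.service.servicePort=9101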