Kubernetes - why do I always get the error "5 pod has unbound immediate PersistentVolumeClaims"?


I am following the book *Kubernetes for Developers*, and it seems the book is now badly outdated. Recently I have been trying to get Prometheus running on Kubernetes following the book's instructions. It suggests installing Helm and using it to get Prometheus and Grafana up and running:

 helm install monitor stable/prometheus --namespace monitoring
The result is:

NAME                                               READY   STATUS             RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
monitor-kube-state-metrics-578cdbb5b7-pdjzw        0/1     CrashLoopBackOff   14         36m   192.168.23.1     kube-worker-vm3   <none>           <none>
monitor-prometheus-alertmanager-7b4c476678-gr4s6   0/2     Pending            0          35m   <none>           <none>            <none>           <none>
monitor-prometheus-node-exporter-5kz8x             1/1     Running            0          14h   192.168.1.13     rockpro64         <none>           <none>
monitor-prometheus-node-exporter-jjrjh             1/1     Running            1          14h   192.168.1.35     osboxes           <none>           <none>
monitor-prometheus-node-exporter-k62fn             1/1     Running            1          14h   192.168.1.37     kube-worker-vm3   <none>           <none>
monitor-prometheus-node-exporter-wcg2k             1/1     Running            1          14h   192.168.1.36     kube-worker-vm2   <none>           <none>
monitor-prometheus-pushgateway-6898f8475b-sk4dz    1/1     Running            0          36m   192.168.90.200   osboxes           <none>           <none>
monitor-prometheus-server-74d7dc5d4c-vlqmm         0/2     Pending            0          14h   <none>           <none>            <none>           <none>

Unless your cluster is configured with dynamic provisioning, you have to create the PVs manually every time. Even if you are not running in a cloud, you can set up a dynamic storage provisioner; there are many options to choose from. Ceph and MinIO are popular providers.
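As a sketch of the manual route: the pending pod's `storage-volume` references the PVC `monitor-prometheus-server` (visible in the `kubectl describe` output below). A hostPath PV like the following could satisfy it, assuming the claim requests `ReadWriteOnce` and 8Gi (verify the actual request with `kubectl get pvc -n monitoring` before copying the size):

```yaml
# Hypothetical manually provisioned PV for the Prometheus server claim.
# The size and access mode must match (or exceed) what the PVC asks for,
# and the hostPath directory must exist on the node the pod is scheduled to.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prometheus
```

Apply it with `kubectl apply -f pv.yaml`; when neither the PV nor the PVC specifies a `storageClassName`, the binder matches them on capacity and access mode.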

If you use a cloud provider, they offer dynamic volume provisioning. What environment are you running in, and is there a dynamic volume provisioner available for it?

I see. I am running the Kubernetes cluster on a mix of bare-metal nodes and 3 VM instances. Do you mean that if I used a cloud provider I would not have to create persistent volumes?

Which dynamic provisioner to use depends on your storage system. For example, if you use VMware, you can find a provisioner from them.
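For a bare-metal/VM mix like this, one commonly used lightweight option is Rancher's local-path-provisioner. A rough sketch of installing it and making it the default StorageClass (check the manifest URL against the project's current README before applying):

```shell
# Install the local-path provisioner into the cluster
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Mark it as the default StorageClass so PVCs created by Helm charts
# (which usually omit storageClassName) get volumes provisioned automatically
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

After that, deleting and re-creating the pending PVCs (or reinstalling the release) lets them bind against the new provisioner.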
# kubectl describe pod monitor-prometheus-server-74d7dc5d4c-vlqmm -n monitoring
Name:           monitor-prometheus-server-74d7dc5d4c-vlqmm
Namespace:      monitoring
Priority:       0
Node:           <none>
Labels:         app=prometheus
                chart=prometheus-13.8.0
                component=server
                heritage=Helm
                pod-template-hash=74d7dc5d4c
                release=monitor
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/monitor-prometheus-server-74d7dc5d4c
Containers:
  prometheus-server-configmap-reload:
    Image:      jimmidyson/configmap-reload:v0.4.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --volume-dir=/etc/config
      --webhook-url=http://127.0.0.1:9090/-/reload
    Environment:  <none>
    Mounts:
      /etc/config from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-server-token-n49ls (ro)
  prometheus-server:
    Image:      prom/prometheus:v2.20.1
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --storage.tsdb.retention.time=15d
      --config.file=/etc/config/prometheus.yml
      --storage.tsdb.path=/data
      --web.console.libraries=/etc/prometheus/console_libraries
      --web.console.templates=/etc/prometheus/consoles
      --web.enable-lifecycle
    Liveness:     http-get http://:9090/-/healthy delay=30s timeout=30s period=15s #success=1 #failure=3
    Readiness:    http-get http://:9090/-/ready delay=30s timeout=30s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-server-token-n49ls (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      monitor-prometheus-server
    Optional:  false
  storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  monitor-prometheus-server
    ReadOnly:   false
  monitor-prometheus-server-token-n49ls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  monitor-prometheus-server-token-n49ls
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  28m (x734 over 14h)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  3m5s (x23 over 24m)  default-scheduler  0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
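The event message means the scheduler cannot place the pod until its PVCs are bound. To see which claims are stuck, list the PVCs in the namespace; anything `Pending` with an empty `VOLUME` column is what the scheduler is waiting on:

```shell
# List all claims in the monitoring namespace and their bind status
kubectl get pvc -n monitoring

# Inspect a specific stuck claim for the requested size, access mode,
# and the reason it has no matching volume
kubectl describe pvc monitor-prometheus-server -n monitoring
```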