Kubernetes Prometheus server in Pending state after installing with Helm
I am new to k8s and am trying to set up Prometheus monitoring for Kubernetes. I used "helm install" to install Prometheus. Now two pods are still stuck in the Pending state:
- prometheus-server
- prometheus-alertmanager

I manually created persistent volumes for these two. Can someone help me map these PVs to the PVCs created by the helm chart? I also created the volumes and assigned them to local storage (a sample PV manifest follows the listing below):
[centos@k8smaster1 prometheus_pv]$ kubectl get pv -n monitoring
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
prometheus-alertmanager   3Gi        RWX            Retain           Available           local-storage            2d19h
prometheus-server         12Gi       RWX            Retain           Available           local-storage            2d19h
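For context, a local PV like the two above would typically be written as follows. This is only a sketch: the path under local: and the worker hostname are assumptions, not values from the question.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
spec:
  capacity:
    storage: 12Gi
  accessModes:
  - ReadWriteMany                        # shows up as RWX in the listing above
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/prometheus-server   # assumed host path
  nodeAffinity:                          # mandatory for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8sworker1                   # assumed worker node name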
Prometheus will try to create PersistentVolumeClaims with accessModes ReadWriteOnce, and a PVC is only matched with a PersistentVolume when the accessModes are the same. Change the accessModes of your PVs to ReadWriteOnce and it should work (one way to do this is sketched after the comments below).

Also, in case you feel this is a duplicate: I have already referred to the other answers, and none of them map to the helm install version of Prometheus.

Hi, please check the storageclass in your cluster:
kubectl get sc
Hi @SureshVishnoi, I checked and it says no resources found. I think mentioning a storage class is not mandatory.

When you create a PV dynamically, your PVC needs to refer to it.

I am guessing the worker node has no PV and the master node has taints.
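Since the PVs listed below are still Available (unbound), one way to apply the accepted suggestion is to delete them and recreate them with the matching access mode. A minimal sketch; the manifest file names are assumptions:

kubectl delete pv prometheus-server prometheus-alertmanager
# in each PV manifest change the access mode, e.g.
#   accessModes:
#   - ReadWriteOnce        # was ReadWriteMany (RWX)
kubectl apply -f prometheus-server-pv.yaml -f prometheus-alertmanager-pv.yaml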
[centos@k8smaster1 ~]$ kubectl get pv
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
prometheus-alertmanager   3Gi        RWX            Retain           Available                                   22m
prometheus-server         12Gi       RWX            Retain           Available                                   30m
[centos@k8smaster1 ~]$ kubectl get pvc -n monitoring
NAME                      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-alertmanager   Pending                                                     20m
prometheus-server         Pending                                                     20m
[centos@k8smaster1 ~]$ kubectl describe pvc prometheus-alertmanager -n monitoring
Name:          prometheus-alertmanager
Namespace:     monitoring
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-8.15.0
               component=alertmanager
               heritage=Tiller
               release=prometheus
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    prometheus-alertmanager-7757d759b8-x6bd7
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ---                  ----                         -------
  Normal  FailedBinding  116s (x83 over 22m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
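The FailedBinding event is the key: the claim has an empty StorageClass, while the PVs were created with storageClassName: local-storage, and a claim without a class is only matched against PVs that also have no class. The chart=prometheus-8.15.0 label suggests the stable/prometheus chart, which exposes the storage class of both claims as values; a sketch, assuming those value names:

helm upgrade prometheus stable/prometheus --namespace monitoring \
  --set alertmanager.persistentVolume.storageClass=local-storage \
  --set server.persistentVolume.storageClass=local-storage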
With the claims now pointed at the local-storage StorageClass, they still stay Pending:

NAME                      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
prometheus-alertmanager   Pending                                      local-storage   4m29s
prometheus-server         Pending                                      local-storage   4m29s
[centos@k8smaster1 prometheus_pv_storage]$ kubectl describe pvc prometheus-server -n monitoring
Name:          prometheus-server
Namespace:     monitoring
StorageClass:  local-storage
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-8.15.0
               component=server
               heritage=Tiller
               release=prometheus
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    prometheus-server-7f8b5fc64b-bqf42
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ---                   ----                         -------
  Normal  WaitForFirstConsumer  11s (x22 over 4m59s)  persistentvolume-controller  waiting for first consumer to be created before binding
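WaitForFirstConsumer means the local-storage StorageClass was created with volumeBindingMode: WaitForFirstConsumer, so the claim deliberately stays Pending until a pod that uses it is scheduled; volume binding and pod scheduling then have to succeed together. For static local PVs this class is usually the standard no-provisioner definition:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer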
[centos@k8smaster1 ~]$ kubectl get pods prometheus-server-7f8b5fc64b-bqf42 -n monitoring -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-08-18T16:10:54Z"
  generateName: prometheus-server-7f8b5fc64b-
  labels:
    app: prometheus
    chart: prometheus-8.15.0
    component: server
    heritage: Tiller
    pod-template-hash: 7f8b5fc64b
    release: prometheus
  name: prometheus-server-7f8b5fc64b-bqf42
  namespace: monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: prometheus-server-7f8b5fc64b
    uid: c1979bcb-c1d2-11e9-819d-fa163ebb8452
  resourceVersion: "2461054"
  selfLink: /api/v1/namespaces/monitoring/pods/prometheus-server-7f8b5fc64b-bqf42
  uid: c19890d1-c1d2-11e9-819d-fa163ebb8452
spec:
  containers:
  - args:
    - --volume-dir=/etc/config
    - --webhook-url=http://127.0.0.1:9090/-/reload
    image: jimmidyson/configmap-reload:v0.2.2
    imagePullPolicy: IfNotPresent
    name: prometheus-server-configmap-reload
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/config
      name: config-volume
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-server-token-7h2df
      readOnly: true
  - args:
    - --storage.tsdb.retention.time=15d
    - --config.file=/etc/config/prometheus.yml
    - --storage.tsdb.path=/data
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --web.console.templates=/etc/prometheus/consoles
    - --web.enable-lifecycle
    image: prom/prometheus:v2.11.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /-/healthy
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: prometheus-server
    ports:
    - containerPort: 9090
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /-/ready
        port: 9090
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/config
      name: config-volume
    - mountPath: /data
      name: storage-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-server-token-7h2df
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 65534
    runAsGroup: 65534
    runAsNonRoot: true
    runAsUser: 65534
  serviceAccount: prometheus-server
  serviceAccountName: prometheus-server
  terminationGracePeriodSeconds: 300
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: prometheus-server
    name: config-volume
  - name: storage-volume
    persistentVolumeClaim:
      claimName: prometheus-server
  - name: prometheus-server-token-7h2df
    secret:
      defaultMode: 420
      secretName: prometheus-server-token-7h2df
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-08-18T16:10:54Z"
    message: '0/2 nodes are available: 1 node(s) didn''t find available persistent
      volumes to bind, 1 node(s) had taints that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
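The PodScheduled message above ties the two problems together: of the two nodes, the worker found no bindable PV (a local PV only binds on the node named in its nodeAffinity), and the master is excluded by a taint the pod does not tolerate. Assuming a kubeadm-style master taint and the node name from the shell prompts (both assumptions), the options are to point the PVs' nodeAffinity at the schedulable worker, or to allow scheduling on the master:

# check which taints the master carries
kubectl describe node k8smaster1 | grep -i taint
# remove the default kubeadm master taint so pods can schedule there
kubectl taint nodes k8smaster1 node-role.kubernetes.io/master-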