RabbitMQ on Kubernetes with NFS mount

Tags: kubernetes, rabbitmq, nfs, chown

I am trying to set up a RabbitMQ cluster in a Kubernetes environment that has NFS PVs. Unfortunately, RabbitMQ seems to want to change the owner of
/var/lib/rabbitmq
but when I mount an NFS directory there, I get this error:

 $ kubectl logs rabbitmq-0 -f
chown: /var/lib/rabbitmq: Operation not permitted
chown: /var/lib/rabbitmq: Operation not permitted

I guess I have two options: fork RabbitMQ, strip the chown out, and build my own image, or get Kubernetes/NFS to play nicely together. I don't want to maintain my own fork, and making Kubernetes/NFS play nicely doesn't really sound like my problem to solve. Any other ideas?
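For option two, one direction I have seen suggested for chown failures on NFS-backed volumes, but have not tried here, is to fix ownership from the pod spec instead of the image, using an init container that runs as root. This is only a hypothetical pod-template fragment; the UID/GID 999 is an assumption about the rabbitmq user inside the image, not something taken from it:

      # Hypothetical fragment for the StatefulSet pod template: chown the
      # NFS-backed volume from an init container running as root before the
      # rabbitmq container starts. Adjust 999:999 to whatever the image uses.
      initContainers:
      - name: fix-rabbitmq-perms
        image: busybox
        command: ["sh", "-c", "chown -R 999:999 /var/lib/rabbitmq"]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: rabbitmq-pv
          mountPath: /var/lib/rabbitmq

Note that this still relies on the export allowing root access (no_root_squash); with root_squash in place, the chown from the init container would fail the same way.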

Here is how I tried to reproduce the issue. I installed the Kubernetes cluster with kubeadm on Red Hat 7; the cluster and node details are below.

Environment details:

[root@master tmp]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.4:6443
KubeDNS is running at https://192.168.56.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master tmp]#

[root@master tmp]# kubectl get no
NAME         STATUS   ROLES    AGE     VERSION
master.k8s   Ready    master   8d      v1.16.2
node1.k8s    Ready    <none>   7d22h   v1.16.3
node2.k8s    Ready    <none>   7d21h   v1.16.3
[root@master tmp]#
yum install nfs-utils nfs-utils-lib    # on the NFS server and clients
yum install portmap                    # on the NFS server and clients
mkdir /nfsroot                         # on the NFS server
[root@master ~]# cat /etc/exports      # on the NFS server
/nfsroot 192.168.56.5/255.255.255.0(rw,sync,no_root_squash)
/nfsroot 192.168.56.6/255.255.255.0(rw,sync,no_root_squash)
exportfs -r                            # on the NFS server
service nfs start                      # on the NFS server and clients
showmount -e                           # on the NFS server and clients
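Before handing the export to Kubernetes, it is worth a quick sanity check from one of the client nodes that the share is reachable and writable (a sketch; /mnt is just a scratch mount point):

mount -t nfs 192.168.56.4:/nfsroot /mnt    # mount the export from the NFS server
touch /mnt/write-test && ls -l /mnt        # confirm the export is writable
umount /mnt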
Now that the NFS setup is ready, let's apply the RabbitMQ setup on Kubernetes.

RABBITMQ K8S SETUP:

The first step is to create persistent volumes that use the NFS export we created in the steps above.

[root@master tmp]# cat /root/rabbitmq-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
 name: rabbitmq-pv-1
spec:
 accessModes:
 - ReadWriteOnce
 - ReadOnlyMany
 nfs:
  server: 192.168.56.4
  path: /nfsroot
 capacity:
  storage: 1Mi
 persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
 name: rabbitmq-pv-2
spec:
 accessModes:
 - ReadWriteOnce
 - ReadOnlyMany
 nfs:
  server: 192.168.56.4
  path: /nfsroot
 capacity:
  storage: 1Mi
 persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
 name: rabbitmq-pv-3
spec:
 accessModes:
 - ReadWriteOnce
 - ReadOnlyMany
 nfs:
  server: 192.168.56.4
  path: /nfsroot
 capacity:
  storage: 1Mi
 persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
 name: rabbitmq-pv-4
spec:
 accessModes:
 - ReadWriteOnce
 - ReadOnlyMany
 nfs:
  server: 192.168.56.4
  path: /nfsroot
 capacity:
  storage: 1Mi
 persistentVolumeReclaimPolicy: Recycle
After applying the above manifest, the PVs were created as shown below:

[root@master ~]# kubectl apply -f rabbitmq-pv.yaml
persistentvolume/rabbitmq-pv-1 created
persistentvolume/rabbitmq-pv-2 created
persistentvolume/rabbitmq-pv-3 created
persistentvolume/rabbitmq-pv-4 created
[root@master ~]# kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
rabbitmq-pv-1   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-2   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-3   1Mi        RWO,ROX        Recycle          Available                                   5s
rabbitmq-pv-4   1Mi        RWO,ROX        Recycle          Available                                   5s
[root@master ~]#
There is no need to create PersistentVolumeClaims manually; they are handled automatically by the volumeClaimTemplates option when the StatefulSet manifest is applied. Now, let's create the secret you mentioned:

[root@master tmp]# kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=c-is-for-cookie-thats-good-enough-for-me
secret/rabbitmq-config created
[root@master tmp]#

[root@master tmp]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-vjsmd   kubernetes.io/service-account-token   3      8d
jp-token-cfdzx        kubernetes.io/service-account-token   3      5d2h
rabbitmq-config       Opaque                                1      39m
[root@master tmp]#
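Optionally, the stored cookie can be decoded straight from the secret to confirm it holds the value every node is expected to share:

kubectl get secret rabbitmq-config -o jsonpath='{.data.erlang-cookie}' | base64 -d; echo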
Now let's apply your RabbitMQ manifest, with every LoadBalancer service type replaced by a NodePort service, since we are not running in a cloud-provider environment. The volume name is also changed to rabbitmq-pv, matching the PVs created in the PV step, and the storage size is reduced from 1Gi to 1Mi because this is only a test demo.

apiVersion: v1
kind: Service
metadata:
  # Expose the management HTTP port on each node
  name: rabbitmq-management
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 15672
    name: http
  selector:
    app: rabbitmq
  sessionAffinity: ClientIP
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  clusterIP: None
  selector:
    app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
  # The required headless service for StatefulSets
  name: rabbitmq-cluster
  labels:
    app: rabbitmq
spec:
  ports:
  - port: 5672
    name: amqp
  - port: 4369
    name: epmd
  - port: 25672
    name: rabbitmq-dist
  type: NodePort
  selector:
    app: rabbitmq
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: "rabbitmq"
  selector:
   matchLabels:
    app: rabbitmq
  replicas: 4
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6.6-management-alpine
        lifecycle:
          postStart:
            exec:
              # On start: add the headless-service domain to resolv.conf, wait
              # for the local node to be healthy, join rabbit@rabbitmq-0 if this
              # pod is not rabbitmq-0 and not yet clustered, then set the ha-all
              # mirroring policy.
              command:
              - /bin/sh
              - -c
              - >
                if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
                  sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
                  cat /etc/resolv.conf.new > /etc/resolv.conf;
                  rm /etc/resolv.conf.new;
                fi;
                until rabbitmqctl node_health_check; do sleep 1; done;
                if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
                  rabbitmqctl stop_app;
                  rabbitmqctl join_cluster rabbit@rabbitmq-0;
                  rabbitmqctl start_app;
                fi;
                rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
        env:
        - name: RABBITMQ_ERLANG_COOKIE
          valueFrom:
            secretKeyRef:
              name: rabbitmq-config
              key: erlang-cookie
        ports:
        - containerPort: 5672
          name: amqp
        - containerPort: 25672
          name: rabbitmq-dist
        volumeMounts:
        - name: rabbitmq-pv
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-pv
      annotations:
        volume.alpha.kubernetes.io/storage-class: default
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Mi # make this bigger in production
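For reference, for pod rabbitmq-0 the volumeClaimTemplates block above makes the StatefulSet controller create a claim roughly equivalent to the following (a sketch; labels and annotations added by the controller are omitted):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-pv-rabbitmq-0   # <claim template name>-<pod name>
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Mi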
After applying the manifest, you can see the StatefulSet and its pods being created:

[root@master tmp]# kubectl apply -f rabbitmq.yaml
service/rabbitmq-management created
service/rabbitmq created
service/rabbitmq-cluster created
statefulset.apps/rabbitmq created

[root@master tmp]#
NAME                         READY   STATUS                       RESTARTS   AGE
rabbitmq-0                   1/1     Running                      0          18m
rabbitmq-1                   1/1     Running                      0          17m
rabbitmq-2                   1/1     Running                      0          13m
rabbitmq-3                   1/1     Running                      0          13m

[root@master ~]# kubectl get pvc
NAME                     STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rabbitmq-pv-rabbitmq-0   Bound    rabbitmq-pv-1   1Mi        RWO,ROX                       49m
rabbitmq-pv-rabbitmq-1   Bound    rabbitmq-pv-3   1Mi        RWO,ROX                       48m
rabbitmq-pv-rabbitmq-2   Bound    rabbitmq-pv-2   1Mi        RWO,ROX                       44m
rabbitmq-pv-rabbitmq-3   Bound    rabbitmq-pv-4   1Mi        RWO,ROX                       43m

[root@master ~]# kubectl get svc
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                         AGE
rabbitmq              ClusterIP   None             <none>        5672/TCP,4369/TCP,25672/TCP                     49m
rabbitmq-cluster      NodePort    10.102.250.172   <none>        5672:30574/TCP,4369:31757/TCP,25672:31854/TCP   49m
rabbitmq-management   NodePort    10.108.131.46    <none>        15672:31716/TCP                                 49m
[root@master ~]#
Now I opened the RabbitMQ management page through the NodePort service and was able to reach the login page.
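For example, with the NodePort shown above (15672:31716) and assuming 192.168.56.5 is one of the worker nodes, a quick check from outside the cluster looks like this:

curl -s -o /dev/null -w "%{http_code}\n" http://192.168.56.5:31716/   # expect 200 from the management UI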


So please let me know whether you still face the chown issue after trying the approach above, so we can dig into it further, for example by checking whether any PodSecurityPolicies are being applied.
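If the chown error still appears on your side, a rough sketch of where to look next:

kubectl get psp                                                          # any PodSecurityPolicies defined?
kubectl get pod rabbitmq-0 -o jsonpath='{.spec.securityContext}'; echo   # pod-level securityContext, if any
kubectl describe statefulset rabbitmq | grep -i -A3 security             # anything injected into the spec?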
