Kubernetes pod service account not using the defined PSP profile


I have defined the following ServiceAccount, Role, and RoleBinding:

    # role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: concourse-worker
      namespace: k8s-01
    rules:
    - apiGroups: ['policy']
      resources: ['podsecuritypolicies']
      verbs: ['use']
      resourceNames:
        - wcp-privileged-psp
    ---
    # role-binding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: concourse-worker
      namespace: k8s-01
      labels:
        app: concourse-worker
        release: concourse
    subjects:
    - kind: ServiceAccount
      name: concourse-worker
      namespace: k8s-01
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: concourse-worker
    ---
    # service-account.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: concourse-worker
      namespace: k8s-01
      labels:
        app: concourse-worker
        release: concourse
To confirm that the service account can use the wcp-privileged-psp policy, I ran the following:

» kubectl --as=system:serviceaccount:k8s-01:concourse-worker auth can-i use podsecuritypolicy/wcp-privileged-psp
Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
yes
And deployed my StatefulSet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: concourse-worker
  labels:
    app: concourse-worker
    release: "concourse"
spec:
  serviceName: concourse-worker
  replicas: 2
  selector:
    matchLabels:
      app: concourse-worker
      release: concourse
  template:
    metadata:
      labels:
        app: concourse-worker
        release: "concourse"
        tier: middletier
    spec:
      serviceAccountName: concourse-worker
      terminationGracePeriodSeconds: 60
      initContainers:
        - name: concourse-worker-init-rm
          image: "x.x.x.x/k8s-01/concourse:6.0.0"
          imagePullPolicy: "IfNotPresent"
          securityContext:
           privileged: true
          command:
            - /bin/bash
          args:
            - -ce
            - |-
              for v in $((btrfs subvolume list --sort=-ogen "/concourse-work-dir" || true) | awk '{print $9}'); do
                (btrfs subvolume show "/concourse-work-dir/$v" && btrfs subvolume delete "/concourse-work-dir/$v") || true
              done
              rm -rf /concourse-work-dir/*
          volumeMounts:
            - name: concourse-work-dir
              mountPath: "/concourse-work-dir"
      containers:
        - name: concourse-worker
          image: "x.x.x.x/k8s-01/concourse:6.0.0"
          imagePullPolicy: "IfNotPresent"
          args:
            - worker
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /
              port: worker-hc
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 3
          lifecycle:
            preStop:
              exec:
                command:
                  - "/bin/bash"
                  - "/pre-stop-hook.sh"
          env:
            - name: CONCOURSE_SWEEP_INTERVAL
              value: "30s"
            - name: CONCOURSE_CONNECTION_DRAIN_TIMEOUT
              value: "1h"
            - name: CONCOURSE_HEALTHCHECK_BIND_IP
              value: "0.0.0.0"
            - name: CONCOURSE_HEALTHCHECK_BIND_PORT
              value: "8888"
            - name: CONCOURSE_HEALTHCHECK_TIMEOUT
              value: "5s"
            - name: CONCOURSE_DEBUG_BIND_IP
              value: "127.0.0.1"
            - name: CONCOURSE_DEBUG_BIND_PORT
              value: "7776"
            - name: CONCOURSE_WORK_DIR
              value: "/concourse-work-dir"
            - name: CONCOURSE_BIND_IP
              value: "127.0.0.1"
            - name: CONCOURSE_BIND_PORT
              value: "7777"
            - name: CONCOURSE_LOG_LEVEL
              value: "debug"
            - name: CONCOURSE_TSA_HOST
              value: "concourse-web:2222"
            - name: CONCOURSE_TSA_PUBLIC_KEY
              value: "/concourse-keys/host_key.pub"
            - name: CONCOURSE_TSA_WORKER_PRIVATE_KEY
              value: "/concourse-keys/worker_key"
            - name: CONCOURSE_BAGGAGECLAIM_LOG_LEVEL
              value: "info"
            - name: CONCOURSE_BAGGAGECLAIM_BIND_IP
              value: "127.0.0.1"
            - name: CONCOURSE_BAGGAGECLAIM_BIND_PORT
              value: "7788"
            - name: CONCOURSE_BAGGAGECLAIM_DEBUG_BIND_IP
              value: "127.0.0.1"
            - name: CONCOURSE_BAGGAGECLAIM_DEBUG_BIND_PORT
              value: "7787"
            - name: CONCOURSE_BAGGAGECLAIM_DRIVER
              value: "naive"
            - name: CONCOURSE_BAGGAGECLAIM_BTRFS_BIN
              value: "btrfs"
            - name: CONCOURSE_BAGGAGECLAIM_MKFS_BIN
              value: "mkfs.btrfs"
            - name: CONCOURSE_VOLUME_SWEEPER_MAX_IN_FLIGHT
              value: "5"
            - name: CONCOURSE_CONTAINER_SWEEPER_MAX_IN_FLIGHT
              value: "5"
          ports:
            - name: worker-hc
              containerPort: 8888
          resources:
            requests:
              cpu: 100m
              memory: 512Mi
          securityContext:
            privileged: true
          volumeMounts:
            - name: concourse-keys
              mountPath: "/concourse-keys"
              readOnly: true
            - name: concourse-work-dir
              mountPath: "/concourse-work-dir"
            - name: pre-stop-hook
              mountPath: /pre-stop-hook.sh
              subPath: pre-stop-hook.sh
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: concourse-worker
                  release: "concourse"
      volumes:
        - name: pre-stop-hook
          configMap:
            name: concourse-worker
        - name: concourse-keys
          secret:
            secretName: concourse-worker
            defaultMode: 0400
            items:
              - key: host-key-pub
                path: host_key.pub
              - key: worker-key
                path: worker_key
        - name: concourse-work-dir
          emptyDir:
            sizeLimit: 20Gi
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
However, once it is deployed and I check which PSP was applied, it does not show "wcp-privileged-psp" as it should, but the default "wcp-default-psp" instead.

As described in the documentation, policies are selected in this order:

  • PodSecurityPolicies which allow the pod as-is, without changing defaults or mutating the pod, are preferred. The order of these non-mutating PodSecurityPolicies doesn't matter.
  • If the pod must be defaulted or mutated, the first PodSecurityPolicy (ordered by name) to allow the pod is selected.
  • In this case, wcp-default-psp comes before wcp-privileged-psp in name order.
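As a side note (not from the original post): when the admission controller has to default or mutate the pod, it picks the first allowing policy by plain lexicographic name order, which a simple string sort reproduces:

```python
# Model the PSP tie-break: when the pod must be defaulted/mutated,
# the first allowing PodSecurityPolicy sorted by name is selected.
psps = ["wcp-privileged-psp", "wcp-default-psp"]
winner = sorted(psps)[0]
print(winner)  # wcp-default-psp — "d" sorts before "p"
```

So even though the service account is granted `use` on wcp-privileged-psp, wcp-default-psp wins the sort whenever both policies are candidates for defaulting/mutation.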

     » kubectl describe pods concourse-worker-0
    Name:         concourse-worker-0
    Namespace:    k8s-01
    Priority:     0
    Node:         x.x.x.x
    Start Time:   Sun, 26 Apr 2020 15:51:15 -0400
    Labels:       app=concourse-worker
                  controller-revision-hash=concourse-worker-6847cb88c5
                  release=concourse
                  statefulset.kubernetes.io/pod-name=concourse-worker-0
                  tier=middletier
    Annotations:  
                  kubernetes.io/psp: **wcp-default-psp**
                  mac: xxx
                  vlan: None