Kubernetes CronJob on OpenShift not running pods

I am trying to schedule a CronJob that runs a kubectl command, but the CronJob never starts a pod. This is my job:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl get pods;echo 'DDD'
          restartPolicy: OnFailure 
I create the CronJob on OpenShift with:

oc create -f .\cron.yaml
and get the following result:

PS C:\Users\mymachine> oc create -f .\cron.yaml
cronjob.batch/mariadump created
PS C:\Users\mymachine> oc get cronjob -w
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mariadump   */1 * * * *   False     0        <none>          22s
mariadump   */1 * * * *   False     1        10s             40s
mariadump   */1 * * * *   False     0        20s             50s
PS C:\Users\mymachine> oc get pods -w
NAME                         READY   STATUS       RESTARTS   AGE
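
For reference, when the CronJob shows ACTIVE but no pod ever appears, the Job controller usually records the reason as an event. A minimal check (a sketch, assuming the namespace and name from the manifest above):

oc describe cronjob mariadump -n my-namespace
oc get events -n my-namespace --sort-by=.lastTimestamp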

Without the ServiceAccount it works as expected, apart from the missing permission:

PS C:\Users\myuser> oc get cronjob -w
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mariadump   */1 * * * *   False     0        <none>          8s
mariadump   */1 * * * *   False     1        3s              61s
PS C:\Users\myuser> oc get pods -w
NAME                         READY   STATUS             RESTARTS   AGE
mariadump-1616089500-mnfxs   0/1     CrashLoopBackOff   1          8s

PS C:\Users\myuser> oc logs mariadump-1616089500-mnfxs
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:default" cannot list resource "pods" in API group "" in the namespace "my-namespace"
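The Forbidden message shows the pod is running under the default ServiceAccount, which is not allowed to list pods. Whether a given ServiceAccount has a permission can be checked by impersonating it (a sketch; it assumes your user is allowed to impersonate service accounts):

oc auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:default
oc auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:mariadbdumpsa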
Can anyone help me understand why the CronJob with the ServiceAccount does not work?


Thanks

With this YAML it finally started working:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my-namespace
  name: mariadbdump
rules:
  - apiGroups:
      - ""
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
    verbs:
      - 'watch'
      - 'get'
      - 'create'
      - 'list'
      
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mariadbdump
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: mariadbdumpsa
    namespace: my-namespace
roleRef:
  kind: Role
  name: mariadbdump
  apiGroup: ""
  
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mariadbdumpsa
  namespace: my-namespace
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mariadump
  namespace: my-namespace
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: mariadbdumpsa
          containers:
          - name: kubectl
            image: garland/kubectl:1.10.4
            command:
            - /bin/sh
            - -c
            - kubectl exec $(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}') -- /opt/rh/rh-mariadb102/root/usr/bin/mysqldump --skip-lock-tables -h 127.0.0.1 -P 3306 -u userdb --password=userdbpass databasename >/tmp/backup.sql;kubectl cp my-namespace/$(kubectl get pods | grep Running | grep 'mariadbdump' | awk '{print $1}'):/tmp/backup.sql my-namespace/$(kubectl get pods | grep Running | grep 'mariadb' | awk '{print $1}'):/tmp/backup.sql;echo 'Backup done'
          restartPolicy: OnFailure
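
To verify the RBAC and trigger a run without waiting for the schedule, something like this should work (a sketch; manual-test is just an arbitrary Job name):

oc auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:mariadbdumpsa
kubectl create job manual-test --from=cronjob/mariadump -n my-namespace
kubectl logs -n my-namespace -l job-name=manual-test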

Did you create the Role, the ServiceAccount and the RoleBinding when you posted this? There are a few typos in your RBAC and in the CronJob (the ServiceAccount name differs between the CronJob and the RBAC). Another possibility is that the container exits too quickly and Kubernetes considers it failed; add sleep 30 to the command and let me know if you still see the CrashLoop.

Sorry, I pasted the wrong ServiceAccount name here, but I ran it with the correct one. I added sleep 30 and the same thing still happens, no pod is created :(

Can you check/share the output of kubectl describe cronjob -n my_namespace mariadump?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: my_namespace
  name: mariadbdump
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - replicasets
  verbs:
  - 'patch'
  - 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: mariadbdump
  namespace: my_namespace
subjects:
- kind: ServiceAccount
  name: mariadbdumpsa
  namespace: my_namespace
roleRef:
  kind: Role
  name: mariadbdump
  apiGroup: ""
  
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mariadbdumpsa
  namespace: my_namespace
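
The Role shared here only grants patch/get on deployments and replicasets in the extensions/apps groups, so kubectl get pods is still forbidden; pods and pods/exec live in the core "" API group, which is what the working YAML above grants. An equivalent imperative sketch (names assumed to match the manifests above):

kubectl create role mariadbdump -n my-namespace --verb=get,list,watch,create --resource=pods,pods/exec
kubectl create rolebinding mariadbdump -n my-namespace --role=mariadbdump --serviceaccount=my-namespace:mariadbdumpsa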
