Kubernetes: how to run kubectl inside a Job in a namespace?


Hi, I have seen that kubectl can be run inside a pod in the default namespace. Is it possible to run kubectl inside a Job resource in a specified namespace? I haven't found any documentation or examples for this.

When I try to add a serviceAccount to the container, I get this error:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This happens when I exec into the container and run kubectl.

Edit:

As I mentioned above, here is the yaml where I added the service account following the documentation:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl  
  namespace: my-namespace   
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - delete      
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io      
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
         - "bin/bash"
         - "-c"
         - "kubectl get pods"
      restartPolicy: Never 
When I run the Job, I get the following error:

Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"

Create the service account like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl

Create a ClusterRoleBinding with this manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
  namespace: default

Now create the pod using the same configuration given in the documentation.
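Applying the two snippets above and smoke-testing the account could look like the following sketch (the file names and the `kubectl run` invocation are illustrative, not from the original answer):

```shell
# Apply the ServiceAccount and the ClusterRoleBinding (file names are placeholders)
kubectl apply -f service-account.yaml
kubectl apply -f cluster-role-binding.yaml

# Launch a throwaway pod that uses the service account and runs kubectl once
kubectl run kubectl-test --rm -it --restart=Never \
  --image=bitnami/kubectl:1.17.3 \
  --overrides='{"spec":{"serviceAccountName":"internal-kubectl"}}' \
  --command -- kubectl get pods
```

These commands require a running cluster and an admin kubeconfig.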

When you use kubectl from a pod for any operation, such as getting pods or creating roles and role bindings, it uses the default service account. By default, this service account has no permissions to perform those operations. So you need to:

  • Create the service account, role, and role binding from a more privileged account. You should have a kubeconfig file with admin or admin-like privileges. Use that kubeconfig with kubectl from outside the pod to create the service account, role, role binding, etc.

  • Once that is done, create the pod with the service account specified. You should then be able to use kubectl inside that pod, with that service account, to perform the operations defined in the role.
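Before launching the Job, it can be worth checking that the binding actually grants what you expect. `kubectl auth can-i` supports impersonating a service account (the names below are the ones used in this question):

```shell
# Ask the API server whether the service account is allowed to list pods
# in my-namespace; prints "yes" or "no".
kubectl auth can-i list pods -n my-namespace \
  --as=system:serviceaccount:my-namespace:internal-kubectl
```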


  • Is it possible to run kubectl inside a Job resource in a specified namespace? I haven't found any documentation or examples for this.

    A Job creates one or more pods and ensures that a specified number of them terminate successfully. Permission-wise this is the same as a regular pod, which means that yes, it is possible to run kubectl inside a Job resource.

    TL;DR:

    $ cat job-kubectl.yaml 
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: testing-stuff
      namespace: my-namespace
    spec:
      template:
        metadata:
          name: testing-stuff
        spec:
          serviceAccountName: internal-kubectl
          containers:
          - name: tester
            image: bitnami/kubectl:1.17.3
            command:
             - "/bin/bash"
             - "-c"
             - "kubectl get pods -n my-namespace"
          restartPolicy: Never 
    
    $ cat job-svc-account.yaml 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: internal-kubectl  
      namespace: my-namespace   
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: modify-pods
      namespace: my-namespace
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "delete"]      
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: modify-pods-to-sa
      namespace: my-namespace
    subjects:
      - kind: ServiceAccount
        name: internal-kubectl
        namespace: my-namespace
    roleRef:
      kind: Role
      name: modify-pods
      apiGroup: rbac.authorization.k8s.io
    
    • Your yaml files are correct; there may be something else going on in the cluster. I suggest deleting and recreating these resources and trying again.
    • Also check the version of your Kubernetes installation against the kubectl version in the job image: they should not differ by more than one minor version.
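A quick way to compare the client and server versions is `kubectl version`; run it once from the job image against your cluster:

```shell
# Prints the client (kubectl) and server (API) versions; per the Kubernetes
# version-skew policy they should be within one minor version of each other.
kubectl version --short
```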

    Security considerations:

    • Scoping the job's role (a specific role, for a specific account, in a specific namespace) is the best practice.
    • If you use a ClusterRoleBinding with the cluster-admin role it will work, but it grants far more permission than needed and is not recommended, since it gives full administrative control over the entire cluster.

    Test environment:

    • I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. Both worked.
    • To avoid incompatibilities, use a kubectl version that matches your server version.

    Reproduction:

    • I created two extra pods just so the output of kubectl get pods would show up in the logs.
    • Then I applied the Job, ServiceAccount, Role, and RoleBinding.
    • Now let's check the job's logs to see whether it recorded the command output:

    As you can see, the job ran successfully with the custom serviceAccount.
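To read the job's output yourself, something like the following works (resource names are taken from the manifests above):

```shell
# Wait for the job to complete, then fetch the logs of the pod it created
kubectl wait --for=condition=complete job/testing-stuff -n my-namespace --timeout=60s
kubectl logs job/testing-stuff -n my-namespace
```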


    Let me know if you have any further questions about this case.

    Comments:

    • Your pod only has permissions in the default namespace. Create a service account with cluster-admin permissions and try again.
    • @VipinMenon If the provided answer doesn't solve your problem, please post your Job manifest and I can demonstrate how to correct it.
    • @willrof I have edited the question to add the yaml files used.
    • @VipinMenon Thanks for the information. I will reproduce your scenario and give you an answer by Monday.
    • @VipinMenon I managed to verify your environment; please check my answer below, with a full reproduction and documentation references. If it was helpful, please also consider upvoting it.
    • It seems there were some typos when I created the manifests earlier.