
Amazon Web Services: kube-controller-manager does not start when using "cloud-provider=aws" with kubeadm


I am trying to integrate Kubernetes with AWS, but kube-controller-manager does not start. By the way: without the AWS option everything works perfectly.

Here is what I did:

-1-

ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/aws.conf

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: aws
kubernetesVersion: 1.10.3
-2-

ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/cloud-config.conf

[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
Based on examples I found, I tried many combinations here, including aws_access_key_id and aws_secret_access_key, omitting the .conf extension, or removing this file entirely, but nothing worked.
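For reference, a sketch of a fuller [Global] section that the in-tree AWS provider of that era accepted is below. The field names come from the provider's config struct; the Zone/VPC/SubnetID values are hypothetical placeholders, and credentials are normally taken from the instance's IAM role rather than from keys in this file, which may be why the key-based attempts above had no effect:

[Global]
# Cluster identity; must match the tags on the AWS resources
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
# Optional placement hints (placeholder values)
Zone=us-east-1a
VPC=vpc-0abc1234
SubnetID=subnet-0abc1234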

-3-

ubuntu@ip-172-31-17-233:~$ sudo kubeadm init --config /etc/kubernetes/aws.conf

[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[init] WARNING: For cloudprovider integrations to work --cloud-provider must be set for all kubelets in the cluster.
        (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf should be edited for this purpose)
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-172-31-17-233 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.17.233]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-172-31-17-233] and IPs [172.31.17.233]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.001348 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-172-31-17-233 as master by adding a label and a taint
[markmaster] Master ip-172-31-17-233 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: x8hi0b.uxjr40j9gysc7lcp
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.17.233:6443 --token x8hi0b.uxjr40j9gysc7lcp --discovery-token-ca-cert-hash sha256:8ad9dfbcacaeba5bc3242c811b1e83c647e2e88f98b0d783875c2053f7a40f44
-4-

-5-

ubuntu@ip-172-31-17-233:~$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-17-233                      1/1       Running            0          40s
kube-system   kube-apiserver-ip-172-31-17-233            1/1       Running            0          45s
kube-system   kube-controller-manager-ip-172-31-17-233   0/1       CrashLoopBackOff   3          1m
kube-system   kube-scheduler-ip-172-31-17-233            1/1       Running            0          35s
kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas? I am new to Kubernetes and I don't know what else I can do.

Thanks, Michael


Check the following points as potential issues:

The kubelet has the correct cloud provider set: check /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf, which should contain:

Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf
If it does not, add it and restart the kubelet service.
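To apply that change, something like the following should do (standard systemd workflow; the reload is needed so the edited drop-in is picked up before the restart):

# Re-read unit files, then restart the kubelet with the new extra args
sudo systemctl daemon-reload
sudo systemctl restart kubelet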

In /etc/kubernetes/manifests/, check that the following files have the correct configuration:

kube-controller-manager.yaml and kube-apiserver.yaml:

If they do not, just add the flags (see the excerpt below) and the pods will be restarted automatically.
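The answer leaves the exact flags implicit; given the setup in the question, the relevant part of the manifests would plausibly look like this excerpt (illustrative, not a complete manifest):

# Illustrative excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml;
# kube-apiserver.yaml takes the same two flags in its own command list
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
    - --cloud-config=/etc/kubernetes/cloud-config.conf
    # ...remaining kubeadm-generated flags unchanged...

If --cloud-config is used, the file also has to be readable from inside the container, e.g. via a hostPath volume mount, which kubeadm normally sets up when cloudProvider is given in the MasterConfiguration.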

Just in case, check that the AWS resources (EC2 instances, etc.) are tagged with the kubernetes tag (as taken from cloud-config.conf) and that the IAM policies are set correctly. If you can provide the logs, as Artem requested in the comments, that could clarify the issue further.
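For reference, the exact commands Artem asked for in the comments:

# Container logs of the crashing pod
kubectl logs kube-controller-manager-ip-172-31-17-233 -n kube-system
# Pod events and status details
kubectl describe pod kube-controller-manager-ip-172-31-17-233 -n kube-system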

EDIT: As requested in the comments, a brief overview of IAM policy handling:

Create a new IAM policy (or edit it appropriately, if one is already created), e.g. k8s-default-policy. Given below is a quite liberal policy; you can tighten it to match your security preferences. Note the load balancer section in your case. In the description, put something along the lines of "Allows EC2 instances to call AWS services on your behalf." or similar.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    }  ]
} 
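If you script this rather than using the console, creating the policy could look like the following (file and policy names are placeholders):

# Save the JSON above as k8s-default-policy.json, then create the policy
aws iam create-policy \
    --policy-name k8s-default-policy \
    --policy-document file://k8s-default-policy.json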
Create a new role (or edit it appropriately, if one is already created) and attach the previous policy to it, e.g. attach k8s-default-policy to k8s-default-role.

Attach the role to the instances that are to manage AWS resources. You can create different roles for the master and the workers if needed. EC2 -> Instances -> select instance -> Actions -> Instance Settings -> Attach/Replace IAM Role -> select the appropriate role.
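A CLI sketch of the same steps, with placeholder names and a placeholder account ID; note that EC2 attaches roles through an instance profile, hence the last two commands:

# Role that EC2 instances can assume (ec2-trust.json holds a standard trust
# policy whose principal is the "ec2.amazonaws.com" service)
aws iam create-role \
    --role-name k8s-default-role \
    --assume-role-policy-document file://ec2-trust.json
# Attach the policy created above
aws iam attach-role-policy \
    --role-name k8s-default-role \
    --policy-arn arn:aws:iam::123456789012:policy/k8s-default-policy
# EC2 needs the role wrapped in an instance profile
aws iam create-instance-profile --instance-profile-name k8s-default-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name k8s-default-profile \
    --role-name k8s-default-role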

And besides that, check that all the resources involved are tagged with the kubernetes tag.
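Answering the tagging question from the comments in CLI form: for the in-tree provider of that era, the legacy tag key was plausibly KubernetesCluster, with the cluster ID from cloud-config.conf as the value (the instance ID below is a placeholder):

# Tag an instance so the AWS cloud provider can associate it with the cluster
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=KubernetesCluster,Value=kubernetes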


Comments:

Can you provide the logs of the kube-controller-manager pod? 1. kubectl logs kube-controller-manager-ip-172-31-17-233 -n kube-system; 2. kubectl describe pod kube-controller-manager-ip-172-31-17-233 -n kube-system

Running into exactly the same problem here. How should I configure the IAM policies? I currently just ssh into the AWS instances and create the cluster via kubeadm. Everything works fine except load balancers.

Edited the answer to incorporate the IAM policy question.

Thanks! I did set up those policies, but the error is still there. About tagging the instances: I have kubernetes in the cloud-config.conf file, so do I just go to the AWS account and add a tag kubernetes with the value owned? Is that how instances are tagged?

Please start a new thread here with more log information.