Failed to create a Kubernetes cluster with kops

I am trying to create a very simple cluster on AWS with kops: one master node and two worker nodes. But after creation, kops validate cluster complains that the cluster is not healthy.

The cluster was created with:

kops create cluster --name=mycluster --zones=ap-south-1a --master-size="t2.micro" --node-size="t2.micro" --node-count="2" --cloud aws --ssh-public-key="~/.ssh/id_rsa.pub"
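
For reference, a rough sketch of the validation step mentioned above; the kops state store is not shown in the question, so it is assumed to be configured via the usual environment variable:

# Assumes the state store is already exported, e.g.:
# export KOPS_STATE_STORE=s3://<your-kops-state-bucket>
kops validate cluster --name=mycluster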
Listing the resources in the kube-system namespace shows:
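
The exact command is not shown in the question; the listing below is consistent with a plain "get all" against the namespace, e.g.:

kubectl -n kube-system get all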

NAME                                                                       READY   STATUS             RESTARTS   AGE
pod/dns-controller-8d8889c4b-rwnkd                                         1/1     Running            0          47m
pod/etcd-manager-events-ip-xxx-xxx-xxx-xxx..ap-south-1.compute.internal       1/1     Running            0          72m
pod/etcd-manager-main-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal         1/1     Running            0          72m
pod/kops-controller-xxxtk                                                  1/1     Running            11         70m
pod/kube-apiserver-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal            2/2     Running            1          72m
pod/kube-controller-manager-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal   0/1     CrashLoopBackOff   15         72m
pod/kube-dns-696cb84c7-qzqf2                                               3/3     Running            0          16h
pod/kube-dns-696cb84c7-tt7ng                                               3/3     Running            0          16h
pod/kube-dns-autoscaler-55f8f75459-7jbjb                                   1/1     Running            0          16h
pod/kube-proxy-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal                1/1     Running            0          16h
pod/kube-proxy-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal                1/1     Running            0          72m
pod/kube-proxy-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal                1/1     Running            0          16h
pod/kube-scheduler-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal            1/1     Running            15         72m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   100.64.0.10   <none>        53/UDP,53/TCP   16h

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                      AGE
daemonset.apps/kops-controller   1         1         1       1            1           kops.k8s.io/kops-controller-pki=,node-role.kubernetes.io/master=   16h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dns-controller        1/1     1            1           16h
deployment.apps/kube-dns              2/2     2            2           16h
deployment.apps/kube-dns-autoscaler   1/1     1            1           16h

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/dns-controller-8d8889c4b         1         1         1       16h
replicaset.apps/kube-dns-696cb84c7               2         2         2       16h
replicaset.apps/kube-dns-autoscaler-55f8f75459   1         1         1       16h


I don't see anything particularly wrong with the command you are running. However, t2.micro is very small, probably too small for the cluster to work properly.

You can look at the kops operator logs to find out why it isn't starting. Try:
kubectl logs kops-controller-xxxx
kubectl describe pod kops-controller-xxx
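
A small note on the commands above: the kops-controller pod sits in the kube-system namespace (pod/kops-controller-xxxtk in the listing), so in practice the namespace has to be passed, for example:

# Same commands with the namespace made explicit; the pod name suffix is cluster-specific.
kubectl -n kube-system logs kops-controller-xxxtk
kubectl -n kube-system describe pod kops-controller-xxxtk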

You know, after @Markus's comment and yours I started digging deeper into this, and here is what I found.

The first article: an example with t2.medium instances, with a very detailed walkthrough and timeline describing what happens there.

Its conclusion:

We have demonstrated the unpredictability of deploying on Kubernetes clusters that are not suited to the T2/T3 instance families. There is a chance of instances being throttled because pods consume a large amount of resources. At best this will limit the performance of your applications, and at worst it can lead to cluster failure (if the masters are T2/T3s) due to etcd issues. Furthermore, the situation would only be detected by watching CloudWatch closely or by running application performance monitoring on the pods.

For this reason it is recommended to avoid the T2/T3 instance type families for Kubernetes deployments. If you are looking to save costs compared with the more traditional instance families (such as the M and R series), then take a look at our blog post on Spot Instances.

Besides that, the official numbers:

1) t2.micro specs: a t2.micro has 1 vCPU and 1 GB of memory.

2) The minimum memory and CPU (cores) Kubernetes typically needs:

  • The master node needs at least 2 GB of memory, a worker node at least 1 GB

  • The master node needs at least 1.5 cores, a worker node at least 0.7 cores

Not enough resources. Use at least a t2.medium for the master.
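
As an illustration only (the thread does not show the exact command the asker ended up using), bumping the master size while keeping the rest of the flags from the question would look roughly like:

# Same command as in the question, with only --master-size raised to t2.medium
kops create cluster --name=mycluster --zones=ap-south-1a --master-size="t2.medium" --node-size="t2.micro" --node-count="2" --cloud aws --ssh-public-key="~/.ssh/id_rsa.pub"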


Have you tried other zones and regions instead of ap-south?

@Vitalii No, I haven't. Let me try and see whether it works in other regions. But I don't think it is related to the region. Most likely the problem is the instance type I picked, and that I only created a single master node. I will try other regions as well and report back.

I tried setting the master node type to t2.medium and validation no longer fails as it did before (see the sketch after these comments). Probably a t2.micro instance is just too small to run the master. I have seen people in many places build a small cluster with t2.micro, so I am still not sure why only I am hitting this error.

Markus, I just want a test cluster and don't need to provision medium instances, but maybe you are right. Let me check the logs and try provisioning with a slightly larger node size. Here is the link to the logs: . I don't understand what is going on here. Just wondering whether it has g
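
For reference only, a rough sketch of resizing the master of an existing kops cluster; the instance-group name is assumed to follow kops's default master-<zone> naming and is not shown in the thread:

# 1. Open the master instance group and change machineType to t2.medium
kops edit instancegroup master-ap-south-1a --name=mycluster
# 2. Apply the change and roll the master so it is recreated on the new size
kops update cluster --name=mycluster --yes
kops rolling-update cluster --name=mycluster --yes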

kube-scheduler log (excerpt):

I0211 04:26:45.546427       1 flags.go:59] FLAG: --vmodule=""
I0211 04:26:45.546442       1 flags.go:59] FLAG: --write-config-to=""
I0211 04:26:46.306497       1 serving.go:331] Generated self-signed cert in-memory
W0211 04:26:47.736258       1 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0211 04:26:47.765649       1 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0211 04:26:47.783852       1 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0211 04:26:47.798838       1 authorization.go:187] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0211 04:26:47.831825       1 authorization.go:156] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0211 04:26:55.344064       1 factory.go:210] Creating scheduler from algorithm provider 'DefaultProvider'
I0211 04:26:55.370766       1 registry.go:173] Registering SelectorSpread plugin
I0211 04:26:55.370802       1 registry.go:173] Registering SelectorSpread plugin
I0211 04:26:55.504324       1 server.go:146] Starting Kubernetes Scheduler version v1.19.7
W0211 04:26:55.607516       1 authorization.go:47] Authorization is disabled
W0211 04:26:55.607537       1 authentication.go:40] Authentication is disabled
I0211 04:26:55.618714       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0211 04:26:55.741863       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1613017606" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer="localhost-ca@1613017605" (2021-02-11 03:26:45 +0000 UTC to 2022-02-11 03:26:45 +0000 UTC (now=2021-02-11 04:26:55.741788572 +0000 UTC))
I0211 04:26:55.746888       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1613017607" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1613017607" (2021-02-11 03:26:46 +0000 UTC to 2022-02-11 03:26:46 +0000 UTC (now=2021-02-11 04:26:55.7468713 +0000 UTC))
I0211 04:26:55.757881       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0211 04:26:55.771581       1 secure_serving.go:197] Serving securely on [::]:10259
I0211 04:26:55.793134       1 reflector.go:207] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.815641       1 reflector.go:207] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.841309       1 reflector.go:207] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.857460       1 reflector.go:207] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.875096       1 reflector.go:207] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.894283       1 reflector.go:207] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.894615       1 reflector.go:207] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.895000       1 reflector.go:207] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.895250       1 reflector.go:207] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.902323       1 reflector.go:207] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.902572       1 reflector.go:207] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0211 04:26:55.905927       1 reflector.go:207] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188
I0211 04:26:56.355570       1 node_tree.go:86] Added node "ip-172-20-43-190.ap-south-1.compute.internal" in group "ap-south-1:\x00:ap-south-1a" to NodeTree
I0211 04:26:56.357441       1 node_tree.go:86] Added node "ip-172-20-63-116.ap-south-1.compute.internal" in group "ap-south-1:\x00:ap-south-1a" to NodeTree
I0211 04:26:56.357578       1 node_tree.go:86] Added node "ip-172-20-60-103.ap-south-1.compute.internal" in group "ap-south-1:\x00:ap-south-1a" to NodeTree
I0211 04:26:56.377402       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-scheduler...
I0211 04:27:12.368681       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
I0211 04:27:12.436915       1 scheduler.go:597] "Successfully bound pod to node" pod="default/nginx-deployment-66b6c48dd5-w4hb5" node="ip-172-20-63-116.ap-south-1.compute.internal" evaluatedNodes=3 feasibleNodes=2
I0211 04:27:12.451792       1 scheduler.go:597] "Successfully bound pod to node" pod="default/nginx-deployment-66b6c48dd5-4xz8l" node="ip-172-20-43-190.ap-south-1.compute.internal" evaluatedNodes=3 feasibleNodes=2
E0211 04:32:20.487059       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s": context deadline exceeded
I0211 04:32:20.633059       1 leaderelection.go:278] failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
F0211 04:32:20.673521       1 server.go:199] leaderelection lost
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0005c2d01, 0xc000900800, 0x41, 0x1fd)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
....
... stack trace from go runtime
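
The fatal "leaderelection lost" line at the end is what makes the scheduler exit and restart, which matches the restart counts in the pod listing. For the kube-controller-manager pod that is in CrashLoopBackOff, the log of the previously terminated container can be pulled the same way, for example:

# --previous (-p) prints the log of the last terminated container of the pod
kubectl -n kube-system logs --previous kube-controller-manager-ip-xxx-xxx-xxx-xxx.ap-south-1.compute.internal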