kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL


I am setting up a high-availability cluster.

Three masters: 10.240.0.4 (kb8-master1), 10.240.0.33 (kb8-master2), 10.240.0.75 (kb8-master3); LB: 10.240.0.16 (haproxy)

I have set up kb8-master1 and, following the documentation, copied the following files over to the remaining masters (kb8-master2 and kb8-master3).

On kb8-master2:

mkdir -p /etc/kubernetes/pki/etcd

mv /home/${USER}/ca.crt /etc/kubernetes/pki/

mv /home/${USER}/ca.key /etc/kubernetes/pki/

mv /home/${USER}/sa.pub /etc/kubernetes/pki/

mv /home/${USER}/sa.key /etc/kubernetes/pki/

mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/

mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/

mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt

mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
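
For context, the files moved above would first have been produced on kb8-master1 and shipped to each additional master's home directory. A minimal sketch of that copy step, assuming the default kubeadm paths and an SSH login named ${USER} (the scp loop itself is my assumption, not part of the original post):

# Run on kb8-master1: copy the shared CA material, service-account keys and
# admin.conf to the other masters (hosts and user are illustrative).
for host in 10.240.0.33 10.240.0.75; do
  scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
      /etc/kubernetes/pki/sa.pub /etc/kubernetes/pki/sa.key \
      /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
      "${USER}@${host}:"
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}@${host}:etcd-ca.crt"
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}@${host}:etcd-ca.key"
  scp /etc/kubernetes/admin.conf "${USER}@${host}:"
done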

After that I ran the following commands on kb8-master2.

> `sudo kubeadm alpha phase certs all --config kubeadm-config.yaml`

Output:-

[certificates] Generated etcd/ca certificate and key.

[certificates] Generated etcd/server certificate and key.

[certificates] etcd/server serving cert is signed for DNS names [kb8-master2 localhost] and IPs [127.0.0.1 ::1]

[certificates] Generated apiserver-etcd-client certificate and key.

[certificates] Generated etcd/peer certificate and key.

[certificates] etcd/peer serving cert is signed for DNS names [kb8-master2 localhost] and IPs [10.240.0.33 127.0.0.1 ::1]

[certificates] Generated etcd/healthcheck-client certificate and key.

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [kb8-master2 kubernetes kubernetes.default kubernetes.default.svc 
kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.240.0.33]

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"

[certificates] Generated sa key and public key.

>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml` 

Output:-

[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

>`sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`

Output:-
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

>`sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`

Output:-
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

>`sudo systemctl start kubelet`



>`export KUBECONFIG=/etc/kubernetes/admin.conf`


>`sudo kubectl exec -n kube-system etcd-kb8-master1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://10.240.0.4:2379 member add kb8-master2 https://10.240.0.33:2380`
Output:- The connection to the server localhost:8080 was refused - did you specify the right host or port?
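
Note that `export KUBECONFIG=...` only affects the current user's shell, while `sudo kubectl` runs as root with a fresh environment, which is presumably why kubectl falls back to the default localhost:8080 here. A minimal workaround sketch, assuming the same admin.conf path, is to point kubectl at the file explicitly:

# sudo does not inherit the user's exported KUBECONFIG, so pass the kubeconfig
# directly to avoid the localhost:8080 fallback (illustrative command only).
sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system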

Note: I can now run `kubectl get po -n kube-system` on kb8-master2 and see the pods.

>`sudo kubeadm alpha phase etcd local --config kubeadm-config.yaml`
No output.
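
As a sanity check at this point (my suggestion, not one of the original steps), the etcd membership can be listed through the same etcd-kb8-master1 pod, reusing the certificate paths from the member add command above:

# List the current etcd members via the etcd pod on kb8-master1 (illustrative).
kubectl exec -n kube-system etcd-kb8-master1 -- etcdctl \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://10.240.0.4:2379 member list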

>`sudo kubeadm alpha phase kubeconfig all --config kubeadm-config.yaml`
Output:-

kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL
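
One way to see what kubeadm is objecting to (my suggestion, not from the original post) is to compare the server URL stored in the copied admin.conf with the controlPlaneEndpoint the kubeconfig phase expects, 10.240.0.16:6443 in this setup:

# Print the API server URL stored in the copied kubeconfig; kubeadm reports the
# "wrong API Server URL" error when this differs from what it would generate.
grep 'server:' /etc/kubernetes/admin.conf
kubectl config view --kubeconfig=/etc/kubernetes/admin.conf -o jsonpath='{.clusters[0].cluster.server}'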

I'm really stuck here.

Below is the kubeadm-config.yaml file I used on kb8-master2:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
kubernetesVersion: v1.12.2
apiServerCertSANs:
- "10.240.0.16"
controlPlaneEndpoint: "10.240.0.16:6443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
      advertise-client-urls: "https://10.240.0.33:2379"
      listen-peer-urls: "https://10.240.0.33:2380"
      initial-advertise-peer-urls: "https://10.240.0.33:2380"
      initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - kb8-master2
      - 10.240.0.33
    peerCertSANs:
      - kb8-master2
      - 10.240.0.33
networking:
    podSubnet: "10.244.0.0/16"

Has anyone faced the same issue? I'm completely stuck here.

Is there any reason you are running all of the init and join tasks individually instead of just using init and join directly? Kubeadm is supposed to be very easy to use.

Create the initConfiguration and clusterConfiguration manifests and put them in the same file on the master machines. Then create a nodeConfiguration manifest and put it in a file on the nodes. Then run `kubeadm init --config=/location/master.yml` on the master, and `kubeadm join --token 1.2.3.4:6443` on the nodes.


Rather than working through the docs on how init and join orchestrate all of their subtasks, it is much easier to build the cluster by using their automation directly.
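
A rough sketch of what this commenter describes, assuming the v1alpha3 config kinds that ship with Kubernetes 1.12; the file path, token and discovery hash are placeholders, not values from this cluster:

# Combined InitConfiguration + ClusterConfiguration in one file (placeholder values).
cat > /root/master.yml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
controlPlaneEndpoint: "10.240.0.16:6443"
apiServerCertSANs:
- "10.240.0.16"
networking:
  podSubnet: "10.244.0.0/16"
EOF

# On the master:
kubeadm init --config=/root/master.yml

# On each node, using the token and CA hash printed by kubeadm init:
kubeadm join 10.240.0.16:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>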

Is this on-prem? What CNI are you using? What is the URL shown in /etc/kubernetes/admin.conf?

I'm trying to use Flannel. The URL in the admin.conf file is also server: . This is on-prem as well. I'm stuck on this issue and would like to break this deadlock and carry on with kb8.