Amazon EC2 Kubernetes ingress: AWS LoadBalancer deployment stuck pending
In short, here are the steps I followed:

- Launched 2 new t3.small instances in AWS, pre-tagged with the key kubernetes.io/cluster/ and the value member
- Tagged a new security group with the same tag and opened all the ports mentioned here
- Changed the hostname to the output of curl http://169.254.169.254/latest/meta-data/local-hostname, verified it with hostnamectl, and rebooted
- Configured the AWS CLI following https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
- Created an IAM role with "*" permissions and assigned it to the EC2 instances
- apt-get install kubelet kubeadm kubectl
- Added KUBELET_EXTRA_ARGS=--cloud-provider=aws to /etc/default/kubelet
- Ran kubeadm join … on the other node
- The EXTERNAL-IP of the other node is the same
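The /etc/default/kubelet step above can be sketched as follows; this is a minimal sketch that writes to a local stand-in file (kubelet.env.example, a hypothetical name) instead of the real path, so it can be tried without root:

```shell
# Sketch of the /etc/default/kubelet step above. Writes to a local
# stand-in file instead of the real path so it can run without root;
# on an actual node you would write /etc/default/kubelet.
cat <<'EOF' > kubelet.env.example
KUBELET_EXTRA_ARGS=--cloud-provider=aws
EOF

# Verify the flag made it in ('--' stops grep from parsing the
# pattern as an option).
grep -q -- '--cloud-provider=aws' kubelet.env.example && echo "cloud-provider flag present"
```

On a real node the kubelet must be restarted afterwards (sudo systemctl restart kubelet) for the flag to take effect.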
The current state is:
kubectl get pods --all-namespaces -o wide
NAMESPACE NAME IP NODE
ingress ingress-nginx-ingress-controller-77d989fb4d-qz4f5 10.244.1.13 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
ingress ingress-nginx-ingress-default-backend-7f7bf55777-dhj75 10.244.1.12 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-bklt8 10.244.1.14 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-ftn8q 10.244.1.16 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-87k8p 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-f4wft 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-79cp2 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-sv7md 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system tiller-deploy-5b7c66d59c-fgwcp 10.244.1.15 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 73m <none>
ingress ingress-nginx-ingress-controller LoadBalancer 10.97.167.197 <pending> 80:32722/TCP,443:30374/TCP 59m app=nginx-ingress,component=controller,release=ingress
ingress ingress-nginx-ingress-default-backend ClusterIP 10.109.198.179 <none> 80/TCP 59m app=nginx-ingress,component=default-backend,release=ingress
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 73m k8s-app=kube-dns
kube-system tiller-deploy ClusterIP 10.96.216.119 <none> 44134/TCP 67m app=helm,name=tiller
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal Ready master 6d19h v1.13.4 172.31.12.119 XX.XXX.XXX.XX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal Ready <none> 6d19h v1.13.4 172.31.3.106 XX.XXX.XX.XXX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
Can someone point out what I am missing here? Everywhere on the internet it says a Classic ELB would be deployed automatically.

For an AWS ELB (type Classic), you must have --cloud-provider=aws in the following manifests, located in /etc/kubernetes/manifests on the master node:

kube-controller-manager.yaml
kube-apiserver.yaml

Then reload the unit files and restart the kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Add the flag to the command list alongside the other flags, at the bottom or the top as needed. The result should look like the two fragments at the bottom of this post (kube-controller-manager.yaml first, then kube-apiserver.yaml).
Can you post the output of kubectl describe service ingress-nginx-ingress-controller? The reason it is stuck in pending is usually listed under Events.

@PoweredByOrange updated the question; there are no events.

Hmm, is this a public load balancer? Do your VPC subnets have the correct k8s tags (key kubernetes.io/role/elb with value 1 for public subnets, and key kubernetes.io/role/internal-elb with value 1 for private subnets)?

@PoweredByOrange I need a public load balancer, so I just added the …/role/elb tag to the existing subnets. What next? I wonder where any of this is documented!

Delete the service and redeploy it to see if that helps. Make sure all the public subnets have the kubernetes.io/role/elb tag.
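The subnet tagging suggested above can be sketched with the AWS CLI; this is a command sketch under assumptions (subnet-0abc123 and subnet-0def456 are hypothetical subnet IDs, and the aws CLI is assumed to be configured with credentials for the account), not something runnable offline:

```shell
# Hypothetical subnet IDs -- replace with the public subnets of your VPC.
# Tags the public subnets so the cloud provider can place a public ELB there.
aws ec2 create-tags \
  --resources subnet-0abc123 subnet-0def456 \
  --tags Key=kubernetes.io/role/elb,Value=1

# Private subnets would get the internal-elb tag instead:
# aws ec2 create-tags --resources subnet-0abc123 \
#   --tags Key=kubernetes.io/role/internal-elb,Value=1
```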
In kube-controller-manager.yaml:

spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws

In kube-apiserver.yaml:

spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
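One way to sanity-check the edited manifests is a small script; this is an illustrative sketch (the has_cloud_provider helper and the inline manifest text are mine, not from the original post) that scans for the flag in the command list without needing a YAML parser, which is good enough for the simple flag-per-line layout kubeadm generates:

```python
def has_cloud_provider(manifest_text: str) -> bool:
    """Return True if some command-list entry is exactly --cloud-provider=aws."""
    return any(
        line.strip() == "- --cloud-provider=aws"
        for line in manifest_text.splitlines()
    )

# Illustrative manifest fragment mirroring the snippet above.
controller_manifest = """\
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
"""

print(has_cloud_provider(controller_manifest))          # True
print(has_cloud_provider("spec:\n  containers: []\n"))  # False
```

On a master node you would feed it the real files, e.g. has_cloud_provider(open("/etc/kubernetes/manifests/kube-controller-manager.yaml").read()).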