Amazon Web Services: Error deploying an EKS node group with Terraform

Tags: amazon-web-services, kubernetes, terraform, terraform-provider-aws, amazon-eks

I'm having a problem deploying a Terraform node group in an EKS cluster. The error looks like a plugin problem, but I don't know how to resolve it.

If I look at EC2 in the AWS console (web), I can see the cluster's instances, but the cluster itself reports this error.

The error shown in my pipeline:

Error: error waiting for EKS Node Group (UNIR-API-REST-CLUSTER-DEV:node_sping_boot) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-05ed58f8101240dc8]

  on EKS.tf line 17, in resource "aws_eks_node_group" "nodes":
  17: resource "aws_eks_node_group" "nodes"

2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin process exited: path=/home/ubuntu/.jenkins/workspace/shop_infrastructure_generator_pipline/shop proyect dev/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.64.0_x4 pid=13475
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin exited

And the error is also printed in the AWS console:

This is the Terraform code I use to create the project:

EKS.tf, which creates the cluster and the nodes:

resource "aws_eks_cluster" "CLUSTER" {
  name     = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
  role_arn = "${aws_iam_role.eks_cluster_role.arn}"
  vpc_config {
    subnet_ids = [
      "${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
    ]
  }
  depends_on = [
    "aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
    "aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
    "aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
  ]
}


resource "aws_eks_node_group" "nodes" {
  cluster_name    = "${aws_eks_cluster.CLUSTER.name}"
  node_group_name = "node_sping_boot"
  node_role_arn   = "${aws_iam_role.eks_nodes_role.arn}"
  subnet_ids      = [
      "${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
  ]
  scaling_config {
    desired_size = 1
    max_size     = 5
    min_size     = 1
  }
# instance_types defaults to t3.medium
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    "aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
    "aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
    "aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
  ]
}

output "eks_cluster_endpoint" {
  value = "${aws_eks_cluster.CLUSTER.endpoint}"
}

output "eks_cluster_certificat_authority" {
    value = "${aws_eks_cluster.CLUSTER.certificate_authority}"
}
securityAndGroups.tf

resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-${var.SUFFIX}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}


resource "aws_iam_role" "eks_nodes_role" {
  name = "eks-node-${var.SUFFIX}"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}


resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.eks_cluster_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.eks_cluster_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = "${aws_iam_role.eks_nodes_role.name}"
}
My variables:

SUFFIX="DEV"
ZONE="eu-west-1"
TERRAFORM_USER_ID=
TERRAFORM_USER_PASS=
ZONE_SUB="eu-west-1b"
ZONE_SUB_CLUSTER_1="eu-west-1a"
ZONE_SUB_CLUSTER_2="eu-west-1c"
NET_CIDR_BLOCK="172.15.0.0/24"
SUBNET_CIDR_APLICATIONS="172.15.0.0/27"
SUBNET_CIDR_CLUSTER_1="172.15.0.32/27"
SUBNET_CIDR_CLUSTER_2="172.15.0.64/27"
SUBNET_CIDR_CLUSTER_3="172.15.0.128/27"
SUBNET_CIDR_CLUSTER_4="172.15.0.160/27"
SUBNET_CIDR_CLUSTER_5="172.15.0.192/27"
SUBNET_CIDR_CLUSTER_6="172.15.0.224/27"
MONGO_SSH_KEY=
KIBANA_SSH_KEY=
CLUSTER_SSH_KEY=
Do you need any more logs?

According to AWS:

If you receive the error "Instances failed to join the kubernetes cluster" in the AWS Management Console, ensure that either the cluster's private endpoint access is enabled, or that you have correctly configured CIDR blocks for public endpoint access. For more information, see Amazon EKS cluster endpoint access control.
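
For reference, a minimal sketch of what that endpoint configuration looks like in the vpc_config block of the question's aws_eks_cluster resource; the CIDR value here is only an illustrative placeholder, not something from the question:

  vpc_config {
    subnet_ids = [
      "${aws_subnet.unir_subnet_cluster_1.id}",
      "${aws_subnet.unir_subnet_cluster_2.id}"
    ]
    endpoint_private_access = true                # let nodes reach the API server over the VPC
    endpoint_public_access  = true                # keep the public endpoint as well...
    public_access_cidrs     = ["203.0.113.0/24"]  # ...but restrict it (placeholder CIDR)
  }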

I noticed that you are swapping the subnets' availability zones:

resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_2}"

You have assigned var.ZONE_SUB_CLUSTER_2 to unir_subnet_cluster_1 and var.ZONE_SUB_CLUSTER_1 to unir_subnet_cluster_2. This may be the cause of the misconfiguration.
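
If the swap was unintentional, the fix is presumably just to align the names; a sketch that assumes unir_subnet_cluster_2 otherwise mirrors the first subnet (its definition is not shown in the question):

resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id                  = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block              = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone       = "${var.ZONE_SUB_CLUSTER_1}" # now matches the subnet's name
}

resource "aws_subnet" "unir_subnet_cluster_2" {
  # assumed to mirror unir_subnet_cluster_1; not shown in the question
  vpc_id                  = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block              = "${var.SUBNET_CIDR_CLUSTER_2}"
  map_public_ip_on_launch = true
  availability_zone       = "${var.ZONE_SUB_CLUSTER_2}"
}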

As described under "NodeCreationFailure", this error can have two causes:

NodeCreationFailure: Your launched instances are unable to register with your Amazon EKS cluster. Common causes of this failure are insufficient node IAM role permissions or lack of outbound internet access for the nodes.
Your nodes must be able to access the internet using a public IP address to function properly.


In my case the cluster was in private subnets, and after adding a route to a NAT gateway the error went away.
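
For anyone hitting the same thing, a minimal sketch of that routing setup; all resource names below are hypothetical, and nat_subnet stands in for a public subnet that is not part of the question:

resource "aws_eip" "nat" {
  vpc = true # provider 2.x syntax for a VPC-scoped EIP
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.nat_subnet.id}" # hypothetical public subnet
}

resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"

  route {
    cidr_block     = "0.0.0.0/0"                 # send all outbound traffic...
    nat_gateway_id = "${aws_nat_gateway.nat.id}" # ...through the NAT gateway
  }
}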

Yes, that was the problem: I had not associated the subnets with the route table using aws_route_table_association. Thanks, everyone.
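
For completeness, a sketch of the missing associations, assuming the hypothetical private route table from the previous snippet:

resource "aws_route_table_association" "cluster_1" {
  subnet_id      = "${aws_subnet.unir_subnet_cluster_1.id}"
  route_table_id = "${aws_route_table.private.id}"
}

resource "aws_route_table_association" "cluster_2" {
  subnet_id      = "${aws_subnet.unir_subnet_cluster_2.id}"
  route_table_id = "${aws_route_table.private.id}"
}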
resource "aws_subnet" "unir_subnet_cluster_1" {
  vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
  cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
  map_public_ip_on_launch = true
  availability_zone = "${var.ZONE_SUB_CLUSTER_2}"