
Terraform cycle between the AWS and Kubernetes providers


My Terraform code describes some AWS infrastructure for building a Kubernetes cluster, including some deployments into the cluster. When I try to destroy the infrastructure with
terraform plan -destroy, I get a cycle:

module.eks_control_plane.aws_eks_cluster.this[0] (destroy)
module.eks_control_plane.output.cluster
provider.kubernetes
module.aws_auth.kubernetes_config_map.this[0] (destroy)
data.aws_eks_cluster_auth.this[0] (destroy)
Destroying the infrastructure manually with a plain
terraform destroy
works fine. Unfortunately, Terraform Cloud runs
terraform plan -destroy
to plan the destruction first, and the cycle makes this operation fail. Here is the relevant code:

Excerpt from the eks_control_plane module:

resource "aws_eks_cluster" "this" {
  count = var.enabled ? 1 : 0

  name     = var.cluster_name
  role_arn = aws_iam_role.control_plane[0].arn
  version  = var.k8s_version

  # https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
  enabled_cluster_log_types = var.control_plane_log_enabled ? var.control_plane_log_types : []

  vpc_config {
    security_group_ids = [aws_security_group.control_plane[0].id]
    subnet_ids         = [for subnet in var.control_plane_subnets : subnet.id]
  }

  tags = merge(var.tags,
    {
    }
  )

  depends_on = [
    var.dependencies,
    aws_security_group.node,
    aws_iam_role_policy_attachment.control_plane_cluster_policy,
    aws_iam_role_policy_attachment.control_plane_service_policy,
    aws_iam_role_policy.eks_cluster_ingress_loadbalancer_creation,
  ]
}

output "cluster" {
  value = length(aws_eks_cluster.this) > 0 ? aws_eks_cluster.this[0] : null
}
The aws-auth Kubernetes config map from the aws_auth module:

resource "kubernetes_config_map" "this" {
  count = var.enabled ? 1 : 0

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
  data = {
    mapRoles = jsonencode(
      concat(
        [
          {
            rolearn  = var.node_iam_role.arn
            username = "system:node:{{EC2PrivateDNSName}}"
            groups = [
              "system:bootstrappers",
              "system:nodes",
            ]
          }
        ],
        var.map_roles
      )
    )
  }

  depends_on = [
    var.dependencies,
  ]
}

The Kubernetes provider in the root module:

data "aws_eks_cluster_auth" "this" {
  count = module.eks_control_plane.cluster != null ? 1 : 0
  name  = module.eks_control_plane.cluster.name
}

provider "kubernetes" {
  version = "~> 1.10"

  load_config_file       = false
  host                   = module.eks_control_plane.cluster != null ? module.eks_control_plane.cluster.endpoint : null
  cluster_ca_certificate = module.eks_control_plane.cluster != null ? base64decode(module.eks_control_plane.cluster.certificate_authority[0].data) : null
  token                  = length(data.aws_eks_cluster_auth.this) > 0 ? data.aws_eks_cluster_auth.this[0].token : null
}
This is how the modules are called:

module "eks_control_plane" {
  source  = "app.terraform.io/SDA-SE/eks-control-plane/aws"
  version = "0.0.1"
  enabled = local.k8s_enabled

  cluster_name          = var.name
  control_plane_subnets = module.vpc.private_subnets
  k8s_version           = var.k8s_version
  node_subnets          = module.vpc.private_subnets
  tags                  = var.tags
  vpc                   = module.vpc.vpc

  dependencies = concat(var.dependencies, [
    # Ensure that VPC including all security group rules, network ACL rules,
    # routing table entries, etc. is fully created
    module.vpc,
  ])
}


# aws-auth config map module. Creating this config map will allow nodes and
# Other users to join the cluster.
# CNI and CSI plugins must be set up before creating this config map.
# Enable or disable this via `aws_auth_enabled` variable.
# TODO: Add Developer and other roles.
module "aws_auth" {
  source  = "app.terraform.io/SDA-SE/aws-auth/kubernetes"
  version = "0.0.0"
  enabled = local.aws_auth_enabled

  node_iam_role = module.eks_control_plane.node_iam_role
  map_roles = [
    {
      rolearn  = "arn:aws:iam::${var.aws_account_id}:role/Administrator"
      username = "admin"
      groups = [
        "system:masters",
      ]
    },
    {
      rolearn  = "arn:aws:iam::${var.aws_account_id}:role/Terraform"
      username = "terraform"
      groups = [
        "system:masters",
      ]
    }
  ]
}
Removing the aws_auth config map, which means not using the Kubernetes provider at all, breaks the cycle. The problem is obviously that Terraform tries to destroy the Kubernetes cluster that the Kubernetes provider itself depends on. Removing the resources step by step by hand across multiple
terraform apply
runs also works fine, as sketched below.


Is there a way to make Terraform destroy all Kubernetes resources first, so that the provider is no longer needed, and only then destroy the EKS cluster?

At the moment, the only good way seems to be splitting this into two apply steps. This may even be recommended somewhere in the Terraform documentation. Bad luck.
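A minimal sketch of such a split, assuming the AWS infrastructure and the Kubernetes resources live in two separate root modules with their own state files (the directory names here are hypothetical):

# Step 1: destroy the Kubernetes resources (aws-auth config map, deployments)
cd k8s-resources && terraform destroy

# Step 2: destroy the AWS infrastructure (EKS cluster, VPC, IAM roles)
cd ../aws-infra && terraform destroy

Because each step has its own state and provider configuration, the Kubernetes provider in step 1 still talks to a live cluster, and step 2 never needs the Kubernetes provider at all.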