Switching Terraform 0.12.6 to 0.13.0: provider["registry.terraform.io/-/null"] is required, but it has been removed

I manage the state in remote Terraform Cloud.

I have downloaded and installed the latest Terraform 0.13 CLI.

Then I removed the .terraform directory.

Then I ran terraform init and got no errors.

Then I did:

➜ terraform apply -var-file env.auto.tfvars

Error: Provider configuration not present

To work with
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0] its
original provider configuration at provider["registry.terraform.io/-/null"] is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.kubernetes.module.eks-cluster.data.null_data_source.node_groups[0],
after which you can remove the provider configuration again.

Releasing state lock. This may take a few moments...
Here is the content of modules/kubernetes/main.tf:

###################################################################################
# EKS CLUSTER                                                                     #
#                                                                                 #
# This module contains configuration for EKS cluster running various applications #
###################################################################################

module "eks_label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = var.project
  environment = var.environment
  attributes  = [var.component]
  name        = "eks"
}


#
# Local computed variables
#
locals {
  names = {
    secretmanage_policy = "secretmanager-${var.environment}-policy"
  }
}

data "aws_eks_cluster" "cluster" {
  name = module.eks-cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-cluster.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = module.eks_label.id
  cluster_version = var.cluster_version
  subnets         = var.subnets
  vpc_id          = var.vpc_id

  worker_groups = [
    {
      instance_type = var.cluster_node_type
      asg_max_size  = var.cluster_node_count
    }
  ]

  tags = var.tags
}

# Grant secretmanager access to all pods inside kubernetes cluster
# TODO:
# Adjust implementation so that the policy is template based and we only allow
# kubernetes access to a single key based on the environment.
# we should export key from modules/secrets and then grant only specific ARN access
# so that only production cluster is able to read production secrets but not dev or staging
# https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_identity-based-policies.html#permissions_grant-get-secret-value-to-one-secret
resource "aws_iam_policy" "secretmanager-policy" {
  name        = local.names.secretmanage_policy
  description = "allow to read secretmanager secrets ${var.environment}"
  policy      = file("modules/kubernetes/policies/secretmanager.json")
}

#
# Attach the policy to the k8s worker role
#
resource "aws_iam_role_policy_attachment" "attach" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = aws_iam_policy.secretmanager-policy.arn
}

#
# Attach the S3 policy to workers
# So we can use aws commands inside pods easily if/when needed
#
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = module.eks-cluster.worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

This error occurs when an object tracked in the latest Terraform state is no longer present in the configuration, but Terraform cannot destroy it (as would normally be expected), because the provider configuration needed to perform that destroy no longer exists either.
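
A quick way to confirm which provider addresses the state still references, and which objects belong to the legacy null provider, is the sketch below (assuming it is run in the same working directory as the failing apply):

terraform providers
terraform state list | grep null_data_source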

Solution:

This would happen only if you had recently removed the object data.null_data_source together with the provider "null" block. To proceed, you need to temporarily restore that provider "null" block, run terraform apply so that Terraform can destroy the data.null_data_source object, and then you can remove the provider "null" block again, since it is no longer needed.
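
A minimal sketch of that temporary restoration under Terraform 0.13 is shown below; the version constraint is an assumption, so pin whatever your configuration used before:

# Declare an explicit source for the null provider so Terraform 0.13 can resolve it
terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "~> 2.1" # assumed constraint; adjust to the version you used before
    }
  }
}

# Temporary provider block: keep it only until terraform apply has destroyed
# the stale data.null_data_source object, then remove it again.
provider "null" {}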


All credit for this fix goes to the person who mentioned it on the cloudposse Slack channel:

terraform state replace-provider -auto-approve -- -/null registry.terraform.io/hashicorp/null
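
For context, the bare -- ends option parsing so that the legacy address -/null is not read as a flag. After the replacement, re-running init and plan against the same workspace should no longer complain about the missing provider (the var file is the one from the question):

terraform init
terraform plan -var-file env.auto.tfvars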


This fixed that error for me, only to surface the next one. All of this just to upgrade Terraform by one version.

In our case, we updated all of the provider URLs used in the code, like so:

terraform state replace-provider 'registry.terraform.io/-/null' 'registry.terraform.io/hashicorp/null'
terraform state replace-provider 'registry.terraform.io/-/archive' 'registry.terraform.io/hashicorp/archive'
terraform state replace-provider 'registry.terraform.io/-/aws' 'registry.terraform.io/hashicorp/aws'
I wanted the replacement to be very specific, so I used the broken URL when substituting in the new URL.

To be more specific, this was limited to …
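
If several legacy addresses need the same treatment, the replacements can also be scripted; a minimal bash sketch, assuming the three providers listed above:

for p in null archive aws; do
  terraform state replace-provider -auto-approve \
    "registry.terraform.io/-/$p" "registry.terraform.io/hashicorp/$p"
done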


Naveen, but I don't have any code changes - how could this provider go missing? That's strange.
Maybe you can try adding the provider "null" block and test whether it runs fine. This is to be expected, since Terraform is still on 0.x.y releases!
Helped me while upgrading Terraform from 0.12.26 to 0.13.
@user3361149 Glad to hear that. Cheers.