terraform plan raises an error from the kubernetes provider if the cluster has not been created yet


I have a terraform configuration that creates a GKE cluster and node pool, and then calls the kubernetes provider to set up my application. When I run this configuration on a new project where the cluster has not yet been created, the kubernetes provider throws the following errors:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable


Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin-binding": dial tcp [::1]:80: connect: connection refused


Error: Get "http://localhost/api/v1/namespaces/rabbitmq": dial tcp [::1]:80: connect: connection refused
If I comment out all the kubernetes sections, run terraform apply to create the cluster, and then uncomment the kubernetes sections and apply again, it works fine and creates all the kubernetes resources.
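The comment-out/comment-in workflow described above can be formalized with a targeted apply instead of editing the configuration, as a sketch assuming the `module.gke` address used in the config below:

```shell
# Phase 1: create only the GKE cluster (and its dependencies).
# -target restricts the plan/apply to the named module.
terraform apply -target=module.gke

# Phase 2: normal apply. The cluster now exists, so the kubernetes
# provider can reach its API server during planning.
terraform apply
```

Note that Terraform itself warns that `-target` is intended for exceptional situations, so this is a workaround rather than a long-term design.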

I checked the documentation for the kubernetes provider, and it says the cluster should already exist.

How do I tell terraform to wait for the cluster to be created before planning the kubernetes resources?

My configuration looks like this (main.tf):


Please post a complete example. You may be missing one or more `depends_on` arguments.
There are at least 2 steps involved in scheduling your first container on a Kubernetes cluster. You need the Kubernetes cluster with all its components running somewhere and then schedule the Kubernetes resources, like Pods, Replication Controllers, Services etc.
.
.
.

module "gke" {
  source = "./modules/gke"

  name                     = var.gke_cluster_name
  project_id               = data.google_project.project.project_id
  gke_location             = var.gke_zone
  .
  .
  .
}

data "google_client_config" "provider" {}

provider "kubernetes" {
  version                = "~> 1.13.3"
  alias                  = "my-kuber"
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = module.gke.cluster_ca_certificate
  load_config_file       = false
}

resource "kubernetes_namespace" "ns" {
  provider = kubernetes.my-kuber
  depends_on = [module.gke]

  metadata {
    name = var.namespace
  }
}
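One common way to avoid this problem entirely is to split the cluster and the in-cluster resources into two separate root configurations, applied in order, so the kubernetes provider is only ever planned against a cluster that already exists. A minimal sketch, assuming the cluster state lives in a hypothetical GCS bucket `my-tf-state` under the prefix `cluster`, and that the cluster configuration exports `endpoint` and `cluster_ca_certificate` outputs:

```hcl
# app/main.tf -- second configuration, applied after the cluster one.
# It reads the cluster's connection details from remote state instead
# of referencing module.gke directly.
data "terraform_remote_state" "cluster" {
  backend = "gcs"
  config = {
    bucket = "my-tf-state" # assumption: state stored in GCS
    prefix = "cluster"
  }
}

data "google_client_config" "provider" {}

provider "kubernetes" {
  host                   = "https://${data.terraform_remote_state.cluster.outputs.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = data.terraform_remote_state.cluster.outputs.cluster_ca_certificate
}
```

With this layout, `terraform plan` in the app configuration always runs against a reachable cluster, at the cost of two apply steps and a remote-state dependency.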