Kubernetes kubeadm upgrade plan "failed to get component configs"

I just upgraded my cluster from 1.16 to 1.17.5. Now I want to upgrade it to 1.18.2 (the latest version).

But the first step (kubeadm upgrade plan) fails.

My kubeadm-config ConfigMap seems to be missing some values, but I can't tell which ones. I checked the kubeadm-config ConfigMap and the 1.17.5 values look fine.

Any ideas?

# kubeadm upgrade plan --v=5
I0507 14:16:12.685214   16010 plan.go:67] [upgrade/plan] verifying health of cluster
I0507 14:16:12.685280   16010 plan.go:68] [upgrade/plan] retrieving configuration from cluster
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
invalid configuration: kind and apiVersion is mandatory information that needs to be specified in all YAML documents
failed to get component configs
k8s.io/kubernetes/cmd/kubeadm/app/util/config.getInitConfigurationFromCluster
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:104
k8s.io/kubernetes/cmd/kubeadm/app/util/config.FetchInitConfigurationFromCluster
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/config/cluster.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:97
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:55
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
[upgrade/config] FATAL
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:69
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:55
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
Contents of the kubeadm-config ConfigMap:

apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - kubernetes
      - kubernetes.default
      - kubernetes.default.svc
      - kubernetes.default.svc.my-cluster
      - 10.0.22.1
      - localhost
      - 127.0.0.1
      - master1.my-cluster
      - master2.my-cluster
      - master3.my-cluster
      - lb-apiserver.kubernetes.local
      - xxx.xxx.xxx.1
      - xxx.xxx.xxx.3
      - xxx.xxx.xxx.2
      extraArgs:
        allow-privileged: "true"
        anonymous-auth: "True"
        apiserver-count: "3"
        authorization-mode: Node,RBAC
        bind-address: 0.0.0.0
        enable-aggregator-routing: "False"
        endpoint-reconciler-type: lease
        insecure-port: "0"
        kubelet-preferred-address-types: InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
        profiling: "False"
        request-timeout: 1m0s
        runtime-config: ""
        service-node-port-range: 30000-32767
        storage-backend: etcd3
      extraVolumes:
      - hostPath: /etc/pki/tls
        mountPath: /etc/pki/tls
        name: etc-pki-tls
        readOnly: true
      - hostPath: /etc/pki/ca-trust
        mountPath: /etc/pki/ca-trust
        name: etc-pki-ca-trust
        readOnly: true
      timeoutForControlPlane: 5m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/ssl
    clusterName: my-cluster
    controlPlaneEndpoint: xxx.xxx.xxx.1:6443
    controllerManager:
      extraArgs:
        bind-address: 0.0.0.0
        configure-cloud-routes: "false"
        node-cidr-mask-size: "24"
        node-monitor-grace-period: 40s
        node-monitor-period: 5s
        pod-eviction-timeout: 5m0s
        profiling: "False"
        terminated-pod-gc-threshold: "12500"
    dns:
      imageRepository: docker.io/coredns
      imageTag: 1.6.5
      type: CoreDNS
    etcd:
      external:
        caFile: /etc/ssl/etcd/ssl/ca.pem
        certFile: /etc/ssl/etcd/ssl/node-node1.pem
        endpoints:
        - https://xxx.xxx.xxx.1:2379
        - https://xxx.xxx.xxx.3:2379
        - https://xxx.xxx.xxx.2:2379
        keyFile: /etc/ssl/etcd/ssl/node-node1-key.pem
    imageRepository: gcr.io/google-containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.5
    networking:
      dnsDomain: my-cluster
      podSubnet: 10.0.20.0/24
      serviceSubnet: 10.0.22.0/24
    scheduler:
      extraArgs:
        bind-address: 0.0.0.0
  ClusterStatus: |
    apiEndpoints:
      master1.my-cluster:
        advertiseAddress: xxx.xxx.xxx.1
        bindPort: 6443
      master2.my-cluster:
        advertiseAddress: xxx.xxx.xxx.2
        bindPort: 6443
      master3.my-cluster:
        advertiseAddress: xxx.xxx.xxx.3
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-10-16T00:57:59Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "57269932"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: 84cece40-38f9-4c82-8844-3f8c29089d7d


Finally found the source of the error. The kind and apiVersion were missing from the kubelet ConfigMap (not the kubeadm ConfigMap). Once both were filled in, everything worked.
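For anyone hitting the same message: kubeadm reads the component configs (kubelet and kube-proxy) from their own ConfigMaps in kube-system, not only from kubeadm-config, so the broken YAML document can live there. A quick way to inspect them on a 1.17 cluster (a sketch; the kubelet-config-1.17 name assumes the version-suffixed naming kubeadm used at the time):

# kubectl -n kube-system get cm kubelet-config-1.17 -o yaml
# kubectl -n kube-system get cm kube-proxy -o yaml

Each embedded component-config document must start with its own kind and apiVersion, which is exactly what the "kind and apiVersion is mandatory" line in the trace above is complaining about.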
I opened a feature request to add more debugging information about which config triggered this error.
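For reference, a minimal sketch of how the top of the kubelet ConfigMap should look once kind and apiVersion are present (the clusterDomain value is only illustrative, taken from the cluster config above; the rest of the kubelet settings are omitted):

apiVersion: v1
data:
  kubelet: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    clusterDomain: my-cluster
kind: ConfigMap
metadata:
  name: kubelet-config-1.17
  namespace: kube-system

Editing the ConfigMap in place with kubectl -n kube-system edit cm kubelet-config-1.17 and re-running kubeadm upgrade plan should then get past the component-config check.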

What kubeadm version are you using for the 1.17.5 --> 1.18.2 upgrade? Isn't this somewhat similar?

I tested with the kubeadm command in both 1.18.0 and 1.18.2; the result is not the same, since I am already on 1.17.5. I followed this documentation: If I try with kubeadm version 1.17.5:

[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.5
[upgrade/versions] kubeadm version: v1.17.5
I0511 14:38:43.328304    6318 version.go:251] remote version is much newer: v1.18.2; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.5
[upgrade/versions] Latest version in the v1.17 series: v1.17.5