
Docker: How to create a Kubernetes cluster with multiple nodes on Windows


All the kubernetes forums and articles ask to use minikube, which only provides a single-node kubernetes cluster.

What options are available for a multi-node kubernetes cluster in a Windows environment?

The problem is that Windows machines can only act as worker nodes. You can only create a hybrid cluster and have Windows workloads running in Windows pods, talking to Linux workloads running in Linux pods.

From the Kubernetes documentation:

The Kubernetes control plane, including the master components, continues to run on Linux. There are no plans to have a Windows-only Kubernetes cluster.

For a complete list of limitations, please refer to the Kubernetes documentation:

Windows is supported only as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes.

And on resource control:

Linux cgroups are used as a pod boundary for resource control in Linux. Containers are created within that boundary for network, process and file system isolation. The cgroups APIs can be used to gather cpu/io/memory statistics. In contrast, Windows uses a Job object per container with a system namespace filter to contain the container and provide logical isolation from the host. There is no way to run a Windows container without the namespace filtering in place. This means that system privileges cannot be asserted in the context of the host, and thus privileged containers are not available on Windows. Containers cannot assume an identity from the host because the Security Account Manager (SAM) is separate.
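In practice this means that, in such a hybrid cluster, Windows workloads are pinned to the Windows worker nodes with a node selector. A minimal sketch of such a pod spec (the pod name and image are placeholders; kubernetes.io/os is the standard node label):

    -> create a manifest, e.g. vi win-sample.yaml, and insert:
        apiVersion: v1
        kind: Pod
        metadata:
          name: win-sample                          # hypothetical name
        spec:
          nodeSelector:
            kubernetes.io/os: windows               # schedule only onto Windows worker nodes
          containers:
          - name: servercore
            image: mcr.microsoft.com/windows/servercore:ltsc2019   # example Windows base image
    -> kubectl apply -f win-sample.yaml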


On my Windows 10 laptop I created 2 Ubuntu VMs using VirtualBox (each VM -> 3 GB RAM and a 50 GB dynamically sized virtual disk). I used MicroK8s (from Canonical). Installation on each VM is a very simple one-liner: sudo snap install microk8s --classic

Following the instructions at .... one VM becomes the master k8s node, and the other VM becomes a worker node that joins the master.


Once the setup is done, you may want to set an alias such as: alias k='microk8s.kubectl'. Then you can simply do: k apply -f ...
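To actually form the multi-node cluster, MicroK8s has a built-in join flow. A rough sketch (depending on the MicroK8s version the commands may be spelled microk8s.add-node / microk8s.join; the master IP and token are printed by add-node and will differ on your machines):

    # on the VM chosen as the master: print a join command containing a one-time token
    sudo microk8s add-node

    # on the other VM: run the join command that add-node printed, e.g.
    sudo microk8s join <master-ip>:25000/<token-printed-by-add-node>

    # back on the master: verify that both nodes are listed
    microk8s.kubectl get nodes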

I was able to create a multi-node kubernetes cluster on my Windows box using Oracle VirtualBox.

Hope this helps. I created 4 * CentOS 8 VMs in VirtualBox hosted on Windows 10. Of the 4 VMs, one VM is set up as the master node and the rest are worker nodes.

Below is my step-by-step setup process.

  • Preparation

    1.1 Preparation of the basic VM template (node-master-centOS-1)

    1.2 Create and configure the template VM (node-master-centOS-1) in VirtualBox
        1.2.1 (VM Box) File -> Host Network Manager -> create a host-only Ethernet adapter with a manual address (e.g. 192.168.56.1/24, DHCP server at 192.168.56.100/24, DHCP range 101-254)
        1.2.2 (VM Box) Pre-configure the VM instance
            1.2.2.1 (VM Box) System (memory = 4096 MB, boot order = hard disk -> optical, processors = 2)
            1.2.2.2 (VM Box) Storage (remove the IDE controller; under the SATA controller, add an optical drive pointing to the centOS-8.x.xxxx-arch-dvdx.iso downloaded in step 1.1.1)
            1.2.2.3 (VM Box) Network (Adapter 1 = enabled, attached to = NAT; Adapter 2 = enabled, attached to = Host-only Adapter, name = VirtualBox Host-Only Ethernet Adapter). Note: Adapter 2 is the one created in step 1.2.1
            1.2.2.4 (Host) Settings -> Firewall & network protection -> Advanced settings -> Inbound Rules -> New Rule -> Custom -> All programs -> any port and protocol -> local IP set to 192.168.56.1 (the VirtualBox host-only adapter) -> remote IP set to the range 192.168.56.2-192.168.56.99 (or as needed)
            1.2.2.5 (Host) Settings -> Network & Internet -> Network Connections -> properties of the adapter that has Internet connectivity -> note its working DNS address (e.g. 192.168.1.1)
            1.2.2.6 Start the VM instance
        1.2.3 (Remote VM) Set up the network (an nmcli equivalent is sketched after this list)
            1.2.3.1 (Remote VM) Settings -> Network -> Ethernet (enp0s3): ipv4 (manual, 10.0.2.20/24, DNS 10.0.2.3)
            1.2.3.2 (Remote VM) Settings -> Network -> Ethernet (enp0s8): ipv4 (manual, 192.168.56.20/24, DNS 192.168.1.1, i.e. the address obtained in step 1.2.2.5, so that the remote VM inherits the host's Internet DNS)
            1.2.3.3 (Remote VM) Terminal -> sudo ifdown (then ifup) Profile_1 (or enp0s3) -> sudo ifdown (then ifup) Profile_2 (or enp0s8) -> systemctl restart network (if that does not work: systemctl restart NetworkManager.service)
        1.2.4 (Remote VM) Set the hostname
            1.2.4.1 (Remote VM) hostnamectl set-hostname node-master-centos-1 (i.e. {node_1})
        1.2.5 Verify connectivity
            1.2.5.1 (Host) Ping: ping 192.168.56.20 (i.e. {ip_node_1}) succeeds
            1.2.5.2 (Host) SSH: ssh root@192.168.56.20 succeeds -> (SSH) wget succeeds (meaning network and DNS are working; if the DNS of steps 1.2.2.5 and 1.2.3.2 is not set, DNS may not work even though IP-based Internet access works)
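    If you prefer to configure the adapters of step 1.2.3 from the shell instead of the GUI, a rough nmcli equivalent is below (the connection names enp0s3/enp0s8 are assumptions, check nmcli con show; 10.0.2.2 is VirtualBox's default NAT gateway):
        -> nmcli con mod enp0s3 ipv4.method manual ipv4.addresses 10.0.2.20/24 ipv4.gateway 10.0.2.2 ipv4.dns 10.0.2.3
        -> nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 192.168.56.20/24 ipv4.dns 192.168.1.1
        -> nmcli con up enp0s3 && nmcli con up enp0s8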

    1.3 Prepare the VM environment
        1.3.1 Optional (Remote VM SSH)
            -> yum install vim git wget zsh
            -> sh -c "$(wget -O- ...)" (Oh My Zsh, which provides a color scheme for the shell)
            -> vi .zshrc -> change to ZSH_THEME="bira" -> source .zshrc (this changes the shell color scheme)

    1.4 Create the cluster of 4 VMs by cloning the basic template (node-worker-centOS-1, node-worker-centOS-2, node-worker-centOS-3)
        -> (VM Box): clone node-master-centOS-1 three times, generating a new MAC address each time
        -> (Remote VM): update enp0s3 with ipv4 = 10.0.2.21/22/23 respectively
        -> (Remote VM): update enp0s8 with ipv4 = 192.168.56.21/22/23 respectively
        -> (Remote VM): update hostname = node-worker-centos-1/2/3 respectively
        -> (Remote VM SSH): add host mappings for all nodes (192.168.56.20/21/22/23 node-master/worker-centos-1/2/3) to /etc/hosts (see the example right after this list)
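    The /etc/hosts additions of step 1.4 look like this on every node (hostnames and addresses as chosen above):
        -> vi /etc/hosts, and append:
            192.168.56.20   node-master-centos-1
            192.168.56.21   node-worker-centos-1
            192.168.56.22   node-worker-centos-2
            192.168.56.23   node-worker-centos-3
        -> verify: ping node-worker-centos-1 (from the master) resolves to 192.168.56.21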

    1.5 Set up the Kubernetes cluster (1 master, 3 workers)
        -> Initialize the master node
            -> (root@node-master-centos-1 ~) kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.20
            -> pod-network-cidr=10.244.0.0/16 is chosen because it is the default CIDR expected by the flannel pod network add-on deployed later
     1.1.1 (Host) Download centOS 8 image (CentOS-8.1.1911-x86_64-dvd1.iso) from http://isoredirect.centos.org/centos/8/isos/x86_64/
     1.1.2 Install Oracle VM Box from https://www.virtualbox.org/wiki/Downloads
    
     1.3.4 Turn off selinux (Remote VM SSH) 
             -> setenforce 0
             -> sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config (make the change persistent across reboots)
     1.3.5 Install JDK 8
             -> (Remote VM SSH):  yum install java-1.8.0-openjdk-devel
             -> (Remote VM SSH):  
                 -> vi /etc/profile, add "export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64" and "export PATH=$JAVA_HOME/bin:$PATH"
                 -> source /etc/profile (to avoid duplicated path setting, better skip this step, if 1.3.6 is to be performed)
             -> (Remote VM SSH):  to verify, run javac -version; java -version; which javac; which java; echo $JAVA_HOME; echo $PATH;
    
     1.3.6 Install Apache Maven
             -> (Remote VM SSH): 
                 -> cd /opt
                 -> wget https://www.strategylions.com.au/mirror/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
                 -> tar xzvf apache-maven-3.6.3-bin.tar.gz
                 -> vi /etc/profile
                 -> add "export PATH=/opt/apache-maven-3.6.3/bin:$PATH"
                 -> source /etc/profile (once is enough)
                 -> to verify, mvn -v
    
     1.3.7 Install Python,  Virtual Env, Tensorflow
             -> (Remote VM SSH) Install Python3 
                 -> yum update -y (update all installed packages)
                 -> yum install gcc openssl-devel bzip2-devel libffi-devel -y
                 -> verify python3:  python3
             -> (Remote VM SSH) Install VirtualEnv and Tensorflow
                 -> python3 -m venv --system-site-packages ./venv
                 -> source ./venv/bin/activate  # sh, bash, or zsh
                 -> pip install --upgrade pip
                 -> pip install --upgrade requests bs4 numpy torch scipy (and so on)
                 -> pip install tensorflow==1.15 (tf2.3.x does not work well on my platform)
    
     1.3.8 Install Kubernetes and Docker (Remote VM SSH) 
             -> Turn off selinux  
                 -> setenforce 0
                 -> sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config  (sed -i "s/old text/new text/g" file)
             -> Stop and Disable Firewall
                 -> systemctl stop firewalld
                 -> systemctl disable firewalld
             -> Disable devices and files for paging and swapping
                 -> swapoff -a
                 -> yes | cp /etc/fstab /etc/fstab_bak  ( create a bak file)
                 -> cat /etc/fstab_bak | grep -v swap > /etc/fstab (keep everything except the line containing 'swap', i.e. remove the swap entry)
             -> Re-configure network adaptor
                 -> enable br_netfilter
                     -> vi /etc/modules-load.d/k8s.conf
                         -> insert "br_netfilter"
                     -> modprobe br_netfilter
                 -> set sysctl settings
                     -> vi /etc/sysctl.d/k8s.conf
                         -> net.bridge.bridge-nf-call-ip6tables = 1
                         -> net.bridge.bridge-nf-call-iptables = 1
                     -> sysctl --system
                 -> Firewall (k8s uses TCP ports 6443, 2379-2380, 10250-10255, which need to be enabled)
                     -> systemctl enable firewalld
                     -> systemctl start firewalld
                     -> firewall-cmd --permanent --add-port=6443/tcp
                     -> firewall-cmd --permanent --add-port=2379-2380/tcp
                     -> firewall-cmd --permanent --add-port=10250-10255/tcp
                     -> firewall-cmd --reload
                 -> Enable network modules
                     -> vi /etc/sysconfig/modules/ipvs.modules
                         -> insert 
                             -> modprobe -- ip_vs
                             -> modprobe -- ip_vs_rr
                             -> modprobe -- ip_vs_wrr
                             -> modprobe -- ip_vs_sh
                             -> modprobe -- nf_conntrack_ipv4
                     -> modprobe -- ip_vs
                     -> modprobe -- ip_vs_rr
                     -> modprobe -- ip_vs_wrr
                     -> modprobe -- ip_vs_sh
                     -> modprobe -- nf_conntrack_ipv4
                     -> verify:  cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4 (shows 5 rows)
             -> Install Kubernetes
                 -> Set up repository
                     -> vi /etc/yum.repos.d/kubernetes.repo, and insert:
                         [kubernetes]
                         name=Kubernetes
                         baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
                         enabled=1
                         gpgcheck=1
                         repo_gpgcheck=1
                         gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    
                 -> Install K8s
                     -> yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
                     -> systemctl enable kubelet
                     -> systemctl start kubelet
                     -> systemctl status kubelet (error255)
                     -> journalctl -xe (missing yaml file /var/lib/kubelet/config.yaml which is expected. )
             -> Install Docker
                 -> Set up repository
                     -> yum install -y yum-utils
                     -> yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
                 -> Install & Run Docker
                     -> yum install docker-ce docker-ce-cli containerd.io
                     -> systemctl enable docker
                     -> systemctl start docker
                     -> verify: docker run hello-world
                     -> verify: docker run -it ubuntu bash
                 -> Update Docker Cgroup
                     -> docker info | grep Cgroup  (shows Cgroup Driver: cgroupfs; this needs to be changed to systemd to align with K8s)
                     -> vi /etc/docker/daemon.json, insert:
                         {
                             "exec-opts":["native.cgroupdriver=systemd"]
                         }
                     -> systemctl restart docker
                     -> verify: docker info | grep Cgroup
    
             -> Install node.JS and npm
                 -> yum install epel-release (access to the EPEL repository)
                 -> yum install nodejs (it installs nodeJS and npm)
                 -> verify: node --version (v10.21.0)
                 -> verify: npm --version (6.14.4)
    
             After running the kubeadm init command of step 1.5 on the master, the results below are shown:
             # kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.20
             [init] Using Kubernetes version: v1.20.0
             [preflight] Running pre-flight checks
                     [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
             [preflight] Pulling images required for setting up a Kubernetes cluster
             [preflight] This might take a minute or two, depending on the speed of your internet connection
             [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
             [certs] Using certificateDir folder "/etc/kubernetes/pki"
             [certs] Generating "ca" certificate and key
             [certs] Generating "apiserver" certificate and key
             [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node-master-centos-1] and IPs [10.96.0.1 192.168.56.20]
             [certs] Generating "apiserver-kubelet-client" certificate and key
             [certs] Generating "front-proxy-ca" certificate and key
             [certs] Generating "front-proxy-client" certificate and key
             [certs] Generating "etcd/ca" certificate and key
             [certs] Generating "etcd/server" certificate and key
             [certs] etcd/server serving cert is signed for DNS names [localhost node-master-centos-1] and IPs [192.168.56.20 127.0.0.1 ::1]
             [certs] Generating "etcd/peer" certificate and key
             [certs] etcd/peer serving cert is signed for DNS names [localhost node-master-centos-1] and IPs [192.168.56.20 127.0.0.1 ::1]
             [certs] Generating "etcd/healthcheck-client" certificate and key
             [certs] Generating "apiserver-etcd-client" certificate and key
             [certs] Generating "sa" key and public key
             [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
             [kubeconfig] Writing "admin.conf" kubeconfig file
             [kubeconfig] Writing "kubelet.conf" kubeconfig file
             [kubeconfig] Writing "controller-manager.conf" kubeconfig file
             [kubeconfig] Writing "scheduler.conf" kubeconfig file
             [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
             [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
             [kubelet-start] Starting the kubelet
             [control-plane] Using manifest folder "/etc/kubernetes/manifests"
             [control-plane] Creating static Pod manifest for "kube-apiserver"
             [control-plane] Creating static Pod manifest for "kube-controller-manager"
             [control-plane] Creating static Pod manifest for "kube-scheduler"
             [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
             [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
             [apiclient] All control plane components are healthy after 12.004852 seconds
             [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
             [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
             [upload-certs] Skipping phase. Please see --upload-certs
             [mark-control-plane] Marking the node node-master-centos-1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
             [mark-control-plane] Marking the node node-master-centos-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
             [bootstrap-token] Using token: m5ohft.9xi6nyvgu73sxu68
             [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
             [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
             [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
             [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
             [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
             [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
             [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
             [addons] Applied essential addon: CoreDNS
             [addons] Applied essential addon: kube-proxy
    
             Your Kubernetes control-plane has initialized successfully!
    
             To start using your cluster, you need to run the following as a regular user:
    
               mkdir -p $HOME/.kube
               sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
               sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
             Alternatively, if you are the root user, you can run:
    
               export KUBECONFIG=/etc/kubernetes/admin.conf
    
             You should now deploy a pod network to the cluster.
             Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
               https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
             Then you can join any number of worker nodes by running the following on each as root:
    
             kubeadm join 192.168.56.20:6443 --token m5ohft.9xi6nyvgu73sxu68 \
                 --discovery-token-ca-cert-hash sha256:b04371eb9c969f27a0d8f39761e99b7fb88b33c4bf06ba2e0faa0c1c28ac3be0
    
    
         -> (root@node-master-centos-1 ~) vi /etc/kubernetes/admin.conf, and edit to replace "192.168.56.20" to "node-master-centos-1" (use hostname instead of ip address)
         -> (root@node-master-centos-1 ~) sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
         -> (root@node-master-centos-1 ~) sudo chown $(id -u):$(id -g) $HOME/.kube/config
         -> (root@node-master-centos-1 ~) kubectl get nodes
                 NAME                   STATUS     ROLES                  AGE    VERSION
                 node-master-centos-1   NotReady   control-plane,master   4m3s   v1.20.0
         -> (root@node-master-centos-1 ~) kubeadm token create --print-join-command (to obtain the command to be run on the workers)
    
         -> By now, the k8s master is initialized, with the pod network set to 10.244.0.0/16 and the API server at https://node-master-centos-1:6443. 
         At this stage, the node-master-centos-1 node is NotReady because no Pod Network has been deployed yet; for that we use flannel.yaml (one of the pod network add-ons), as sketched below.
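     A sketch of the flannel deployment from the master (the manifest URL below is the one commonly used around this Kubernetes version; the flannel project has since moved, so check the current flannel documentation for the up-to-date location):
         -> (root@node-master-centos-1 ~) kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
         -> verify: kubectl get pods -n kube-system | grep flannel   (the kube-flannel-ds pods should reach Running)
         -> verify: kubectl get nodes   (node-master-centos-1 should move from NotReady to Ready)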
    
    
     -> Join Worker Nodes
         -> synchronize the system time to avoid X509 certificate errors during kubeadm join. The commands below update the time offsets and adjust the system time in one step.
             -> (root@node-worker-centos-1/2/3 ~) chronyc -a 'burst 4/4'
             -> (root@node-worker-centos-1/2/3 ~) chronyc -a makestep
         -> join the worker to cluster
             -> (root@node-worker-centos-1/2/3 ~) kubeadm join node-master-centos-1:6443 --token cjxoym.okfgvzd8t241grea     --discovery-token-ca-cert-hash sha256:b04371eb9c969f27a0d8f39761e99b7fb88b33c4bf06ba2e0faa0c1c28ac3be0 --v=2
         -> check node worker status on Master
             -> (root@node-master-centos-1 ~) kubectl get nodes -o wide
                     NAME                   STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION          CONTAINER-RUNTIME
                     node-master-centos-1   Ready    control-plane,master   4h12m   v1.20.0   192.168.56.20   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
                     node-worker-centos-1   Ready    <none>                 162m    v1.20.0   192.168.56.21   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
                     node-worker-centos-2   Ready    <none>                 142m    v1.20.0   192.168.56.22   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
                     node-worker-centos-3   Ready    <none>                 4m41s   v1.20.0   192.168.56.23   <none>        CentOS Linux 8   4.18.0-147.el8.x86_64   docker://20.10.0
    
             -> (root@node-master-centos-1 ~) kubectl get pods -A
                     NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
                     kube-system   coredns-74ff55c5b-sfjvd                        1/1     Running   0          112m
                     kube-system   coredns-74ff55c5b-whjrs                        1/1     Running   0          112m
                     kube-system   etcd-node-master-centos-1                      1/1     Running   0          112m
                     kube-system   kube-apiserver-node-master-centos-1            1/1     Running   0          112m
                     kube-system   kube-controller-manager-node-master-centos-1   1/1     Running   0          112m
                     kube-system   kube-flannel-ds-dmqmw                          1/1     Running   0          61m
                     kube-system   kube-flannel-ds-hqwqt                          1/1     Running   0          2m51s
                     kube-system   kube-flannel-ds-qr9ml                          1/1     Running   0          22m
                     kube-system   kube-proxy-4dpk9                               1/1     Running   0          22m
                     kube-system   kube-proxy-6tltc                               1/1     Running   0          2m51s
                     kube-system   kube-proxy-t6k24                               1/1     Running   0          112m
                     kube-system   kube-scheduler-node-master-centos-1            1/1     Running   0          112m
    
     By now, the kubernetes cluster is set up. Because the VMs do not run continuously, differences in system time between the VMs may cause X509 or other errors.
     It may therefore be necessary to set up time auto-sync scripts that run at OS startup, as sketched below.
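     One way to do this with chrony (chronyd ships with CentOS 8; run on every VM; the @reboot cron entry is only an illustrative assumption):
         -> systemctl enable --now chronyd            (start chronyd now and at every boot)
         -> chronyc makestep                          (step the clock immediately instead of slewing)
         -> optionally add a @reboot cron entry so the clock is corrected right after each start:
             -> crontab -e
             -> insert: @reboot /usr/bin/chronyc -a 'burst 4/4' && /usr/bin/chronyc -a makestep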