docker 1.12 swarm mode: how to connect to another container on the overlay network, and how to use load balancing?


I used docker-machine on macOS and created a swarm mode cluster like this:

➜  docker-machine create --driver virtualbox docker1
➜  docker-machine create --driver virtualbox docker2
➜  docker-machine create --driver virtualbox docker3

➜  config docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
docker1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.0-rc4
docker2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.0-rc4
docker3   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.0-rc4


➜  config docker-machine ssh docker1
docker@docker1:~$ docker swarm init
No --secret provided. Generated random secret:
    b0wcyub7lbp8574mk1oknvavq

Swarm initialized: current node (8txt830ivgrxxngddtx7k4xe4) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join --secret b0wcyub7lbp8574mk1oknvavq \
    --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 \
    10.0.2.15:2377


# on host docker2 and host docker3, I run the command to join the cluster:

docker@docker2:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.

docker@docker3:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.

# on docker1:
docker@docker1:~$ docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
8txt830ivgrxxngddtx7k4xe4 *  docker1   Accepted    Ready   Active        Leader
9fliuzb9zl5jcqzqucy9wfl4y    docker2   Accepted    Ready   Active
c4x8rbnferjvr33ff8gh4c6cr    docker3   Accepted    Ready   Active
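Note that the `--secret`/`--ca-hash` flags shown above are specific to the 1.12 release candidates; in Docker 1.12 GA they were replaced by join tokens. A rough GA-era equivalent (the token value in the comment is illustrative, not real):

```shell
# On the manager: print the full join command for workers.
docker swarm join-token worker

# On each worker: run the command it prints, along the lines of
#   docker swarm join --token SWMTKN-1-<cluster-token> 192.168.99.100:2377
```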
Then I created a network, mynet, with the overlay driver on docker1. First question: why can't I see this network on the other docker hosts?

docker@docker1:~$ docker network create --driver overlay mynet
a1v8i656el5d3r45k985cn44e
docker@docker1:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5ec55ffde8e4        bridge              bridge              local
83967a11e3dd        docker_gwbridge     bridge              local
7f856c9040b3        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
829a614aa278        none                null                local

docker@docker2:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
da07b3913bd4        bridge              bridge              local
7a2e627634b9        docker_gwbridge     bridge              local
e8971c2b5b21        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
c37de5447a14        none                null                local

docker@docker3:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
06eb8f0bad11        bridge              bridge              local
fb5e3bcae41c        docker_gwbridge     bridge              local
e167d97cd07f        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
6540ece8e146        none                null                local
On docker1, I created an nginx service that echoes the container hostname on its index page:

docker@docker1:~$ docker service create --name nginx --network mynet --replicas 1 -p 80:80 dhub.yunpro.cn/shenshouer/nginx:hostname
9d7xxa8ukzo7209r30f0rmcut
docker@docker1:~$ docker service tasks nginx
ID                         NAME     SERVICE  IMAGE                                     LAST STATE              DESIRED STATE  NODE
0dvgh9xfwz7301jmsh8yc5zpe  nginx.1  nginx    dhub.yunpro.cn/shenshouer/nginx:hostname  Running 12 seconds ago  Running        docker3
Second question: I cannot access the service via host docker1's IP; I only get a response when hitting docker3's IP:

➜  tools curl 192.168.99.100
curl: (52) Empty reply from server
➜  tools curl 192.168.99.102
fda9fb58f9d4
So it seems there is no load balancing. How do I use the built-in load balancing?
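For reference, when the ingress routing mesh is healthy, a port published with `-p 80:80` should answer on every node's IP, no matter which node runs the task. A quick check (node IPs taken from the `docker-machine ls` output above):

```shell
# Each request should return a container hostname once the mesh works;
# the task actually serving it may live on any node.
for ip in 192.168.99.100 192.168.99.101 192.168.99.102; do
  curl -s "http://$ip:80"; echo "  <- via $ip"
done
```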

Then I created another service on the same network from the busybox image, to test ping:

docker@docker1:~$ docker service create --name busybox --network mynet --replicas 1 busybox sleep 3000
akxvabx66ebjlak77zj6x1w4h
docker@docker1:~$ docker service tasks busybox
ID                         NAME       SERVICE  IMAGE    LAST STATE              DESIRED STATE  NODE
9yc3svckv98xtmv1d0tvoxbeu  busybox.1  busybox  busybox  Running 11 seconds ago  Running        docker1

# on host docker3. I got the container name and the container IP to ping test:

docker@docker3:~$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
fda9fb58f9d4        dhub.yunpro.cn/shenshouer/nginx:hostname   "sh -c /entrypoint.sh"   7 minutes ago       Up 7 minutes        80/tcp, 443/tcp     nginx.1.0dvgh9xfwz7301jmsh8yc5zpe

docker@docker3:~$ docker inspect fda9fb58f9d4
...

            "Networks": {
                "ingress": {
                    "IPAMConfig": {
                        "IPv4Address": "10.255.0.7"
                    },
                    "Links": null,
                    "Aliases": [
                        "fda9fb58f9d4"
                    ],
                    "NetworkID": "bpoqtk71o6qor8t2gyfs07yfc",
                    "EndpointID": "98c98a9cc0fcc71511f0345f6ce19cc9889e2958d9345e200b3634ac0a30edbb",
                    "Gateway": "",
                    "IPAddress": "10.255.0.7",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:ff:00:07"
                },
                "mynet": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.0.3"
                    },
                    "Links": null,
                    "Aliases": [
                        "fda9fb58f9d4"
                    ],
                    "NetworkID": "a1v8i656el5d3r45k985cn44e",
                    "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
                    "Gateway": "",
                    "IPAddress": "10.0.0.3",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:00:00:03"
                }
            }
        }
    }
]


# on host docker1 :
docker@docker1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b94716e9252e        busybox:latest      "sleep 3000"        2 minutes ago       Up 2 minutes                            busybox.1.9yc3svckv98xtmv1d0tvoxbeu
docker@docker1:~$ docker exec -it b94716e9252e ping nginx.1.0dvgh9xfwz7301jmsh8yc5zpe
ping: bad address 'nginx.1.0dvgh9xfwz7301jmsh8yc5zpe'
docker@docker1:~$ docker exec -it b94716e9252e ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
90 packets transmitted, 0 packets received, 100% packet loss
Third question: how can containers on the same network communicate with each other?

Details of the mynet network on each host:

docker@docker1:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5ec55ffde8e4        bridge              bridge              local
83967a11e3dd        docker_gwbridge     bridge              local
7f856c9040b3        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
829a614aa278        none                null                local
docker@docker1:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "b94716e9252e6616f0f4c81e0c7ef674d7d5f4fafe931953fced9ef059faeb5f": {
                "Name": "busybox.1.9yc3svckv98xtmv1d0tvoxbeu",
                "EndpointID": "794be0e92b34547e44e9a5e697ab41ddd908a5db31d0d31d7833c746395534f5",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]


docker@docker2:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
da07b3913bd4        bridge              bridge              local
7a2e627634b9        docker_gwbridge     bridge              local
e8971c2b5b21        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
c37de5447a14        none                null                local

docker@docker3:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
06eb8f0bad11        bridge              bridge              local
fb5e3bcae41c        docker_gwbridge     bridge              local
e167d97cd07f        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
6540ece8e146        none                null                local

docker@docker3:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "fda9fb58f9d46317ef1df60e597bd14214ec3fac43e32f4b18a39bb92925aa7e": {
                "Name": "nginx.1.0dvgh9xfwz7301jmsh8yc5zpe",
                "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]

So, the fourth question: is there a built-in KV store?

Question 1: Networks on the other hosts are created on demand; swarm creates the network on a host only when it schedules a task there.
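A quick way to observe this on-demand behaviour (a sketch, assuming the nginx service created above): scale the service so a task lands on every node, then re-check a worker.

```shell
# Scale so each node gets a task; swarm creates mynet on a node only
# when a task attached to that network is scheduled there.
docker service scale nginx=3

# Then, on docker2, mynet should now appear:
docker network ls | grep mynet
```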

Question 2: Load balancing works out of the box, so there may be something wrong with your swarm cluster. You need to check the iptables and IPVS rules.

Question 3: Containers on the same overlay network (mynet in your case) can communicate with each other; docker has a built-in DNS server that resolves container names to IP addresses.
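In other words, resolve the *service* name rather than the task name (`nginx.1.<task-id>` is not what failed lookups should be tested with). A sketch, reusing the busybox container ID from the transcript above:

```shell
# From any container attached to mynet, the service name resolves via the
# embedded DNS server (listening on 127.0.0.11 inside the container):
docker exec -it b94716e9252e nslookup nginx
docker exec -it b94716e9252e ping -c 3 nginx
```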


Question 4: Yes, there is. — Thank you for your reply. I found the problem: the ipvs kernel module is not loaded in the boot2docker OS, which is why load balancing was broken. I switched the OS to Debian and there is no problem anymore.