Linux: can't access a Docker container from the host OS (or the host OS from inside the container)
I have a fresh Docker installation on CentOS 7 (a XEN VPS). I started a simple container on the server and published a port:
docker run --name mynginx2 -p 81:80 -d nginx
I can enter the container shell and ping another container, but unfortunately I can't reach the container from the host OS:
curl localhost:81
curl: (56) Recv failure: Connection reset by peer
What I have already tried:
- restarting docker
- reinstalling docker
- rebooting the server
- disabling IPv6 on the server
- binding to the VPS's public IPv4 address and to 0.0.0.0
- changing the port and the docker image
- killing all other Linux processes
- disabling SELinux and firewalld, and flushing the iptables rules
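Since flushing iptables was one of the steps above, a reasonable next check is to restart the daemon and confirm the listener comes back. A minimal sketch (port 81 as in the run command above):

```shell
# Restart dockerd so it can recreate the iptables chains and rules it
# depends on (flushing iptables removes them, and Docker will not
# restore them on a running daemon).
sudo systemctl restart docker

# Confirm the published port is bound again on the host
# (ss is the netstat replacement on EL7; look for docker-proxy on :81):
sudo ss -tlnp | grep ':81'
```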
netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 746/dotnet
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 432/sshd
tcp 0 0 127.0.0.1:7070 0.0.0.0:* LISTEN 433/dotnet
tcp6 0 0 :::81 :::* LISTEN 11904/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 432/sshd
udp 0 0 127.0.0.1:323 0.0.0.0:* 443/chronyd
udp6 0 0 ::1:323 :::* 443/chronyd
Routes:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gw-XXX-25-185.u 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-d35d4c4caff1
185.25.XXX.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0
Interfaces:
br-d35d4c4caff1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
ether 02:42:f5:d8:05:f5 txqueuelen 0 (Ethernet)
RX packets 4093775 bytes 1107084410 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4252000 bytes 798922091 (761.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:32:23:96:40 txqueuelen 0 (Ethernet)
RX packets 17 bytes 476 (476.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2095 bytes 88246 (86.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 185.25.XXX.XX netmask 255.255.252.0 broadcast 185.25.XXX.255
ether 00:16:3e:00:80:8b txqueuelen 1000 (Ethernet)
RX packets 4093775 bytes 1107084410 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4252000 bytes 798922091 (761.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 1182 bytes 97010 (94.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1182 bytes 97010 (94.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf4dec48: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 96:0c:b2:76:14:69 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Docker version:
docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:46:54 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:28 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Process list:
root 5138 0.0 2.0 491904 38888 ? Ssl 16:40 0:03 /usr/bin/containerd
root 11910 0.0 0.1 107692 2940 ? Sl 18:31 0:00 \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/c3
root 11927 0.0 0.1 10620 3324 ? Ss 18:31 0:00 \_ nginx: master process nginx -g daemon off;
101 11969 0.0 0.0 11016 1512 ? S 18:31 0:00 \_ nginx: worker process
root 11735 0.0 3.4 511612 65192 ? Ssl 18:31 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 11904 0.0 0.1 217044 3212 ? Sl 18:31 0:00 \_ /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 81 -container-ip 172.17.0.2 -container
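The docker-proxy line above shows the forwarding target (172.17.0.2:80), which makes it possible to split the test in two: hitting the container directly versus going through the published port. A sketch, assuming the container IP taken from the process list:

```shell
# Direct hit on the container IP, bypassing docker-proxy/DNAT entirely:
curl -s -o /dev/null -w 'direct:  %{http_code}\n' http://172.17.0.2:80/

# Through the published port (the docker-proxy / NAT path):
curl -s -o /dev/null -w 'proxied: %{http_code}\n' http://127.0.0.1:81/

# If the direct request returns 200 while :81 is reset, the container is
# healthy and the fault lies in the NAT/proxy layer on the host.
```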
Docker info:
docker info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 19.03.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.789GiB
Name: vps-32907
ID: 4W6H:34K5:GRRU:RZZV:JJVU:YNT6:ITN5:SSDO:PIDU:OFUY:WW73:6J5T
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Docker inspect:
I would appreciate any suggestions.

Comments:

Q: Inside the container, can you curl localhost:80? You should see the "Welcome to nginx!" page.
A: Yes, no problem.
Q: You said you flushed iptables. Docker relies on iptables to translate the published port (:81) to 172.17.0.2:80, where nginx is listening. If you have no production containers, you can restart the docker daemon to recreate anything that may have been deleted manually. In iptables you should see a DNAT rule for the nginx container:
sudo iptables -t nat -L
==> DNAT tcp -- anywhere anywhere tcp dpt:81 to:172.17.0.2:80
A: I've tried the iptables rules, but nothing changed. If needed, I can also give you SSH access to the server; it's a test VPS.
Q: I compared it with my configuration and didn't find any unusual differences. It may be something subtle that RHEL is missing. Even after SSHing into the VPS I'm out of ideas... it should just work. It's a five-minute setup on Ubuntu.
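The DNAT rule mentioned in the comments lives in Docker's own chain, and on an old 3.10 kernel under XEN it is also worth confirming the bridge-netfilter and forwarding sysctls, since published ports need both. A sketch:

```shell
# The published port should appear as a DNAT rule in the DOCKER chain:
sudo iptables -t nat -nL DOCKER | grep 'dpt:81'
# e.g. DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:81 to:172.17.0.2:80

# Bridged traffic must traverse iptables and IP forwarding must be on;
# if either of these prints 0, published ports will not work:
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
```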