Kubernetes VPN to access cluster services/pods: cannot ping anything except the OpenVPN server
I'm trying to set up a VPN to access the cluster's workloads without exposing a public endpoint. The service is deployed with the OpenVPN helm chart, and Kubernetes is deployed with Rancher v2.3.2.

What I did:

- Replaced the L4 load balancer with simple service discovery
- Edited the configMap to allow TCP to pass through the load balancer and reach the VPN

What works / what doesn't:

- OpenVPN clients can connect successfully
- Cannot ping public servers
- Cannot ping Kubernetes services or pods
- Can ping the openvpn cluster IP "10.42.2.11"

My files:
vars.yml
---
replicaCount: 1
nodeSelector:
  openvpn: "true"
openvpn:
  OVPN_K8S_POD_NETWORK: "10.42.0.0"
  OVPN_K8S_POD_SUBNET: "255.255.0.0"
  OVPN_K8S_SVC_NETWORK: "10.43.0.0"
  OVPN_K8S_SVC_SUBNET: "255.255.0.0"
persistence:
  storageClass: "local-path"
service:
  externalPort: 444
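As a sanity check, the network/netmask pairs in vars.yml can be converted to CIDR form to confirm they cover the addresses I expect to reach (a minimal sketch using Python's ipaddress module; 10.42.0.0/16 pods and 10.43.0.0/16 services are Rancher's defaults):

```python
import ipaddress

# Network/netmask pairs from vars.yml above
pod_net = ipaddress.ip_network("10.42.0.0/255.255.0.0")
svc_net = ipaddress.ip_network("10.43.0.0/255.255.0.0")

print(pod_net)  # 10.42.0.0/16
print(svc_net)  # 10.43.0.0/16

# The one IP that is reachable (the openvpn pod) sits inside the pod range,
# so the subnet definitions themselves look correct.
print(ipaddress.ip_address("10.42.2.11") in pod_net)  # True
```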
The connection works, but I cannot reach any IP in the cluster. The only IP I can reach is the openvpn cluster IP.

openvpn.conf:
server 10.240.0.0 255.255.0.0
verb 3
key /etc/openvpn/certs/pki/private/server.key
ca /etc/openvpn/certs/pki/ca.crt
cert /etc/openvpn/certs/pki/issued/server.crt
dh /etc/openvpn/certs/pki/dh.pem
key-direction 0
keepalive 10 60
persist-key
persist-tun
proto tcp
port 443
dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
push "route 10.42.2.11 255.255.255.255"
push "route 10.42.0.0 255.255.0.0"
push "route 10.43.0.0 255.255.0.0"
push "dhcp-option DOMAIN-SEARCH openvpn.svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"
client.ovpn
client
nobind
dev tun
remote xxxx xxx tcp
CERTS CERTS
dhcp-option DOMAIN openvpn.svc.cluster.local
dhcp-option DOMAIN svc.cluster.local
dhcp-option DOMAIN cluster.local
dhcp-option DOMAIN online.net
I don't really know how to debug this. I'm on Windows.

Routes (output of the route command on the client):
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 livebox.home 255.255.255.255 U 0 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 256 0 0 eth0
192.168.1.17 0.0.0.0 255.255.255.255 U 256 0 0 eth0
192.168.1.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth0
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth0
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 eth1
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 eth1
0.0.0.0 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.2.11 10.240.0.5 255.255.255.255 U 0 0 0 eth1
10.42.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.43.0.0 10.240.0.5 255.255.0.0 U 0 0 0 eth1
10.240.0.1 10.240.0.5 255.255.255.255 U 0 0 0 eth1
127.0.0.0 0.0.0.0 255.0.0.0 U 256 0 0 lo
127.0.0.1 0.0.0.0 255.255.255.255 U 256 0 0 lo
127.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
224.0.0.0 0.0.0.0 240.0.0.0 U 256 0 0 lo
255.255.255.255 0.0.0.0 255.255.255.255 U 256 0 0 lo
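The client picks routes by longest prefix match, and the table above shows both cluster ranges pointing at the tunnel gateway 10.240.0.5 on eth1. A minimal sketch of that lookup logic (simplified table, using Python's ipaddress module; the default route is written as 0.0.0.0/0 here):

```python
import ipaddress

# Simplified version of the client routing table above:
# (destination network, gateway, interface)
table = [
    (ipaddress.ip_network("0.0.0.0/0"), "livebox.home", "eth0"),    # default route
    (ipaddress.ip_network("192.168.1.0/24"), "on-link", "eth0"),    # home LAN
    (ipaddress.ip_network("10.42.2.11/32"), "10.240.0.5", "eth1"),  # openvpn pod
    (ipaddress.ip_network("10.42.0.0/16"), "10.240.0.5", "eth1"),   # pod network
    (ipaddress.ip_network("10.43.0.0/16"), "10.240.0.5", "eth1"),   # service network
]

def lookup(ip: str):
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(ip)
    matches = [r for r in table if addr in r[0]]
    return max(matches, key=lambda r: r[0].prefixlen)

# Cluster traffic is sent into the tunnel, so routing on the client side
# is not the problem; the packets must be going missing on the server side.
print(lookup("10.43.0.10")[2])  # eth1
print(lookup("8.8.8.8")[2])     # eth0
```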
And finally, ifconfig:
inet 192.168.1.17 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 2a01:cb00:90c:5300:603c:f8:703e:a876 prefixlen 64 scopeid 0x0<global>
inet6 2a01:cb00:90c:5300:d84b:668b:85f3:3ba2 prefixlen 128 scopeid 0x0<global>
inet6 fe80::603c:f8:703e:a876 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:d8:61:31:22:32 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.0.6 netmask 255.255.255.252 broadcast 10.240.0.7
inet6 fe80::b9cf:39cc:f60a:9db2 prefixlen 64 scopeid 0xfd<compat,link,site,host>
ether 00:ff:42:04:53:4d (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 1500
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0xfe<compat,link,site,host>
loop (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I don't know if this is the correct answer, but I fixed it by adding a sidecar to my pods to set

net.ipv4.ip_forward=1

which solved the problem. For anyone who wants a working example, this goes into the openvpn deployment, alongside the container definition:
initContainers:
- args:
  - -w
  - net.ipv4.ip_forward=1
  command:
  - sysctl
  image: busybox
  name: openvpn-sidecar
  securityContext:
    privileged: true
You can also just set the ipForwardInitContainer option to "true" in values.yaml.