Kubernetes: can't establish an MQTT connection to a VerneMQ cluster in k8s behind the Istio proxy
I'm installing k8s on an on-prem cluster. For testing I use a single-node cluster on a VM, set up with kubeadm. My requirements include running an MQTT cluster (VerneMQ) in k8s, with external access via ingress (Istio).
Without deploying an ingress, I can connect (mosquitto_sub) through a NodePort or LoadBalancer service.
Istio was installed with istioctl install --set profile=demo.
The problem
I'm trying to access the VerneMQ broker from outside the cluster. Ingress (Istio Gateway) seems like the perfect solution here, but I can't establish a TCP connection to the broker, neither through the ingress IP nor directly through the svc/vernemq IP.
So, how do I establish this TCP connection from an external client through the Istio ingress?
What I've tried
I created two namespaces:
- exposed-with-istio, with Istio proxy injection
- exposed-with-loadbalancer, without the Istio proxy
In the exposed-with-loadbalancer namespace I deployed VerneMQ with a LoadBalancer service. It works, which is how I know VerneMQ is reachable: mosquitto_sub -h <host> -p 1883 -t hello subscribes fine, where host is the ClusterIP or ExternalIP of svc/vernemq. The dashboard is reachable at <host>:8888/status, and "Clients online" increments on the dashboard.
In exposed-with-istio I deployed VerneMQ with a ClusterIP service, an Istio Gateway, and a VirtualService.
As soon as the proxy is injected, mosquitto_sub can't subscribe, neither through the svc/vernemq IP nor through the Istio ingress (gateway) IP. The command hangs forever, constantly retrying.
Meanwhile, the VerneMQ dashboard endpoint is accessible both through the service IP and through the Istio gateway.
I suppose the Istio proxy has to be configured for MQTT to work.
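As a first check, it may help to confirm how Envoy classified port 1883. These commands are a suggestion, not from the original post; <vernemq-pod> is a placeholder for the actual pod name. With the tcp-mqtt port name, the listener and cluster should show up as plain TCP:

istioctl proxy-config listeners <vernemq-pod> -n exposed-with-istio --port 1883
istioctl proxy-config clusters <vernemq-pod> -n exposed-with-istio --port 1883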
Here is the istio-ingressgateway service:
kubectl describe svc istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=installed-state
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.7.0
release=istio
Annotations:
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: 10.100.213.45
LoadBalancer Ingress: 192.168.100.240
Port: status-port 15021/TCP
TargetPort: 15021/TCP
Port: http2 80/TCP
TargetPort: 8080/TCP
Port: https 443/TCP
TargetPort: 8443/TCP
Port: tcp 31400/TCP
TargetPort: 31400/TCP
Port: tls 15443/TCP
TargetPort: 15443/TCP
Session Affinity: None
External Traffic Policy: Cluster
...
Below are debug logs from the istio-proxy sidecar:
kubectl logs svc/vernemq -n test -c istio-proxy
2020-08-24T07:57:52.294477Z debug envoy filter original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug envoy filter tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug envoy filter http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug envoy filter [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug envoy pool creating a new connection
2020-08-24T07:57:52.294671Z debug envoy pool [C5646] connecting
2020-08-24T07:57:52.294684Z debug envoy connection [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug envoy connection [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug envoy pool queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug envoy conn_handler [C5645] new connection
2020-08-24T07:57:52.294768Z debug envoy connection [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug envoy connection [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug envoy pool [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug envoy filter [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug envoy connection [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug envoy connection [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug envoy wasm wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug envoy conn_handler [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug envoy pool [C5646] connection destroyed
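What stands out in these logs (my interpretation, not part of the original post): the sidecar accepts the inbound connection and then dials 127.0.0.1:1883, which fails with error 111 (ECONNREFUSED), meaning nothing is listening on loopback inside the pod. One way to verify what the broker actually binds to, assuming the VerneMQ image ships net tools and with <vernemq-pod> as a placeholder:

# If 1883 is bound only to the pod IP (not 0.0.0.0 or 127.0.0.1),
# Istio's inbound redirect to localhost will be refused.
kubectl exec -it <vernemq-pod> -n exposed-with-istio -c vernemq -- netstat -tln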
The istio-ingressgateway logs are shown further below, after my configuration. The IP 10.244.243.205 there belongs to the VerneMQ pod, not the service (probably intentional).
My configuration
vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
    - port: 8888
      name: http-dashboard
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 9001
      name: tcp-mqtt-ws
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: vernemq/vernemq
          ports:
            - containerPort: 1883
              name: tcp-mqtt
              protocol: TCP
            - containerPort: 8080
              name: tcp-mqtt-ws
            - containerPort: 8888
              name: http-dashboard
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100-9109 # shortened
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_LISTENER__TCP__ALLOWED_PROTOCOL_VERSIONS
              value: "3,4,5"
            - name: DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT
              value: "on"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
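A minimal sketch of another knob worth trying, assuming mTLS on the MQTT port is interfering (this PeerAuthentication is not part of the original manifests; the resource name is illustrative):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: vernemq-mqtt
  namespace: exposed-with-istio
spec:
  selector:
    matchLabels:
      app: vernemq
  mtls:
    mode: PERMISSIVE
  # disable mTLS only for the raw-TCP MQTT port
  portLevelMtls:
    1883:
      mode: DISABLE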
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is identical except for the namespace and the Service type ...
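For completeness, a sketch of the only lines that differ in that variant:

apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-loadbalancer
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: LoadBalancer
  # ports identical to the ClusterIP service above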
The Istio resources (DestinationRule and Gateway):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 31400
        name: ...

And here are the istio-ingressgateway logs mentioned earlier:
2020-08-24T08:48:31.536593Z debug envoy filter [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug envoy filter [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug envoy pool creating a new connection
2020-08-24T08:48:31.536778Z debug envoy pool [C13237] connecting
2020-08-24T08:48:31.536784Z debug envoy connection [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug envoy connection [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug envoy pool queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug envoy conn_handler [C13236] new connection
2020-08-24T08:48:31.537181Z debug envoy connection [C13237] connected
2020-08-24T08:48:31.537204Z debug envoy pool [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug envoy filter TCP:onUpstreamEvent(), requestedServerName:
2020-08-24T08:48:31.537880Z debug envoy misc Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug envoy connection [C13237] remote close
2020-08-24T08:48:31.537913Z debug envoy connection [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug envoy pool [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug envoy connection [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug envoy connection [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug envoy conn_handler [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug envoy pool [C13237] connection destroyed
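Envoy listener configuration of the sidecar, with the MQTT-relevant lines first and a larger dump after (this appears to be istioctl proxy-config listeners output):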
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
... Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883 ALL Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000 App: HTTP Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000 ALL Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369 ALL Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 8888 App: HTTP Route: 8888
0.0.0.0 8888 ALL PassthroughCluster
10.107.205.214 9001 ALL Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 9090 App: HTTP Route: 9090
0.0.0.0 9090 ALL PassthroughCluster
10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0 9411 App: HTTP ...
0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4
0.0.0.0 15006 App: TCP TLS Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls; App: TCP TLS Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 Trans: tls Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 ALL Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15006 App: TCP TLS Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0 15010 App: HTTP Route: 15010
0.0.0.0 15010 ALL PassthroughCluster
10.106.166.154 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0 15014 App: HTTP Route: 15014
0.0.0.0 15014 ALL PassthroughCluster
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
10.100.213.45 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
10.100.213.45 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0 20001 App: HTTP Route: 20001
0.0.0.0 20001 ALL PassthroughCluster
10.100.213.45 31400 ALL Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
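Envoy endpoints, again with the MQTT lines pulled out first (this appears to be istioctl proxy-config endpoints output):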
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.101.200.113:9411 HEALTHY OK zipkin
10.106.166.154:15012 HEALTHY OK xds-grpc
10.211.55.14:6443 HEALTHY OK outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53 HEALTHY OK outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153 HEALTHY OK outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010 HEALTHY OK outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012 HEALTHY OK outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014 HEALTHY OK outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017 HEALTHY OK outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053 HEALTHY OK outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080 HEALTHY OK outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443 HEALTHY OK outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443 HEALTHY OK outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080 HEALTHY OK outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443 HEALTHY OK outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021 HEALTHY OK outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443 HEALTHY OK outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400 HEALTHY OK outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000 HEALTHY OK outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411 HEALTHY OK outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686 HEALTHY OK outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090 HEALTHY OK outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001 HEALTHY OK outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090 HEALTHY OK outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883 HEALTHY OK outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369 HEALTHY OK outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888 HEALTHY OK outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001 HEALTHY OK outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053 HEALTHY OK outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883 HEALTHY OK inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369 HEALTHY OK inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888 HEALTHY OK inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001 HEALTHY OK inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000 HEALTHY OK prometheus_stats
127.0.0.1:15020 HEALTHY OK agent
127.0.0.1:44053 HEALTHY OK inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS HEALTHY OK sds-grpc
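And the route configuration (istioctl proxy-config routes):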
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021 istio-ingressgateway.istio-system /*
istiod.istio-system.svc.cluster.local:853 istiod.istio-system /*
20001 kiali.istio-system /*
15010 istiod.istio-system /*
15014 istiod.istio-system /*
vernemq.exposed-with-istio.svc.cluster.local:4369 vernemq /*
vernemq.exposed-with-istio.svc.cluster.local:44053 vernemq /*
kube-dns.kube-system.svc.cluster.local:9153 kube-dns.kube-system /*
8888 vernemq /*
80 istio-egressgateway.istio-system /*
80 istio-ingressgateway.istio-system /*
80 tracing.istio-system /*
grafana.istio-system.svc.cluster.local:3000 grafana.istio-system /*
9411 zipkin.istio-system /*
9090 kiali.istio-system /*
9090 prometheus.istio-system /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local * /*
* /stats/prometheus*
InboundPassthroughClusterIpv4 * /*
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local * /*
InboundPassthroughClusterIpv4 * /*
* /healthz/ready*
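To raise the sidecar's Envoy log level to trace (through the Envoy admin endpoint on localhost:15000) and follow the logs: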
kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace
kubectl logs <pod-name> -c istio-proxy -f
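Istio decides the protocol of a service port from its name prefix, so the MQTT port must be named tcp-something for Envoy to treat it as opaque TCP: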
- port: 1883
  name: tcp-mqtt
  targetPort: 1883
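Here is a complete Gateway, VirtualService, and Service combination that routes plain MQTT through gateway port 31400 and TLS MQTT (SNI passthrough) through 15443: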
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mqtt-domain-tld-gw
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 31400
        name: tcp
        protocol: TCP
      hosts:
        - mqtt.domain.tld
    - port:
        number: 15443
        name: tls
        protocol: TLS
      hosts:
        - mqtt.domain.tld
      tls:
        mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mqtt-domain-tld-vs
spec:
  hosts:
    - mqtt.domain.tld
  gateways:
    - mqtt-domain-tld-gw
  tcp:
    - match:
        - port: 31400
      route:
        - destination:
            host: mqtt
            port:
              number: 1883
  tls:
    - match:
        - port: 15443
          sniHosts:
            - mqtt.domain.tld
      route:
        - destination:
            host: mqtt
            port:
              number: 8883
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mqtt
  name: mqtt
spec:
  ports:
    - name: tcp-mqtt
      port: 1883
      protocol: TCP
      targetPort: 1883
      appProtocol: tcp
    - name: tls-mqtt
      port: 8883
      protocol: TCP
      targetPort: 8883
      appProtocol: tls
  selector:
    app: mqtt
  type: LoadBalancer
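Assuming DNS (or an /etc/hosts entry) resolves mqtt.domain.tld to the ingress gateway's external IP, an external client can then subscribe through the gateway:

# plain MQTT through gateway port 31400
mosquitto_sub -h mqtt.domain.tld -p 31400 -t hello -d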