Kubernetes DNS and network policies not working with Calico
I have a minikube cluster running Calico and I am struggling to get network policies working. Here are my pods and services:
First pod (team-a):
Second pod (team-b):
When I exec into bash in team-a, I cannot curl orga-2.team-b:
dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
//Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b
Now I applied a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress: []
When I now curl google from team-a, it still works.
Here are my pods:
kube-system calico-etcd-hbpqc 1/1 Running 0 27m
kube-system calico-kube-controllers-6b86746955-5mk9v 1/1 Running 0 27m
kube-system calico-node-72rcl 2/2 Running 0 27m
kube-system coredns-fb8b8dccf-6j64x 1/1 Running 1 29m
kube-system coredns-fb8b8dccf-vjwl7 1/1 Running 1 29m
kube-system default-http-backend-6864bbb7db-5c25r 1/1 Running 0 29m
kube-system etcd-minikube 1/1 Running 0 28m
kube-system kube-addon-manager-minikube 1/1 Running 0 28m
kube-system kube-apiserver-minikube 1/1 Running 0 28m
kube-system kube-controller-manager-minikube 1/1 Running 0 28m
kube-system kube-proxy-p48xv 1/1 Running 0 29m
kube-system kube-scheduler-minikube 1/1 Running 0 28m
kube-system nginx-ingress-controller-586cdc477c-6rh6w 1/1 Running 0 29m
kube-system storage-provisioner 1/1 Running 0 29m
orga-1 team-a 1/1 Running 0 20m
orga-2 team-b 1/1 Running 0 7m20s
And my services:
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29m
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 27m
kube-system default-http-backend NodePort 10.105.84.105 <none> 80:30001/TCP 29m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 29m
orga-1 team-a ClusterIP 10.101.4.159 <none> 80/TCP 8m37s
orga-2 team-b ClusterIP 10.105.79.255 <none> 80/TCP 7m54s
The kube-dns endpoints are available, and so are the services.
Why does my network policy not work, and why does curl to the other pod not work? Can anyone help me?

Please run
curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local
verify entries in 'cat /etc/resolv.conf'
If you can reach your pods, then apply this.
Deny all ingress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
and allow traffic to Nginx:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
You can find more information on this below:
Hope this helps.

Hey, thanks for the answer, I will try this. Just one question: I think I read that podSelector: {} equals podSelector: matchLabels: {}. Is that right?

As far as I know, yes. An empty pod selector selects all pods in the namespace.
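Following up on the answer and the comment about podSelector: {} versus matchLabels: {}, here is an added Python example (not code from the question or the answer) that models the ingress and egress semantics of the three policies shown above, and shows why an Ingress-only policy leaves the curl to google untouched and which DNS name the team-b service actually has. It assumes team-a carries the label run=nginx and adds a hypothetical client pod in orga-1.

```python
# Added example, not code from the question or answer: a minimal evaluator for
# the NetworkPolicy semantics on this page, run over the question's and the
# answer's policies. Pods and labels are constructed; run=nginx on team-a is assumed.
def selects(selector, labels):
    # A podSelector matches when every matchLabels entry is on the pod.
    # `{}` and `{"matchLabels": {}}` both carry no entries, so both select all.
    wanted = (selector or {}).get("matchLabels") or {}
    return all(labels.get(k) == v for k, v in wanted.items())
def applicable(policies, pod, kind):
    # A policy applies only to pods in its own namespace that it selects, and
    # only for the policyTypes it declares (absent policyTypes means Ingress here).
    return [p for p in policies if p["metadata"]["namespace"] == pod["ns"]
            and kind in p["spec"].get("policyTypes", ["Ingress"])
            and selects(p["spec"].get("podSelector"), pod["labels"])]
def ingress_allowed(policies, pods, src, dst):
    # No selecting policy means open ingress; otherwise any matching rule admits.
    pols = applicable(policies, pods[dst], "Ingress")
    if not pols:
        return True
    for p in pols:
        for rule in p["spec"].get("ingress") or []:
            if "from" not in rule:  # a rule without `from` admits every source
                return True
            for peer in rule["from"]:
                # podSelector peer without namespaceSelector: policy's namespace only
                if ("podSelector" in peer and pods[src]["ns"] == p["metadata"]["namespace"]
                        and selects(peer["podSelector"], pods[src]["labels"])):
                    return True
    return False
def egress_restricted(policies, pods, src):
    # Only a policy that lists Egress in policyTypes limits outbound traffic.
    return bool(applicable(policies, pods[src], "Egress"))
def service_fqdn(service, namespace):
    # Cluster DNS is <service>.<namespace>.svc.cluster.local, not <ns>.<svc>.
    return f"{service}.{namespace}.svc.cluster.local"
PODS = {  # constructed to mirror the page's cluster, plus a hypothetical client
    "team-a": {"ns": "orga-1", "labels": {"run": "nginx"}},
    "team-b": {"ns": "orga-2", "labels": {"run": "nginx"}},
    "client": {"ns": "orga-1", "labels": {}},
}
POLICIES = [  # the question's deny-all-base-rule and the answer's two policies
    {"metadata": {"name": "deny-all-base-rule", "namespace": "orga-1"},
     "spec": {"podSelector": {}, "policyTypes": ["Ingress"], "ingress": []}},
    {"metadata": {"name": "default-deny-ingress", "namespace": "orga-1"},
     "spec": {"podSelector": {"matchLabels": {}}, "policyTypes": ["Ingress"]}},
    {"metadata": {"name": "access-nginx", "namespace": "orga-1"},
     "spec": {"podSelector": {"matchLabels": {"run": "nginx"}},
              "ingress": [{"from": [{"podSelector": {"matchLabels": {}}}]}]}},
]
```

With these policies, team-b in orga-2 is denied ingress to team-a (the allow rule is scoped to orga-1 pods), a pod inside orga-1 is admitted, and team-a's outbound traffic is unrestricted because no policy lists Egress.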