Why does a Contour setup on Kubernetes (GKE) result in 2 functioning external IPs?

Tags: kubernetes, google-cloud-platform, google-kubernetes-engine, kubernetes-ingress

I've been experimenting with Contour as an alternative ingress controller on a test GKE Kubernetes cluster.

After some tinkering, I arrived at a working setup that serves test HTTP responses.

First, I created a "helloworld" deployment that serves HTTP responses, exposed via a NodePort Service and an Ingress:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
    name: helloworld
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: "helloworld-http"
          image: "nginxdemos/hello:plain-text"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - helloworld
              topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: helloworld
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
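Before involving any ingress controller, the backend itself can be sanity-checked with a port-forward. A minimal sketch, assuming the manifests above are saved as helloworld.yaml (the file name is an assumption; resource names match the manifests):

```shell
# Apply the manifests and wait for the rollout to complete.
kubectl apply -f helloworld.yaml
kubectl rollout status deployment/helloworld

# Port-forward directly to the Service and hit it locally.
# nginxdemos/hello:plain-text replies with the serving pod's
# name and the request details in plain text.
kubectl port-forward service/helloworld-svc 8080:80 &
sleep 2
curl -s http://localhost:8080/
kill %1
```

This confirms the Service selector and targetPort are correct independently of anything the ingress layer does.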
Then I created a deployment for Contour, copied straight from their documentation:

apiVersion: v1
kind: Namespace
metadata:
  name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contour
  namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: contour
  name: contour
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: contour
  replicas: 2
  template:
    metadata:
      labels:
        app: contour
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9001"
        prometheus.io/path: "/stats"
        prometheus.io/format: "prometheus"
    spec:
      containers:
      - image: docker.io/envoyproxy/envoy-alpine:v1.6.0
        name: envoy
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        command: ["envoy"]
        args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: contour
        command: ["contour"]
        args: ["serve", "--incluster"]
      initContainers:
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: envoy-initconfig
        command: ["contour"]
        args: ["bootstrap", "/config/contour.yaml"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: contour
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
 ports:
 - port: 80
   name: http
   protocol: TCP
   targetPort: 8080
 - port: 443
   name: https
   protocol: TCP
   targetPort: 8443
 selector:
   app: contour
 type: LoadBalancer
---
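Once these manifests are applied, the Contour Service should get an external IP of its own. A couple of illustrative checks (for a type=LoadBalancer Service, GKE provisions a TCP Network Load Balancer, which can take a minute or two):

```shell
# Watch the Service until EXTERNAL-IP changes from <pending>
# to an assigned address.
kubectl -n heptio-contour get service contour -w

# Confirm both containers (envoy + contour) are up in each pod,
# i.e. READY shows 2/2.
kubectl -n heptio-contour get pods -l app=contour
```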
The default and heptio-contour namespaces now look like this:

$ kubectl get pods,svc,ingress -n default
NAME                              READY     STATUS    RESTARTS   AGE
pod/helloworld-7ddc8c6655-6vgdw   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-92j7x   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-mlvmc   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-w5g7f   1/1       Running   0          6h

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/helloworld-svc   NodePort    10.59.240.105   <none>        80:31481/TCP   34m
service/kubernetes       ClusterIP   10.59.240.1     <none>        443/TCP        7h

NAME                                    HOSTS     ADDRESS   PORTS     AGE
ingress.extensions/helloworld-ingress   *         y.y.y.y   80        34m

$ kubectl get pods,svc,ingress -n heptio-contour
NAME                          READY     STATUS    RESTARTS   AGE
pod/contour-9d758b697-kwk85   2/2       Running   0          34m
pod/contour-9d758b697-mbh47   2/2       Running   0          34m

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/contour   LoadBalancer   10.59.250.54   x.x.x.x       80:30882/TCP,443:32746/TCP   34m

Is it intentional that I end up with two public IPs? Which one should I hand to clients? And can I choose between a TCP and an HTTP load balancer based on my own preference?

You probably have a GLBC Ingress configured as well (GKE's default ingress controller).

Could you try the following Ingress definition?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "contour"
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
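A way to verify which controller actually picks up the Ingress, sketched below (the file name helloworld-ingress.yaml is an assumption):

```shell
# Apply the annotated Ingress. With the class annotation set to
# "contour", the GCE controller should ignore it, so no new HTTP(S)
# load balancer address should be assigned to the Ingress.
kubectl apply -f helloworld-ingress.yaml
kubectl describe ingress helloworld-ingress   # inspect Annotations and Address

# Traffic through the Contour/Envoy LoadBalancer Service
# (the x.x.x.x address) should reach the backend.
curl -s http://x.x.x.x/
```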

If you want to make sure your traffic goes through Contour, you should use the x.x.x.x IP.

That was it. In hindsight, I should have noticed the documented behavior of the annotation: "Ingress class that should interpret and serve the Ingress. If not set, all Ingress controllers serve the Ingress. If specified as kubernetes.io/ingress.class: contour, Contour serves the Ingress." Surprisingly, Contour's sample code does not include that annotation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "contour"
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
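One caveat worth noting: the original class-less Ingress had already caused the GCE controller to build an HTTP(S) load balancer (the y.y.y.y address), and re-annotating the Ingress does not necessarily tear that down. Leftover GCE resources can be listed (and deleted manually if unwanted), roughly along these lines:

```shell
# List forwarding rules and HTTP proxies in the project; any entries
# created for the old GLBC Ingress will show up here even after the
# Ingress is re-annotated for Contour.
gcloud compute forwarding-rules list
gcloud compute target-http-proxies list
```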