Docker Kubernetes - container image already present on machine

So I have two similar deployments on k8s that both pull the same image from GitLab. Apparently this caused my second deployment to go into a CrashLoopBackOff error, and I can't seem to connect to the port to check the /healthz of my pod. Logging the pod shows that it received an interrupt signal, while describing the pod shows the following messages:

 FirstSeen  LastSeen    Count   From            SubObjectPath                   Type        Reason          Message
  --------- --------    -----   ----            -------------                   --------    ------          -------
  29m       29m     1   default-scheduler                           Normal      Scheduled       Successfully assigned java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr to 172.18.14.110
  29m       29m     1   kubelet, 172.18.14.110                          Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-m4m55" 
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Pulled          Container image "..../consul-image:0.0.10" already present on machine
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Created         Created container
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Started         Started container
  28m       28m     1   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Killing         Killing container with id docker://java-kafka-rest-development:Container failed liveness probe.. Container will be killed and recreated.
  29m       28m     2   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Created         Created container
  29m       28m     2   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Started         Started container
  29m       27m     10  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     Unhealthy       Readiness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
  28m       24m     13  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     Unhealthy       Liveness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
  29m       19m     8   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Pulled          Container image "r..../java-kafka-rest:0.3.2-dev" already present on machine
  24m       4m      73  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     BackOff         Back-off restarting failed container
I have tried redeploying the deployments under different images, and that seems to work fine. However, I don't think that is a real solution, since the whole point is that the images are identical. What should I do?

Here is what my deployment file looks like:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "java-kafka-rest-kafka-data-2-development"
  labels:
    repository: "java-kafka-rest"
    project: "java-kafka-rest"
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
spec:
  replicas: 1
  selector:
    matchLabels:
      repository: "java-kafka-rest"
      project: "java-kafka-rest"
      service: "java-kafka-rest-kafka-data-2"
      env: "development"
  template:
    metadata:
      labels:
        repository: "java-kafka-rest"
        project: "java-kafka-rest"
        service: "java-kafka-rest-kafka-data-2"
        env: "development"
        release: "0.3.2-dev"
    spec:
      imagePullSecrets:
      - name: ...
      containers:
      - name: java-kafka-rest-development
        image: registry...../java-kafka-rest:0.3.2-dev
        env:
        - name: DEPLOYMENT_COMMIT_HASH
          value: "0.3.2-dev"
        - name: DEPLOYMENT_PORT
          value: "7533"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 7533
          timeoutSeconds: 1
        ports:
        - containerPort: 7533
        resources:
          requests:
            cpu: 0.5
            memory: 6Gi
          limits:
            cpu: 3
            memory: 10Gi
        command:
          - /envconsul
          - -consul=127.0.0.1:8500
          - -sanitize
          - -upcase
          - -prefix=java-kafka-rest/
          - -prefix=java-kafka-rest/kafka-data-2
          - java
          - -jar
          - /build/libs/java-kafka-rest-0.3.2-dev.jar
        securityContext:
          readOnlyRootFilesystem: true
      - name: consul
        image: registry.../consul-image:0.0.10
        env:
        - name: SERVICE_NAME
          value: java-kafka-rest-kafka-data-2
        - name: SERVICE_ENVIRONMENT
          value: development
        - name: SERVICE_PORT
          value: "7533"
        - name: CONSUL1
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node1
        - name: CONSUL2
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node2
        - name: CONSUL3
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node3
        - name: CONSUL_ENCRYPT
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: encrypt
        ports:
        - containerPort: 8300
        - containerPort: 8301
        - containerPort: 8302
        - containerPort: 8400
        - containerPort: 8500
        - containerPort: 8600
        command: [ entrypoint, agent, -config-dir=/config, -join=$(CONSUL1), -join=$(CONSUL2), -join=$(CONSUL3), -encrypt=$(CONSUL_ENCRYPT) ]
      terminationGracePeriodSeconds: 30
      nodeSelector:
        env: ...

For anyone who runs into this problem: I found out what was wrong and how to fix it. Apparently the issue was in my service.yml, where my targetPort pointed to a different port than the one I had opened in my docker image. Make sure the port opened in the docker image is connected to the right targetPort.


Hope this helps.
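
The actual service.yml is not shown in the question, so purely as an illustration of the fix described above, a Service whose targetPort matches the port the container actually listens on (7533 here) might look like the sketch below; the Service name, the 80 port, and the selector labels are assumptions taken from the deployment manifest, not from the question:

apiVersion: v1
kind: Service
metadata:
  # assumed name, mirroring the deployment above
  name: "java-kafka-rest-kafka-data-2-development"
spec:
  selector:
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
  ports:
  - port: 80           # port exposed by the Service (illustrative)
    targetPort: 7533   # must match the containerPort / DEPLOYMENT_PORT the app listens on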

It could be that your readinessProbe is killing your container. Is this an image for a Kafka broker, or...?

Yes, that is what we assumed as well. It is indeed a Kafka image, used to produce Kafka messages. However, I'm confused about what is causing the readinessProbe to trigger this way; as far as I know, an image pulled from GitLab should be placed on the k8s pod independently of the images pulled by other pods.

Yes, but the readinessProbe is defined in the k8s deployment file, so you may need to increase its values (if Kafka takes a long time to start up) or even remove the probe to see whether that is really what is killing your pods. As far as I know, Kafka doesn't even expose any health check endpoint. Did you implement a custom health check, or...?

@UroshT. I did implement a custom health check, and I've added the deployment file above for clarity. However, even if the readinessProbe really is causing this issue, why would it affect my deployments when they pull from the same image as opposed to individual images?
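
Following the suggestion in the comments, if the custom /healthz endpoint simply needs more time to come up, the probe section of the deployment could be relaxed as in this sketch; the delay and timeout values are illustrative assumptions, not taken from the question:

        livenessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 120   # allow extra startup time before liveness checks begin (assumed value)
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 30    # the original readinessProbe had no initial delay
          timeoutSeconds: 5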