
K8s is not killing my webserver pod

Tags: linux, docker, kubernetes, airflow

I have Airflow running in a k8s container.

The webserver hit a DNS error (it could not resolve my db's URL to an IP address) and the webserver workers were killed.

What bothers me is that k8s is not trying to kill the pod and start a new one in its place.

Pod log output:

OperationalError: (psycopg2.OperationalError) could not translate host name "my.dbs.url" to address: Temporary failure in name resolution
[2017-12-01 06:06:05 +0000] [2202] [INFO] Worker exiting (pid: 2202)
[2017-12-01 06:06:05 +0000] [2186] [INFO] Worker exiting (pid: 2186)
[2017-12-01 06:06:05 +0000] [2190] [INFO] Worker exiting (pid: 2190)
[2017-12-01 06:06:05 +0000] [2194] [INFO] Worker exiting (pid: 2194)
[2017-12-01 06:06:05 +0000] [2198] [INFO] Worker exiting (pid: 2198)
[2017-12-01 06:06:06 +0000] [13] [INFO] Shutting down: Master
[2017-12-01 06:06:06 +0000] [13] [INFO] Reason: Worker failed to boot.

The k8s status is RUNNING, but when I open an exec shell in the k8s UI, I get the following output (gunicorn seems to be aware that it is dead):

root@webserver-373771664-3h4v9:/# ps -Al
F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
4 S     0     1     0  0  80   0 - 107153 -     ?        00:06:42 /usr/local/bin/
4 Z     0    13     1  0  80   0 -     0 -      ?        00:01:24 gunicorn: maste <defunct>
4 S     0  2206     0  0  80   0 -  4987 -      ?        00:00:00 bash
0 R     0  2224  2206  0  80   0 -  7486 -      ?        00:00:00 ps

And here are my Deployment, HorizontalPodAutoscaler, and Service definitions:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
  namespace: airflow
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: airflow-webserver
    spec:
      volumes:
      - name: webserver-dags
        emptyDir: {}
      containers:
      - name: airflow-webserver
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        ports:
        - containerPort: 80
          protocol: TCP
        env:
        - name: AIRFLOW_HOME
          value: /var/lib/airflow
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: db1
              key: sqlalchemy_conn
        volumeMounts:
        - mountPath: /var/lib/airflow/dags/
          name: webserver-dags
        command: ["airflow"]
        args: ["webserver"]
      - name: docker-s3-to-backup
        image: my.custom.image:latest
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 50m
          limits:
            cpu: 500m
        env:
        - name: ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: access_key_id
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: aws
              key: secret_access_key
        - name: S3_PATH
          value: s3://my-s3-bucket/dags/
        - name: DATA_PATH
          value: /dags/
        - name: CRON_SCHEDULE
          value: "*/5 * * * *"
        volumeMounts:
        - mountPath: /dags/
          name: webserver-dags
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webserver
  namespace: airflow
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: webserver
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 75
---
apiVersion: v1
kind: Service
metadata:
  name: webserver
  namespace: airflow
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: airflow-webserver

You need to define readiness and liveness probes so that Kubernetes can detect the state of the pod, as described in the Kubernetes documentation on liveness and readiness probes. For example, on the webserver container (note that the probes target port 8080, the Airflow webserver's default, while the Deployment above declares containerPort: 80; the probed port must be one the webserver actually listens on):

    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

Comment: This changes everything. Very useful link. Thanks.

When the process running in the container dies, the container exits and the kubelet restarts it on the same node / within the same pod. What happened here is by no means Kubernetes' fault, but a problem with your container: the main process you start in the container (be it from CMD or through ENTRYPOINT) needs to die for the above to happen, and the one you started did not (it went zombie mode but was never reaped, which is a manifestation of yet another problem). Defining liveness probes would help in this case (as @sfgroups mentioned), since the pod would be killed if the probe failed, but that is treating the symptom and not the root cause (not that you shouldn't define probes as a matter of good practice).
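
If you want to go after that root cause rather than the symptom, one option (my own sketch, not from the original thread) is to make gunicorn itself the container's main process, so that the container exits when the gunicorn master dies and the kubelet restarts it. The gunicorn arguments below, including the airflow.www.app:cached_app() module path, are assumptions based on what the Airflow 1.x "airflow webserver" wrapper passes to gunicorn; adjust them for your version:

      # Hypothetical alternative for the webserver container: start gunicorn
      # directly instead of the "airflow webserver" wrapper, so the death of
      # the gunicorn master ends the container's main process and the kubelet
      # restarts the container.
      containers:
      - name: airflow-webserver
        image: my.custom.image:latest
        command: ["gunicorn"]
        args: ["-w", "4", "-b", "0.0.0.0:8080", "airflow.www.app:cached_app()"]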