
Python Airflow local_task_job.py - Task exited with return code -9


I am trying to run Apache Airflow in ECS, using v1.10.5 of my fork. I set the executor, Postgres, and Redis configuration on the webserver via environment variables:

AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://airflow_user:airflow_password@postgres:5432/airflow_db"
AIRFLOW__CELERY__RESULT_BACKEND="db+postgresql://airflow_user:airflow_password@postgres:5432/airflow_db"
AIRFLOW__CELERY__BROKER_URL="redis://redis_queue:6379/1"
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
AIRFLOW__CORE__LOAD_EXAMPLES=False
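
For context, Airflow builds its config from any environment variable named AIRFLOW__&lt;SECTION&gt;__&lt;KEY&gt;, so a quick way to confirm the container actually picked these values up is to read them back through Airflow's config accessor. A minimal sketch, assuming Airflow 1.10.x:

from airflow.configuration import conf

# Each AIRFLOW__<SECTION>__<KEY> env var above overrides the matching
# airflow.cfg entry; these calls should echo the exported values.
print(conf.get("core", "executor"))          # CeleryExecutor
print(conf.get("celery", "broker_url"))      # redis://redis_queue:6379/1
print(conf.get("celery", "result_backend"))  # db+postgresql://...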
My tasks fail randomly with the following error:

[2020-01-12 20:06:28,308] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO db.IntegerSplitter: Split size: 134574; Num splits: 5 from: 2 to: 672873
[2020-01-12 20:06:28,449] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO mapreduce.JobSubmitter: number of splits:5
[2020-01-12 20:06:28,459] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
[2020-01-12 20:06:28,964] {ssh_utils.py:130} WARNING - 20/01/13 01:36:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1578859373494_0012
[2020-01-12 20:06:29,337] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO impl.YarnClientImpl: Submitted application application_1578859373494_0012
[2020-01-12 20:06:29,371] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO mapreduce.Job: The url to track the job: http://ip-XX-XX-XX-XX.ap-southeast-1.compute.internal:20888/proxy/application_1578859373494_0012/
[2020-01-12 20:06:29,371] {ssh_utils.py:130} WARNING - 20/01/13 01:36:29 INFO mapreduce.Job: Running job: job_1578859373494_0012
[2020-01-12 20:06:47,489] {ssh_utils.py:130} WARNING - 20/01/13 01:36:47 INFO mapreduce.Job: Job job_1578859373494_0012 running in uber mode : false
[2020-01-12 20:06:47,490] {ssh_utils.py:130} WARNING - 20/01/13 01:36:47 INFO mapreduce.Job:  map 0% reduce 0%
[2020-01-12 20:06:54,777] {logging_mixin.py:95} INFO - [2020-01-12 20:06:54,777] {local_task_job.py:105} INFO - Task exited with return code -9
But when I check the application in the EMR UI, it shows that it ran successfully.

My ECS configuration is as follows:

Airflow worker

Hard/soft memory limits -> 2560/1024

Number of workers -> 3

Airflow webserver

Hard/soft memory limits -> 3072/1024

Airflow scheduler

Hard/soft memory limits -> 2048/1024

Run duration -> 86400
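
In ECS task-definition terms, the hard limit is the container's memory field and the soft limit is memoryReservation; the container is killed if it exceeds the hard limit, while the soft limit only affects placement. A hypothetical boto3 sketch of the worker definition, just to show where the numbers above live (the family, image, and command are made up):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="airflow-worker",                # hypothetical name
    containerDefinitions=[{
        "name": "worker",
        "image": "my-airflow-fork:1.10.5",  # hypothetical image
        "command": ["worker"],
        "memory": 2560,             # hard limit (MiB): exceeding this kills the container
        "memoryReservation": 1024,  # soft limit (MiB): used for scheduling/placement
    }],
)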


What could be causing this error?

This error was caused by the worker containers hitting their hard memory limit, which made ECS kill tasks at random: a return code of -9 means the task process received SIGKILL (signal 9), which is how an out-of-memory kill looks from Airflow's side. I resolved it by increasing the memory limits, sizing them against the memory utilization graphs of an older Airflow deployment that ran on the LocalExecutor.
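
To see why -9 points at an external kill rather than a failure inside the task: Python reports a child process killed by signal N as return code -N, and local_task_job.py is simply logging that value. A standalone sketch (not Airflow code):

import signal
import subprocess

# Start a long-running child, then kill it the way the OOM killer would.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGKILL)
proc.wait()

print(proc.returncode)                   # -9
print(signal.Signals(-proc.returncode))  # Signals.SIGKILL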

This is the old memory utilization graph relative to the soft memory limit:

After changing the worker memory configuration to 2560/5120, this is the memory utilization graph relative to the soft memory limit:
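
For anyone wanting to reproduce the comparison, the graphs come from ECS's per-service CloudWatch metrics; a sketch of pulling them with boto3 follows (the cluster and service names are placeholders). Note that ECS reports MemoryUtilization as a percentage of the reserved memory, i.e. the soft limit when one is set, which is why it can exceed 100%:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ECS",
    MetricName="MemoryUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "airflow-cluster"},  # placeholder
        {"Name": "ServiceName", "Value": "airflow-worker"},   # placeholder
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])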