Python - Unable to create a Dataproc cluster

I am trying to create a Dataproc cluster through Airflow and the Google Cloud UI, but the cluster creation always ends up failing. Below is the Airflow code I am using to create the cluster -

# STEP 1: Libraries needed
from datetime import timedelta, datetime
from airflow import models
from airflow.operators.bash_operator import BashOperator
from airflow.contrib.operators import dataproc_operator
from airflow.utils import trigger_rule
from poc.utils.transform import main
from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook
from airflow.operators.python_operator import BranchPythonOperator

import os

YESTERDAY = datetime.combine(
    datetime.today() - timedelta(1),
    datetime.min.time())
project_name = os.environ['GCP_PROJECT']

# Can pull in spark code from a gcs bucket
# SPARK_CODE = ('gs://us-central1-cl-composer-tes-fa29d311-bucket/spark_files/transformation.py')
dataproc_job_name = 'spark_job_dataproc'

default_dag_args = {
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'start_date': YESTERDAY,
    'retry_delay': timedelta(minutes=5),
    'project_id': project_name,
    'owner': 'DataProc',
}

with models.DAG(
        'dataproc-poc',
        description='Dag to run a simple dataproc job',
        schedule_interval=timedelta(days=1),
        default_args=default_dag_args) as dag:

    CLUSTER_NAME = 'dataproc-cluster'
    def ensure_cluster_exists(ds, **kwargs):
        cluster = DataProcHook().get_conn().projects().regions().clusters().get(
            projectId=project_name,
            region='us-east1',
            clusterName=CLUSTER_NAME
        ).execute(num_retries=5)
        print(cluster)
        if cluster is None or len(cluster) == 0 or 'clusterName' not in cluster:
            return 'create_dataproc'
        else:
            return 'run_spark'

    # start = BranchPythonOperator(
    #     task_id='start',
    #     provide_context=True,
    #     python_callable=ensure_cluster_exists,
    # )

    print_date = BashOperator(
        task_id='print_date',
        bash_command='date'
    )

    create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
        task_id='create_dataproc',
        cluster_name=CLUSTER_NAME,
        num_workers=2,
        use_if_exists='true',
        zone='us-east1-b',
        master_machine_type='n1-standard-1',
        worker_machine_type='n1-standard-1')

    # Run the PySpark job
    run_spark = dataproc_operator.DataProcPySparkOperator(
        task_id='run_spark',
        main=main,
        cluster_name=CLUSTER_NAME,
        job_name=dataproc_job_name
    )
    # dataproc_operator
    # Delete Cloud Dataproc cluster.
    # delete_dataproc = dataproc_operator.DataprocClusterDeleteOperator(
    # task_id='delete_dataproc',
    # cluster_name='dataproc-cluster-demo-{{ ds_nodash }}',
    # trigger_rule=trigger_rule.TriggerRule.ALL_DONE)
    # STEP 6: Set DAGs dependencies
    # Each task should run after have finished the task before.
    print_date >> create_dataproc >> run_spark
    # print_date >> start >> create_dataproc >> run_spark
    # start >> run_spark
I checked the cluster logs and found the following errors -

  • Unable to store master key 1
  • Unable to store master key 2
  • Initialization failed. Exiting 125 to prevent restart
  • Failed to start master: Timed out waiting for 2 datanodes and nodemanagers. Operation timed out: Only 0 out of 2 minimum required datanodes running. Operation timed out: Only 0 out of 2 minimum required node managers running
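
The same DataProcHook used in ensure_cluster_exists can also return the cluster's status and statusHistory fields, which normally carry a more detailed reason why provisioning timed out. A minimal standalone sketch of that check (the region and cluster name are copied from the DAG above; everything else is illustrative, not part of my setup):

import os

from airflow.contrib.hooks.gcp_dataproc_hook import DataProcHook

project_name = os.environ['GCP_PROJECT']

# Fetch the cluster resource and print its state transitions; the 'detail'
# field of an ERROR status usually names the startup step that failed.
cluster = DataProcHook().get_conn().projects().regions().clusters().get(
    projectId=project_name,
    region='us-east1',
    clusterName='dataproc-cluster'
).execute(num_retries=5)

print(cluster.get('status'))
for entry in cluster.get('statusHistory', []):
    print(entry)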

  • 你能补充更多细节吗?1.您显示的4个错误都来自主启动日志?2.错误消息“无法存储主密钥”的上下文是什么。3.你检查过两个工人的日志了吗?是否有任何迹象表明datanodes和NodeManager启动失败?4.您尝试使用的图像版本是什么?看起来您没有指定它,所以它应该是默认的1.3-debian10,但是您能确认吗?您能添加更多详细信息吗?1.您显示的4个错误都来自主启动日志?2.错误消息“无法存储主密钥”的上下文是什么。3.你检查过两个工人的日志了吗?是否有任何迹象表明datanodes和NodeManager启动失败?4.您尝试使用的图像版本是什么?看起来您没有指定它,所以它应该是默认的1.3-debian10,但是您能确认吗?
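
Regarding question 4: the DAG above does not set an image version or a region on the create task, so both fall back to their defaults. A hedged sketch of the same task with both pinned explicitly, assuming the contrib operator's region and image_version arguments (the values shown are examples, not a confirmed fix):

    # Illustrative only: the create task from the DAG with the region and
    # Dataproc image version set explicitly instead of left to defaults.
    create_dataproc = dataproc_operator.DataprocClusterCreateOperator(
        task_id='create_dataproc',
        cluster_name=CLUSTER_NAME,
        project_id=project_name,
        num_workers=2,
        region='us-east1',             # keep the region consistent with the zone below
        zone='us-east1-b',
        image_version='1.3-debian10',  # example only; pin whichever image is intended
        master_machine_type='n1-standard-1',
        worker_machine_type='n1-standard-1')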