PySpark cluster-mode exception: Java gateway process exited before sending its port number
In Apache Airflow, I wrote a PythonOperator that uses PySpark to run a job in cluster mode. I initialize the SparkSession object as follows:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("test python operator") \
    .master("yarn") \
    .config("spark.submit.deployMode", "cluster") \
    .getOrCreate()
However, when I run my DAG, I get an exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/airflow/models/taskinstance.py", line 983, in _run_raw_task
result = task_copy.execute(context=context)
File "/usr/local/lib/python3.8/dist-packages/airflow/operators/python_operator.py", line 113, in execute
return_value = self.execute_callable()
File "/usr/local/lib/python3.8/dist-packages/airflow/operators/python_operator.py", line 118, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/catfish/dags/dags_dag_test_python_operator.py", line 39, in print_count
spark = SparkSession \
File "/usr/local/lib/python3.8/dist-packages/pyspark/sql/session.py", line 186, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "/usr/local/lib/python3.8/dist-packages/pyspark/context.py", line 371, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "/usr/local/lib/python3.8/dist-packages/pyspark/context.py", line 128, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "/usr/local/lib/python3.8/dist-packages/pyspark/context.py", line 320, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/usr/local/lib/python3.8/dist-packages/pyspark/java_gateway.py", line 105, in launch_gateway
raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
I also set PYSPARK_SUBMIT_ARGS, but it did not work for me.

You need to install Spark on the Ubuntu container:
RUN apt-get -y install default-jdk scala git curl wget
RUN wget --no-verbose https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
RUN tar xvf spark-2.4.6-bin-hadoop2.7.tgz
RUN mv spark-2.4.6-bin-hadoop2.7 /opt/spark
ENV SPARK_HOME=/opt/spark
ENV PATH="$SPARK_HOME/bin:$PATH"
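The "Java gateway process exited before sending its port number" error generally means the JVM that PySpark tries to spawn died at startup, most often because Java or SPARK_HOME is missing from the worker's environment. A minimal sanity check you could run at the top of the PythonOperator callable (the function name and messages below are illustrative, not part of any Airflow or PySpark API) might look like:

```python
import os
import shutil

def check_spark_env():
    """Return a list of likely reasons the Py4J Java gateway would fail to start."""
    problems = []
    if "SPARK_HOME" not in os.environ:
        problems.append("SPARK_HOME is not set")
    if shutil.which("java") is None:
        problems.append("no 'java' executable on PATH (install a JDK)")
    return problems

# Calling this before building the SparkSession turns the opaque
# gateway exception into a readable error message.
```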
Unfortunately, you cannot run Spark on YARN in cluster mode from Python this way. I suggest you use a different operator; I used the SparkSubmitOperator and my problem was solved! That's great! PySpark has strange issues with this! As you mentioned, there are some problems running Spark in yarn-cluster mode from a PythonOperator.
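The SparkSubmitOperator side-steps the gateway problem by shelling out to spark-submit instead of starting a JVM inside the Airflow worker process. As a sketch of what that effectively amounts to (the helper below is illustrative, not Airflow's actual implementation), a cluster-mode submission reduces to building a spark-submit command line:

```python
def build_spark_submit_cmd(application, master="yarn", deploy_mode="cluster", conf=None):
    """Assemble a spark-submit argument list, roughly what
    SparkSubmitOperator executes under the hood."""
    cmd = ["spark-submit", "--master", master, "--deploy-mode", deploy_mode]
    for key, value in (conf or {}).items():
        cmd += ["--conf", f"{key}={value}"]
    cmd.append(application)
    return cmd

print(build_spark_submit_cmd("my_job.py", conf={"spark.executor.memory": "2g"}))
# → ['spark-submit', '--master', 'yarn', '--deploy-mode', 'cluster',
#    '--conf', 'spark.executor.memory=2g', 'my_job.py']
```

Because the driver then runs inside the YARN cluster rather than inside the Airflow worker, no Java gateway has to be launched in the worker's Python process at all.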