Apache Spark: Why does this PySpark join fail?

Tags: apache-spark, pyspark, apache-spark-sql, pyspark-sql

I do not understand PySpark's behaviour in the following example.

I have several DataFrames and I join them:

print "users_data"
print users_data.show()
print "calc"
print calc.show()
print "users_cat_data"
print users_cat_data.show()

data1 = calc.join(users_data, ['category_pk', 'item_pk'], 'leftouter')
print "DATA1"
print data1.show()
data2 = data1.join(users_cat_data, ['category_pk'], 'leftouter')
print "DATA2"
print data2.show()
data3 = data2.join(category_data, ['category_pk'], 'leftouter')
print "DATA3"
print data3.show()
data4 = data3.join(clicks_data, ['category_pk', 'item_pk'], 'leftouter')
print "DATA4"
print data4.show()

data4.write.parquet(output + '/test.parquet', mode="overwrite")
I expect a leftouter join to return the left DataFrame together with the matches from the right DataFrame (if any).

Some sample output:

users_data
+--------------+----------+-------------------------+
|   category_pk|   item_pk|             unique_users|
+--------------+----------+-------------------------+
|           321|       460|                        1|
|           730|       740|                        2|
|           140|       720|                       10|


users_cat_data
+--------------+-----------------------+
|   category_pk|   unique_users_per_cat|
+--------------+-----------------------+
|           111|                    258|
|           100|                    260|
|           750|                      9|
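
Just to illustrate what I expect, here is a minimal, self-contained sketch of the leftouter semantics (made-up rows in the spirit of the sample output above; it assumes an existing SparkSession named spark):

left = spark.createDataFrame(
    [(321, 460, 1), (730, 740, 2), (140, 720, 10)],
    ['category_pk', 'item_pk', 'unique_users'])
right = spark.createDataFrame(
    [(321, 258), (999, 9)],
    ['category_pk', 'unique_users_per_cat'])

# Every row of the left DataFrame is kept; rows without a match
# get null in the right-side columns (here category_pk 730 and 140).
left.join(right, ['category_pk'], 'leftouter').show()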

However, I observe different behaviour. I use show() to print out the first 5 rows of every DataFrame used in the join operations, and all of them contain data. Yet I get the following error:

None
DATA1
Traceback (most recent call last):
  File "mytest.py", line 884, in <module>
    args.field1, args.field2, args.field3)
  File "mytest.py", line 802, in calc
    print data1.show()
  File "/mnt/yarn/usercache/hdfs/appcache/application_1512391881474_5650/container_1512391881474_5650_01_000001/pyspark.zip/pyspark/sql/dataframe.py", line 336, in show
  File "/mnt/yarn/usercache/hdfs/appcache/application_1512391881474_5650/container_1512391881474_5650_01_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/mnt/yarn/usercache/hdfs/appcache/application_1512391881474_5650/container_1512391881474_5650_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/mnt/yarn/usercache/hdfs/appcache/application_1512391881474_5650/container_1512391881474_5650_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o802.showString.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
        at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:123)
        at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:248)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
    at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2837)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2836)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2153)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2366)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:245)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 

Caused by: org.apache.spark.SparkException: Task not serializable
        at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2287)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:794)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:793)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:793)
I do not understand why I get a Task not serializable error at the line print data1.show(). The DataFrames used to create data1 are not empty. Also, show() is used successfully two lines above that line of code.

Sometimes it fails on the last line, data4.write.parquet(output + '/test.parquet', mode="overwrite"), and when I remove that line it runs fine. But now it fails even earlier, on the data1.show() line.

How can this problem be solved? Any help would be greatly appreciated.

I think the reason for the topmost org.apache.spark.SparkException: Exception thrown in awaitResult is that the BroadcastExchangeExec physical operator simply timed out (after the default 5-minute wait) while it was asked to broadcast a relation (a.k.a. table).

That is the low-level background on what the exception means.

Now, you may be asking yourself why this happens in the first place.

You could set spark.sql.broadcastTimeout to -1 to disable the timeout completely (which would make the thread wait indefinitely for the broadcast to finish), or increase it to 10 minutes or so.
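
As a sketch only (assuming a SparkSession named spark; the value is in seconds, so 600 is roughly 10 minutes and the default is 300):

# Raise the broadcast timeout at runtime; -1 would disable it entirely.
spark.conf.set("spark.sql.broadcastTimeout", 600)

The same property can also be passed at submission time, e.g. spark-submit --conf spark.sql.broadcastTimeout=600.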

You could also disable broadcasting tables altogether by setting spark.sql.autoBroadcastJoinThreshold to -1.
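
Again only as a sketch, under the same assumption of an existing spark session:

# Disable automatic broadcast joins; Spark then falls back to a
# shuffle-based join instead of broadcasting the smaller side.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)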

That, however, would only work around a more serious problem happening in your environment.

My guess is that your YARN cluster (judging by /mnt/yarn/usercache/hdfs/appcache/application_1512391881474_5650/container_1512391881474_5650_01_000001) is tight on resources, and the network may be sluggish too.

All in all, my guess is that some of the tables in your query get selected for broadcasting, and on a cluster that starved for resources the broadcast simply cannot complete before the timeout expires.