Apache Spark: writing a Spark DataFrame to Google Pub/Sub

Tags: apache-spark, apache-spark-sql, google-cloud-pubsub, google-cloud-dataproc

I am trying to write Parquet files to Pub/Sub via Spark on a Dataproc cluster.

I used the pseudo-code below:

import com.google.cloud.pubsub.v1.Publisher
import com.google.protobuf.ByteString
import com.google.pubsub.v1.PubsubMessage
import io.circe.generic.auto._ // assumes circe-generic for the MyCaseClass encoder
import io.circe.syntax._

dataFrame
  .as[MyCaseClass]
  .foreachPartition(partition => {
    try {
      // one publisher per partition, created on the executor
      val topicName = "projects/myproject/topics/mytopic"
      val publisher = Publisher.newBuilder(topicName).build()
      partition.foreach(users => {
        try {
          val jsonUser = users.asJson.noSpaces // serialize to JSON using the circe Scala lib
          val data = ByteString.copyFromUtf8(jsonUser)
          val pubsubMessage = PubsubMessage.newBuilder().setData(data).build()
          publisher.publish(pubsubMessage) // returns an ApiFuture; not awaited here
        }
        catch {
          case e: Exception => System.out.println("Exception in processing the event: " + e.getMessage)
        }
      })
      publisher.shutdown()
    }
    catch {
      case e: Exception => System.out.println("Exception in processing the partition: " + e.getMessage)
    }
  })
Whenever I submit this job on the cluster, I get a Spark prelaunch error with exit code 134.

I have already shaded guava and protobuf in my pom. The example works if I run it through a local test case, but when I submit it on Dataproc I get the error. I have not found any relevant information about writing a DataFrame to Pub/Sub. Any suggestions?
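For context on the shading mentioned above: it is typically done with the maven-shade-plugin by relocating the conflicting Google packages. A minimal sketch of such a relocation is shown below; the repackaged. prefix is only an illustrative choice, not taken from the original pom.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- relocate guava and protobuf so they do not clash with the versions bundled on Dataproc -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>repackaged.com.google.common</shadedPattern>
          </relocation>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>repackaged.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>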

Update:
System details: single-node cluster with an n1-standard-32 (32 cores, 120 GB memory)
Executor cores: dynamic allocation enabled

Attaching the stack trace:

20/12/22 17:51:43 WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Requesting driver to remove executor 1 for reason Container from a bad node: container_1608332157194_0026_01_000002 on host: dataproc-cluster.internal. Exit status: 134. Diagnostics: [2020-12-22 17:51:43.556]Exception from container-launch.
Container id: container_1608332157194_0026_01_000002
Exit code: 134

[2020-12-22 17:51:43.557]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 19017 Aborted                 /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java -server -Xmx5586m -Djava.io.tmpdir=/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1608332157194_0026/container_1608332157194_0026_01_000002/tmp '-Dspark.driver.port=43691' '-Dspark.rpc.message.maxSize=512' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/userlogs/application_1608332157194_0026/container_1608332157194_0026_01_000002 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@dataproc-cluster.internal:43691 --executor-id 1 --hostname dataproc-cluster.internal --cores 2 --app-id application_1608332157194_0026 --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1608332157194_0026/container_1608332157194_0026_01_000002/__app__.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1608332157194_0026/container_1608332157194_0026_01_000002/mySparkJar-1.0.0-0-SNAPSHOT.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1608332157194_0026/container_1608332157194_0026_01_000002/org.apache.spark_spark-avro_2.11-2.4.2.jar --user-class-path file:/hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1608332157194_0026/container_1608332157194_0026_01_000002/org.spark-project.spark_unused-1.0.0.jar > /var/log/hadoop-yarn/userlogs/application_1608332157194_0026/container_1608332157194_0026_01_000002/stdout 2> /var/log/hadoop-yarn/userlogs/application_1608332157194_0026/container_1608332157194_0026_01_000002/stderr
Last 4096 bytes of stderr :
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/12/22 17:51:36 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 11320100 records.
20/12/22 17:51:36 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
20/12/22 17:51:38 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy]
20/12/22 17:51:38 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 2301 ms. row count = 11320100
20/12/22 17:51:39 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 11320100 records.
20/12/22 17:51:39 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
20/12/22 17:51:40 INFO org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1411 ms. row count = 11320100

If the job fails early, it may be because there is not enough memory for the Spark driver to start.

To resolve this, you need to configure the Dataproc cluster with a master node that has more RAM, or allocate more memory/heap to the Spark driver and/or the Spark executors.

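As an illustration only (the cluster name and main class below are placeholders; the jar name is taken from the container launch command in the log), the driver and executor memory can be raised through Spark properties when submitting the job to Dataproc:

gcloud dataproc jobs submit spark \
  --cluster=my-cluster \
  --class=com.example.MySparkJob \
  --jars=mySparkJar-1.0.0-0-SNAPSHOT.jar \
  --properties=spark.driver.memory=16g,spark.executor.memory=16g,spark.executor.memoryOverhead=4g

The exact values depend on the workload; the point is that the executor heap shown in the log (-Xmx5586m) is determined by these Spark properties, not by the 120 GB of RAM on the node.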

Can you attach a stack trace for one of the failures?

I have added the stack trace to the original thread. Since this is a single-node cluster with 120 GB of RAM, I don't see how adding more RAM would solve this. As we can see there is an OOM, and it seems Spark is trying to read the entire directory into memory before writing to Pub/Sub.

The log shows that the YARN container for the Spark executor started with a -Xmx5586m heap, so you may need to allocate more RAM/heap to the Spark executors to resolve this.