
Pyspark: java.lang.OutOfMemoryError when running as a standalone application, but no error when running in Docker


Getting Exception in thread "dispatcher-event-loop-0" java.lang.OutOfMemoryError: Java heap space when running the pyspark application in standalone mode, but everything works fine when running it in a Docker container.

I have a simple recommendation application that uses Pyspark for faster processing. The dataset has 1M records.

When I run the application locally I get the Java OutOfMemoryError, but when I build the image locally and run the container, everything runs fine. Everything else is identical between the standalone application and the Docker container... details below.

This is part of the Dockerfile:

    RUN apt-get update && apt-get install -qq -y \
        build-essential libpq-dev --no-install-recommends && \
        apt-get install -y software-properties-common

    RUN apt-get install -y openjdk-8-jre && \
        apt-get install -y openjdk-8-jdk
    RUN echo "JAVA_HOME=$(which java)" | tee -a /etc/environment
This is the pyspark code:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext('local')
    sqlContext = SQLContext(sc)

    sc.setCheckpointDir('temp/')

    # user_posr_rate_df holds the ~1M rating records
    df = sqlContext.createDataFrame(user_posr_rate_df)
    sc.parallelize(df.collect())
I expected the results when running as a standalone application to match those when running in the Docker container... Here are the respective results.

Results when running in Docker:

 To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
 19/08/16 11:54:26 WARN TaskSetManager: Stage 0 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:35 WARN TaskSetManager: Stage 1 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:37 WARN TaskSetManager: Stage 3 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:40 WARN TaskSetManager: Stage 5 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:41 WARN TaskSetManager: Stage 6 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:42 WARN TaskSetManager: Stage 7 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
 19/08/16 11:54:43 WARN TaskSetManager: Stage 8 contains a task of very large size (12230 KB). The maximum recommended task size is 100 KB.
Results when running locally as a standalone application:

 To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
 19/08/16 17:50:20 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
 19/08/16 16:51:27 WARN TaskSetManager: Stage 0 contains a task of very large size (158329 KB). The maximum recommended task size is 100 KB.
 Exception in thread "dispatcher-event-loop-0" 
 java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:486)
at org.apache.spark.scheduler.TaskSetManager$$anonfun$resourceOffer$1.apply(TaskSetManager.scala:467)
at scala.Option.map(Option.scala:146)
at org.apache.spark.scheduler.TaskSetManager.resourceOffer(TaskSetManager.scala:467)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet$1.apply$mcVI$sp(TaskSchedulerImpl.scala:326)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at org.apache.spark.scheduler.TaskSchedulerImpl.org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet(TaskSchedulerImpl.scala:321)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4$$anonfun$apply$12.apply(TaskSchedulerImpl.scala:423)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4$$anonfun$apply$12.apply(TaskSchedulerImpl.scala:420)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4.apply(TaskSchedulerImpl.scala:420)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$4.apply(TaskSchedulerImpl.scala:407)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.TaskSchedulerImpl.resourceOffers(TaskSchedulerImpl.scala:407)
at org.apache.spark.scheduler.local.LocalEndpoint.reviveOffers(LocalSchedulerBackend.scala:86)
at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receive$1.applyOrElse(LocalSchedulerBackend.scala:64)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)

Adding configuration parameters to the SparkContext solved my problem:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAll([('spark.executor.memory', '10g'),
                               ('spark.executor.cores', '3'),
                               ('spark.cores.max', '3'),
                               ('spark.driver.memory', '8g')])

    sc = SparkContext(conf=conf)

Basically, pass the conf to the SparkContext.
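A side note (editor's sketch, not from the original answer): in local mode the driver and executors share a single JVM, so spark.driver.memory is the setting that actually enlarges the heap, and it only takes effect if set before that JVM starts. An equivalent way to apply the same settings is at launch time; the script name here is hypothetical:

```shell
# Launch-time equivalent of the SparkConf above; in local mode
# --driver-memory sizes the one JVM that runs driver and executors.
spark-submit \
  --master "local[3]" \
  --driver-memory 8g \
  --conf spark.executor.memory=10g \
  recommender_app.py
```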

Tried increasing numSlices to 1000, sc.parallelize(df.collect(), numSlices=1000), but it still made no difference.