
Spark error in a Python program: "java.lang.OutOfMemoryError: Java heap space"


I run my Python k-means program on Spark with the following command:

./bin/spark-submit --master spark://master_ip:7077 my_kmeans.py
The main part of the Python k-means program looks like this:

# (imports assumed from context; `spark` is taken to be an existing SparkSession)
from pyspark.mllib.clustering import KMeans  # KMeans.train is the RDD-based MLlib API
import joblib as jl                          # presumably joblib, given the .jl.z file

sc = spark.sparkContext
# data: load the pre-computed matrix and distribute it as an RDD
X = jl.load('X.jl.z')
data_x = sc.parallelize(X)
# kmeans: 10000 clusters, 5 iterations
model = KMeans.train(data_x, 10000, maxIterations=5)
The file 'X.jl.z' is about 100 MB in size.

But I get the following Spark error:

  File "/home/xxx/tmp/spark-2.0.2-bin-hadoop2.7/my_kmeans.py", line 24, in <module>
    data_x = sc.parallelize(X)
py4j.protocol.Py4JJavaError: An error occurred while calling    z:org.apache.spark.api.python.PythonRDD.readRDDFromFile.    
  : java.lang.OutOfMemoryError: Java heap space

I know how to change the JVM heap size for a Java program, but how do I increase the heap size for my Python program?

Try increasing the number of partitions:

data_x = sc.parallelize(X,n)
# n = 2-4 partitions for each CPU in your cluster
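As a rough sketch of that suggestion (the multiplier of 3 and the use of sc.defaultParallelism are only illustrative assumptions, not part of the original answer):

# aim for roughly 2-4 partitions per core; the factor 3 is an arbitrary example
n = sc.defaultParallelism * 3
data_x = sc.parallelize(X, n)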
Or:

The maximum heap size can be set with spark.driver.memory in cluster mode, and through the --driver-memory command-line option in client mode.
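For example (the 4g value is only an illustrative assumption; pick a size that fits your machine):

# pass the driver heap size when submitting the job
./bin/spark-submit --master spark://master_ip:7077 --driver-memory 4g my_kmeans.py

# or set it once in conf/spark-defaults.conf
# spark.driver.memory  4g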


What if it runs on a local machine?
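In local mode the driver and the executors share a single JVM, so the same option should still apply as long as it is given at submit time; a sketch (the 4g value is again just an example):

./bin/spark-submit --master "local[*]" --driver-memory 4g my_kmeans.py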