Python: memory error when reading nested text files from S3 into Spark


I'm trying to read roughly a million compressed text files from S3 into Spark. Each file is between 50 MB and 80 MB compressed, for about 6.5 TB of data in total.

Unfortunately, I'm running into an out-of-memory exception that I don't know how to resolve. Something as simple as:

import subprocess

raw_file_list = subprocess.Popen("aws s3 ls --recursive s3://my-bucket/export/", shell=True, stdout=subprocess.PIPE).stdout.read().strip().split('\n')
cleaned_names = ["s3://my-bucket/" + f.split()[3] for f in raw_file_list if not f.endswith('_SUCCESS')]
dat = sc.textFile(','.join(cleaned_names))
dat.count()
yields:

 ---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-22-8ce3c7d1073e> in <module>() ----> 1 dat.count()

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in count(self)
   1002         3
   1003         """
-> 1004         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1005 
   1006     def stats(self):

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in sum(self)
    993         6.0
    994         """
--> 995         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
    996 
    997     def count(self):

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in fold(self, zeroValue, op)
    867         # zeroValue provided to each partition is unique from the one provided
    868         # to the final reduce call
--> 869         vals = self.mapPartitions(func).collect()
    870         return reduce(op, vals, zeroValue)
    871 

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/rdd.pyc in collect(self)
    769         """
    770         with SCCallSiteSync(self.context) as css:
--> 771             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    772         return list(_load_from_socket(port, self._jrdd_deserializer))
    773 

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
    811         answer = self.gateway_client.send_command(command)
    812         return_value = get_return_value(
--> 813             answer, self.gateway_client, self.target_id, self.name)
    814 
    815         for temp_arg in temp_args:

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     43     def deco(*a, **kw):
     44         try:
---> 45             return f(*a, **kw)
     46         except py4j.protocol.Py4JJavaError as e:
     47             s = e.java_exception.toString()

/tmp/spark-tmp-lminer/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    306                 raise Py4JJavaError(
    307                     "An error occurred while calling {0}{1}{2}.\n".
--> 308                     format(target_id, ".", name), value)
    309             else:
    310                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.OutOfMemoryError: GC overhead limit exceeded
Update:


Part of the problem seems to have been resolved by this. Spark appears to struggle with listing and pulling this many files from S3. I've updated the error above so that it now reflects only the memory problem.
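As an aside, building the file list by shelling out to `aws s3 ls` and indexing `f.split()[3]` silently breaks on keys that contain spaces. A sketch of a more careful parse, assuming the standard four-column `aws s3 ls --recursive` output (date, time, size, key):

```python
def s3_keys(ls_output, bucket="my-bucket"):
    """Parse `aws s3 ls --recursive` output into s3:// paths.

    Splits each line at most 3 times so a key containing spaces
    stays intact, and skips _SUCCESS marker files.
    """
    paths = []
    for line in ls_output.strip().split('\n'):
        parts = line.split(None, 3)  # date, time, size, key
        if len(parts) < 4:
            continue
        key = parts[3]
        if key.endswith('_SUCCESS'):
            continue
        paths.append("s3://%s/%s" % (bucket, key))
    return paths
```

The same parsing could be applied to the `raw_file_list` lines in the question instead of `f.split()[3]`.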

The problem was the sheer number of files. The solution seems to be to reduce the number of partitions by reading a subset of the files at a time and coalescing them into a smaller number of partitions. You can't make the partitions too large, though: partitions in the 500-1000 MB range cause problems of their own.
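The batching described above can be sketched as follows. The helper is plain Python; the commented usage assumes an existing SparkContext `sc` and the `cleaned_names` list from the question, and the batch size and partition count are illustrative guesses, not tuned values:

```python
def chunk(names, size):
    """Split a list of S3 paths into fixed-size batches."""
    return [names[i:i + size] for i in range(0, len(names), size)]

# Hypothetical usage with an existing SparkContext `sc`:
# total = 0
# for batch in chunk(cleaned_names, 10000):
#     # Read one batch and shrink its partition count before counting,
#     # instead of handing all ~1M files to a single textFile() call.
#     rdd = sc.textFile(','.join(batch)).coalesce(1024)
#     total += rdd.count()
```

`coalesce` avoids a full shuffle when only reducing the partition count, which is what you want here; `repartition` would shuffle all 6.5 TB.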

Could you try allocating more memory via spark.driver.memory and spark.executor.memory, and possibly increasing the Java heap size as well?
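For reference, these settings can be passed at launch time. The values below are illustrative starting points only, not tuned recommendations, and `my_job.py` is a placeholder for your script:

```shell
# Illustrative values only -- size these to your cluster.
spark-submit \
  --driver-memory 8g \
  --executor-memory 16g \
  --conf spark.driver.maxResultSize=4g \
  my_job.py
```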