Python: The 's3' resource does not exist

Tags: python, pyspark, boto3

I used Python and boto3 to process some S3 files on Spark. When the job tried to download those files, it failed with "The 's3' resource does not exist".
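
For context, this is roughly what the download helper in the traceback below does; a minimal sketch with names reconstructed from the traceback (the /tmp destination is an assumption):

    import boto3

    def download_file_from_s3(bucket_name, file_path):
        # Creating the S3 service resource is the call that fails on the
        # executors (extract.py line 22 in the traceback).
        s3 = boto3.resource('s3')
        local_path = '/tmp/' + file_path.split('/')[-1]
        s3.Bucket(bucket_name).download_file(file_path, local_path)
        return local_path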

Since boto3 is not installed on every cluster node, I packaged the dependencies boto3 needs into a zip and submitted the job to the Spark cluster with --py-files, and this exception was raised:

    Py4JJavaErrorTraceback (most recent call last)
<ipython-input-3-8147865bf49c> in <module>()
      2 
      3 
----> 4 extractor.extract(paths)

/usr/local/lib/python2.7/site-packages/extract-1.0-py2.7.egg/extract.pyc in extract(self, target_files_path)
     52         try:
     53             sc = self.get_spark_context()
---> 54             self._extract_file(sc, target_files_path)
     55         finally:
     56             if sc:

/usr/local/lib/python2.7/site-packages/extract-1.0-py2.7.egg/extract.pyc in _extract_file(self, sc, target_files_path)
    109     def _extract_file(self, sc, target_files_path):
    110         file_rdd = sc.parallelize(target_files_path, len(target_files_path))
--> 111         result_rdd = file_rdd.map(lambda file_path: self.process(file_path, self.func)).collect()
    112         result_rdd.saveAsTextFile(self.result_path)
    113 

/usr/lib/spark/python/pyspark/rdd.py in collect(self)
    769         """
    770         with SCCallSiteSync(self.context) as css:
--> 771             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    772         return list(_load_from_socket(port, self._jrdd_deserializer))
    773 

/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
    811         answer = self.gateway_client.send_command(command)
    812         return_value = get_return_value(
--> 813             answer, self.gateway_client, self.target_id, self.name)
    814 
    815         for temp_arg in temp_args:

/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    306                 raise Py4JJavaError(
    307                     "An error occurred while calling {0}{1}{2}.\n".
--> 308                     format(target_id, ".", name), value)
    309             else:
    310                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 4 times, most recent failure: Lost task 2.3 in stage 0.0 (TID 15, ip-172-20-219-210.corp.hpicloud.net): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/usr/local/lib/python2.7/site-packages/extract-1.0-py2.7.egg/extract.py", line 111, in <lambda>
  File "./lib.zip/extract.py", line 115, in process
    local_path = download_file_from_s3(self.app_name, file_path)
  File "./lib.zip/extract.py", line 22, in download_file_from_s3
    s3 = boto3.resource('s3')
  File "./lib.zip/boto3/__init__.py", line 100, in resource
    return _get_default_session().resource(*args, **kwargs)
  File "./lib.zip/boto3/session.py", line 347, in resource
    has_low_level_client)
ResourceNotExistsError: The 's3' resource does not exist.
The available resources are:
   - 


    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/mnt/yarn/usercache/chqiang/appcache/application_1521024688288_67008/container_1521024688288_67008_01_000002/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/usr/local/lib/python2.7/site-packages/extract-1.0-py2.7.egg/extract.py", line 111, in <lambda>
  File "./lib.zip/extract.py", line 115, in process
    local_path = download_file_from_s3(self.app_name, file_path)
  File "./lib.zip/extract.py", line 22, in download_file_from_s3
    s3 = boto3.resource('s3')
  File "./lib.zip/boto3/__init__.py", line 100, in resource
    return _get_default_session().resource(*args, **kwargs)
  File "./lib.zip/boto3/session.py", line 347, in resource
    has_low_level_client)
ResourceNotExistsError: The 's3' resource does not exist.
The available resources are:
   - 


    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
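
The telling detail is the empty "available resources" list under the ResourceNotExistsError. The failure can usually be reproduced off-cluster by importing boto3 from inside the zip, which is how --py-files exposes it on the executors; a minimal sketch, assuming boto3 and botocore were vendored into lib.zip as in the traceback:

    import sys

    # Put the archive itself on sys.path, mimicking what --py-files
    # does on each executor.
    sys.path.insert(0, 'lib.zip')

    import boto3

    # Raises ResourceNotExistsError with an empty list of available
    # resources: the loader cannot enumerate boto3/botocore's JSON
    # service definitions through zipimport.
    s3 = boto3.resource('s3')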
Digging one level deeper, the underlying botocore error is:

    botocore.exceptions.DataNotFoundError: Unable to load data for: endpoints
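
This points at the likely root cause: boto3 and botocore read their service models (endpoints.json, service-2.json, resources-1.json, and so on) as plain files from the packages' data directories, and Python's zipimport does not expose those files on the real filesystem, so a boto3 shipped inside a --py-files zip has no models to load. The usual fix is to install boto3 on every node (for example via pip in a cluster bootstrap step) rather than zipping it. As an alternative, botocore consults the AWS_DATA_PATH environment variable for extra data directories, so the JSON data could be shipped as plain files and pointed at explicitly; a hedged sketch, where the unpack location /tmp/aws_data and the shipping mechanism are assumptions:

    import os

    # Assumption: the contents of boto3/data and botocore/data were
    # shipped as plain files (not inside the zip) and unpacked here.
    os.environ['AWS_DATA_PATH'] = '/tmp/aws_data'

    # AWS_DATA_PATH must be set before the default session is created.
    import boto3
    s3 = boto3.resource('s3')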