Amazon Web Services: Accessing S3 with the s3a protocol from Spark using Hadoop version 2.7.2


I'm trying to access S3 (via the s3a protocol) from pyspark (version 2.2.0) and I'm running into some difficulty.

I'm using the Hadoop and AWS SDK packages:

pyspark --packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2
Here is what my code looks like:

sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY)

rdd = sc.textFile('s3a://spark-test-project/large-file.csv')
print(rdd.first())
And this is what I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1361, in first
    rs = self.take(1)
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 1313, in take
    totalParts = self.getNumPartitions()
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/rdd.py", line 385, in getNumPartitions
    return self._jrdd.partitions().size()
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/Users/attazadeh/DataEngine/env/lib/python3.4/site-packages/pyspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o34.partitions.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 32750D3DED4067BD, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: jAhO0tWTblPEUehF1Bul9WZj/9G7woaHFVxb8gzsOpekam82V/Rm9zLgdLDNsGZ6mPizGZmo6xI=
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:194)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

Is this a bug in the AWS Java SDK? I'm new to Spark, so I don't know whether there is any way to get better logging information out of AWS beyond "AWS Error Code: null".

For what it's worth, I have this line in my spark-defaults.conf file on AWS:

spark.jars.packages com.amazonaws:aws-java-sdk:1.11.99,org.apache.hadoop:hadoop-aws:2.7.2
I also made sure that the security group I used when setting up my EC2 instance had access to S3.

After those two things, I had no problem reading files from S3:

%pyspark
df = spark.read.csv("s3a://my_bucket/name/")
Alternatively, if you are using AWS EMR, you should be able to access S3 directly:

%pyspark
df = spark.read.csv("s3://my_bucket/name/")
"Bad Request" is the dreaded message from S3; it means "this didn't work, and we won't tell you why".

There is an entire section on troubleshooting S3A in the Hadoop documentation.

If your bucket is hosted in a region that only supports the S3 "v4" authentication protocol (Frankfurt, London, Seoul), you need to set the fs.s3a.endpoint field to that specific region's endpoint... the documentation has the details.
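
As an illustration, a minimal sketch of what that could look like from PySpark, assuming the bucket is hosted in Frankfurt (eu-central-1); the endpoint value here is only an example and should be replaced with your bucket's actual regional endpoint:

# Assumption: `sc` is the SparkContext from the question, and the bucket lives
# in eu-central-1 (Frankfurt); substitute the endpoint for your bucket's region.
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")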

Otherwise, try using
s3a://landsat-pds/scene_list.gz
as a source. It is a public CSV file that requires no authentication. If you can't see it, then you have serious problems.
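
For example, a quick sanity check against that public object might look like this (assuming the SparkContext `sc` from the question is still in scope):

# Read the public landsat-pds scene list; no credentials should be needed.
test_rdd = sc.textFile("s3a://landsat-pds/scene_list.gz")
print(test_rdd.first())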
