
Python 3.x: Problem authenticating to S3 with temporary AWS tokens in PySpark

I have set up PySpark locally, but every time I try to read a file from S3 using the s3a protocol it returns a 403 AccessDenied error. The account I am trying to connect to only supports AWS assumeRole, which gives me a temporary access key, secret key, and session token.

I am using Spark 2.4.4, Hadoop 2.7.3, and the aws-java-sdk-1.7.4 jar. I know the problem is not my security token, because I can query the same bucket with boto3 using the same credentials. I am setting up the Spark session as follows:

conf = spark.sparkContext._conf.setAll([
    ('fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'),
    ('fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider'),
    ('fs.s3a.endpoint', 's3-ap-southeast-2.amazonaws.com'),
    ('fs.s3a.access.key', '...'),
    ('fs.s3a.secret.key', '...'),
    ('fs.s3a.session.token', '...')])

spark_01 = spark.builder.config(conf=conf).appName('s3connection').getOrCreate()

df = spark_01.read.load('s3a://<some bucket>')
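
For reference, a minimal sketch of the kind of boto3 check mentioned above, using the same temporary credentials; the bucket name, prefix, and region are placeholders and not taken from the original post:

# Rough sketch of the boto3 sanity check; '<some bucket>' and the region
# are placeholders, not values from the original post.
import boto3

s3 = boto3.client(
    's3',
    aws_access_key_id='...',
    aws_secret_access_key='...',
    aws_session_token='...',
    region_name='ap-southeast-2')

# Listing objects succeeds with the same temporary credentials that fail in Spark.
response = s3.list_objects_v2(Bucket='<some bucket>', MaxKeys=10)
print([obj['Key'] for obj in response.get('Contents', [])])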

Update: full error stack trace:

19/10/08 16:37:17 WARN FileStreamSink: Error while looking for metadata directory.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 166, in load
    return self._df(self._jreader.load(path))
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/local/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o47.load.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: DFF18E66D647F534, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: ye5NgB5wRhmHpn37tghQ0EuO9K6vPDE/1+Y6m3Y5sGqxD9iFOktFUjdqzn6hd/aHoakEXmafA9o=
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:976)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:956)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:892)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:557)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at scala.collection.immutable.List.flatMap(List.scala:355)
        at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)

To solve this problem, two things are needed. (I see you are already doing the second one in your code, so only the first is actually required.)

  • Use only hadoop-aws-2.8.5.jar, rather than aws-java-sdk-1.7.4.jar together with hadoop-aws-2.7.7.jar. (See the "Getting Started" section of the documentation.)
  • Set fs.s3a.aws.credentials.provider as follows, which your code already does: ('fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider'). This lets you use a session token. With this setting you can either supply the keys in code, as you show, or through the system environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN (a sketch of launching with this setup follows this list).

  • For reference, the setting ('fs.s3a.aws.credentials.provider', 'com.amazonaws.auth.DefaultAWSCredentialsProviderChain') is also useful for loading the credential keys from ~/.aws/credentials without putting them in the source code.
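
As a rough sketch of the suggestion above (the jar coordinates, app name, and settings are illustrative, not prescribed by the answer), pulling in hadoop-aws 2.8.5 through spark.jars.packages and keeping the TemporaryAWSCredentialsProvider configuration could look like this:

# Rough sketch of the approach described above. spark.jars.packages must be
# set before the session starts; it pulls in hadoop-aws and its matching
# aws-java-sdk as a transitive dependency. The 'spark.hadoop.' prefix copies
# the fs.s3a.* settings into the Hadoop configuration.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName('s3connection')
         .config('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:2.8.5')
         .config('spark.hadoop.fs.s3a.aws.credentials.provider',
                 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider')
         .config('spark.hadoop.fs.s3a.endpoint', 's3-ap-southeast-2.amazonaws.com')
         .config('spark.hadoop.fs.s3a.access.key', '...')
         .config('spark.hadoop.fs.s3a.secret.key', '...')
         .config('spark.hadoop.fs.s3a.session.token', '...')
         .getOrCreate())

# Alternatively, per the answer above, the keys can come from the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN
# environment variables instead of being hard-coded here.
df = spark.read.load('s3a://<some bucket>')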

    Could it be a problem related to the signature version?
    @Lamanus Could you elaborate on what you mean by signature version? Thanks.
    S3 access uses one of two signature versions, V2 and V4, for the communication between the client and S3. So you have to check whether your bucket requires V4; if it only works with V4, the client must be configured to use V4 signing.
    @Lamanus I know to set the following, but I still get the same error: conf = SparkConf().set("spark.executor.extraJavaOptions", "-Dcom.amazonaws.services.s3.enableV4=true"); this is followed by the same 403 Forbidden stack trace shown above.
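
For completeness, a minimal sketch of what enabling V4 signing typically involves, as discussed in the comments above: the enableV4 system property has to be set on both the driver and the executors, and fs.s3a.endpoint should point at the bucket's regional endpoint. The values are illustrative, and the thread does not confirm that this alone resolves the 403.

# Rough sketch of enabling V4 request signing. The enableV4 system property
# is set for both the driver and the executors; the endpoint value is the
# regional one already used in the question and is illustrative here.
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (SparkConf()
        .set('spark.driver.extraJavaOptions',
             '-Dcom.amazonaws.services.s3.enableV4=true')
        .set('spark.executor.extraJavaOptions',
             '-Dcom.amazonaws.services.s3.enableV4=true')
        .set('spark.hadoop.fs.s3a.endpoint', 's3-ap-southeast-2.amazonaws.com'))

spark = SparkSession.builder.config(conf=conf).appName('s3connection').getOrCreate()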