Scala: accessing a publicly available Amazon S3 file from Apache Spark


I have a publicly available Amazon S3 resource (a text file) and want to access it from Spark. That means I don't have any Amazon credentials. Simply downloading it works fine:

val bucket = "<my-bucket>"
val key = "<my-key>"

val client = new AmazonS3Client
val o = client.getObject(bucket, key)
val content = o.getObjectContent // <= can be read and used as input stream
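
Reading the same object through Spark is what fails. A minimal sketch of that kind of access (the SparkContext setup and the count() call are inferred from the stack trace below, not shown verbatim in the original):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("app").setMaster("local[*]")
val sc = new SparkContext(conf)

// read the public object through the s3a:// scheme and force evaluation
val rdd = sc.textFile(s"s3a://$bucket/$key")
println(rdd.count())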
When I try to read the file through Spark, I get the following error and stack trace:

Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
    at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
    at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
    at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1099)
    at com.example.Main$.main(Main.scala:14)
    at com.example.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
I don't want to provide any AWS credentials; I just want to access the resource anonymously (for now). How can I achieve that? I probably need to make it use something like an AnonymousAWSCredentialsProvider, but how do I plug that into Spark or Hadoop?

By the way, here is my build, just in case:

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.4.1",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
)
Update: after some investigation I can see why it doesn't work.

First, S3AFileSystem creates the AWS client with the following order of credential providers:

AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
    new BasicAWSCredentialsProvider(accessKey, secretKey),
    new InstanceProfileCredentialsProvider(),
    new AnonymousAWSCredentialsProvider()
);
The "accessKey" and "secretKey" values are taken from the Spark conf instance (the keys must be "fs.s3a.access.key" and "fs.s3a.secret.key", or the org.apache.hadoop.fs.s3a.Constants.ACCESS_KEY and org.apache.hadoop.fs.s3a.Constants.SECRET_KEY constants, which is more convenient).
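
For reference, real credentials would be supplied through exactly those properties; a minimal sketch with placeholder values:

import org.apache.hadoop.fs.s3a.Constants
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("app").setMaster("local[*]"))

// these are the properties S3AFileSystem hands to BasicAWSCredentialsProvider
sc.hadoopConfiguration.set(Constants.ACCESS_KEY, "<access-key>")  // fs.s3a.access.key
sc.hadoopConfiguration.set(Constants.SECRET_KEY, "<secret-key>")  // fs.s3a.secret.key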

Second, you can see that AnonymousAWSCredentialsProvider is the third option (lowest priority). What could be wrong with that? See the implementation of AnonymousAWSCredentials:

public class AnonymousAWSCredentials implements AWSCredentials {

    public String getAWSAccessKeyId() {
        return null;
    }

    public String getAWSSecretKey() {
        return null;
    }
}
It simply returns null for both the access key and the secret key. Sounds reasonable. But look inside AWSCredentialsProviderChain:

AWSCredentials credentials = provider.getCredentials();

if (credentials.getAWSAccessKeyId() != null &&
    credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());

    lastUsedProvider = provider;
    return credentials;
}
It skips a provider when both keys are null, which means the anonymous credentials can never be selected. This looks like a bug in aws-java-sdk-1.7.4. I tried the latest version, but it is incompatible with hadoop-aws-2.7.1.
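
For comparison, anonymous access itself works when the chain is bypassed; a sketch constructing the client directly with AnonymousAWSCredentials (same placeholder bucket and key as above):

import com.amazonaws.auth.AnonymousAWSCredentials
import com.amazonaws.services.s3.AmazonS3Client

// bypassing the provider chain: a client built on AnonymousAWSCredentials
// can still GET objects from a publicly readable bucket
val anonClient = new AmazonS3Client(new AnonymousAWSCredentials())
val obj = anonClient.getObject(bucket, key)
val stream = obj.getObjectContent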


Any other ideas?

I personally have never accessed public data from Spark. You can try to use dummy credentials, or create some just for this purpose. Set them directly on the SparkConf object:

val sparkConf: SparkConf = ???
val accessKeyId: String = ???
val secretAccessKey: String = ???
sparkConf.set("spark.hadoop.fs.s3.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3n.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3.awsSecretAccessKey", secretAccessKey)
sparkConf.set("spark.hadoop.fs.s3n.awsSecretAccessKey", secretAccessKey)
Alternatively, read the documentation of DefaultAWSCredentialsProviderChain to see where credentials are looked for. The list (order matters) is:

  • Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
  • Java system properties - aws.accessKeyId and aws.secretKey (see the sketch after this list)
  • The credentials profile file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
  • Instance profile credentials delivered through the Amazon EC2 metadata service
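
A minimal sketch of the system-property option (placeholder values, set before the first S3 client is created):

// the default chain falls back to these when nothing earlier in the list is set
System.setProperty("aws.accessKeyId", "<access-key-id>")
System.setProperty("aws.secretKey", "<secret-key>")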

It now seems that you can get anonymous access through the fs.s3a.aws.credentials.provider config key set to org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider, which correctly special-cases the anonymous provider. However, you need a hadoop-aws newer than 2.7, which means you also need a Spark distribution without bundled Hadoop.

Here is how I did it on Google Colab:

!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.3.1/spark-2.3.1-bin-without-hadoop.tgz
!tar xf spark-2.3.1-bin-without-hadoop.tgz
!pip install -q findspark
!pip install -q pyarrow
Now we install Hadoop on the side and set the output of hadoop classpath as SPARK_DIST_CLASSPATH, so that Spark can see it:

import os
!wget -q http://mirror.nbtelecom.com.br/apache/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz
!tar xf hadoop-2.8.4.tar.gz
os.environ['HADOOP_HOME']= '/content/hadoop-2.8.4'
os.environ["SPARK_DIST_CLASSPATH"] = "/content/hadoop-2.8.4/etc/hadoop:/content/hadoop-2.8.4/share/hadoop/common/lib/*:/content/hadoop-2.8.4/share/hadoop/common/*:/content/hadoop-2.8.4/share/hadoop/hdfs:/content/hadoop-2.8.4/share/hadoop/hdfs/lib/*:/content/hadoop-2.8.4/share/hadoop/hdfs/*:/content/hadoop-2.8.4/share/hadoop/yarn/lib/*:/content/hadoop-2.8.4/share/hadoop/yarn/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/lib/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/*:/content/hadoop-2.8.4/contrib/capacity-scheduler/*.jar"
Then we do the usual Colab Spark setup, but add the s3a packages and anonymous read support, which is what the question is about:

import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-without-hadoop"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.6,org.apache.hadoop:hadoop-aws:2.8.4 --conf spark.sql.execution.arrow.enabled=true --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider pyspark-shell'
Finally, we can create the session:

import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()

This is what helped me:

val session = SparkSession.builder()
  .appName("App")
  .master("local[*]") 
  .config("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
  .getOrCreate()

val df = session.read.csv(filesFromS3:_*)
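
Here filesFromS3 is just a sequence of s3a:// paths; a hypothetical example (bucket and file names are placeholders):

// any publicly readable objects work, since the anonymous provider is configured above
val filesFromS3: Seq[String] = Seq(
  "s3a://<public-bucket>/data/part-0000.csv",
  "s3a://<public-bucket>/data/part-0001.csv"
)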
Versions:

"org.apache.spark" %% "spark-sql" % "2.4.0",
"org.apache.hadoop" % "hadoop-aws" % "2.8.5",

Comments:

  • Something is still wrong. I added values for the keys you gave me (the exact string "aaa" as dummy credentials). I expected an auth error at worst, but I see the same exception: "Unable to load AWS credentials from any provider in the chain".
  • The correct keys have to be "spark.hadoop.fs.s3a.access.key" and "spark.hadoop.fs.s3a.secret.key". By the way, providing dummy values doesn't help either; now I get a 403 error. It looks like anonymous credentials for AWS S3 with Spark are simply not possible.
  • According to the source code the order of credential providers is different: AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(new BasicAWSCredentialsProvider(accessKey, secretKey), new InstanceProfileCredentialsProvider(), new AnonymousAWSCredentialsProvider()); and the anonymous one simply doesn't work.
  • Ah, sorry, I didn't notice you were using the s3a protocol. Have you tried s3n?
  • Did you ever have any success, maybe with more recent versions? No, I haven't tried in a while; I had even forgotten about it and no longer do anything with Amazon S3.
  • If this configuration fails, note that it requires hadoop-aws 2.8.0 or later.