Amazon S3: Spark read from S3 gives NullPointerException
Tags: amazon-s3, apache-spark

I am trying to process files from the S3 filesystem. I have exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and I have also set the configuration:
hadoopConf.set("fs.s3.awsAccessKeyId","<key>")
hadoopConf.set("fs.s3.awsSecretAccessKey","<secret>")
My build.sbt has the following dependencies:
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"
libraryDependencies += "net.java.dev.jets3t" % "jets3t" % "0.9.3"
I am using AWS IAM access keys. Am I missing something? Any help would be much appreciated.

Answer: Without more detail I am shooting in the dark, but I would say your URI path is invalid; that is a common mistake here.
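One quick way to sanity-check the path before handing it to Spark is to parse it with `java.net.URI` and confirm the scheme and bucket come out as expected. The bucket and key below are hypothetical placeholders:

```scala
import java.net.URI

// Hypothetical S3 path; replace with the one passed to sc.textFile.
val path = "s3n://my-bucket/input/data.txt"
val uri = new URI(path)

// With the jets3t-backed Hadoop 2.4 filesystems, a missing or wrong
// scheme, or an empty bucket (host) component, is a typical way to end
// up with a NullPointerException deep inside the FileSystem lookup.
println(uri.getScheme) // expected: s3n
println(uri.getHost)   // expected: my-bucket (the bucket name)
println(uri.getPath)   // expected: /input/data.txt (the object key)
```

If the scheme or host prints as null, the URI is malformed and Spark never reaches your credentials at all.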