
Apache Spark: reading a file from S3 in PySpark using org.apache.hadoop:hadoop-aws

Tags: apache-spark, amazon-s3, pyspark

I'm trying to read a file from S3 using hadoop-aws; the command used to run the code is included as a comment below. Please help me resolve this and understand what I'm doing wrong.

# run using command
# time spark-submit --packages org.apache.hadoop:hadoop-aws:3.2.1 connect_s3_using_keys.py

from pyspark import SparkContext, SparkConf
import configparser  # was "import ConfigParser"; the module is lower-case on Python 3
import pyspark

# create Spark context with Spark configuration
conf = SparkConf().setAppName("Deepak_1ST_job")
sc = SparkContext(conf=conf)
sc.setLogLevel("ERROR")

# reach through the Java gateway to get the underlying Hadoop configuration
hadoop_conf = sc._jsc.hadoopConfiguration()

config = configparser.ConfigParser()
config.read("/home/deepak/Desktop/secure/awsCred.cnf")
accessKeyId = config.get("aws_keys", "access_key")
secretAccessKey = config.get("aws_keys", "secret_key")

# note: the original post had "fs.s3n.impl", "fs3a.access.key" and
# "s3a.secret.key" here; the S3A configuration keys are spelled as below
hadoop_conf.set(
    "fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("fs.s3a.access.key", accessKeyId)
hadoop_conf.set("fs.s3a.secret.key", secretAccessKey)

sqlContext = pyspark.SQLContext(sc)

df = sqlContext.read.json("s3a://bucket_name/logs/20191117log.json")
df.show()
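
As a side note, SQLContext is kept in Spark 3 only for backwards compatibility; the same read is usually written against the SparkSession API, which can also forward Hadoop options without reaching into sc._jsc. A minimal sketch under the same assumptions as the code above (same credentials file, same placeholder bucket name):

from pyspark.sql import SparkSession
import configparser

# same external credentials file as in the question
config = configparser.ConfigParser()
config.read("/home/deepak/Desktop/secure/awsCred.cnf")

spark = (
    SparkSession.builder
    .appName("Deepak_1ST_job")
    # "spark.hadoop."-prefixed options are copied into the Hadoop configuration
    .config("spark.hadoop.fs.s3a.access.key", config.get("aws_keys", "access_key"))
    .config("spark.hadoop.fs.s3a.secret.key", config.get("aws_keys", "secret_key"))
    .getOrCreate()
)

df = spark.read.json("s3a://bucket_name/logs/20191117log.json")
df.show()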
Edit 1:

Since I'm new to PySpark, I don't know about these dependencies, and the error isn't easy to understand either.

Getting the error as:

File "/home/deepak/spark/spark-3.0.0-preview-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/sql/utils.py", line 98, in deco
  File "/home/deepak/spark/spark-3.0.0-preview-bin-hadoop3.2/python/lib/py4j-0.10.8.1-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o28.json.
: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;Ljava/lang/Object;)V
        at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:816)
        at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:792)
        at org.apache.hadoop.fs.s3a.S3AUtils.getAWSAccessKeys(S3AUtils.java:747)
        at org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider.
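
The failing signature, Preconditions.checkArgument(boolean, String, Object, Object), only exists in Guava releases newer than the one Spark bundles, so a quick sanity check is to look at which Guava jar the distribution ships. A minimal sketch, using the install path from the traceback above (adjust to your own layout):

import glob

# list the Guava jar bundled with this Spark distribution; spark-3.0.0-preview
# ships an old Guava (14.x), which predates the checkArgument overload that
# hadoop-aws-3.2.1 calls
print(glob.glob(
    "/home/deepak/spark/spark-3.0.0-preview-bin-hadoop3.2/jars/guava-*.jar"))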

I had the same problem with Spark 3.0.0 / Hadoop 3.2.

What worked for me was replacing hadoop-aws-3.2.1.jar in spark-3.0.0-bin-hadoop3.2/jars with hadoop-aws-3.2.0.jar. The NoSuchMethodError above is a Guava version clash: hadoop-aws-3.2.1 is built against a newer Guava than the one bundled with Spark, while hadoop-aws-3.2.0 still works with the bundled one.
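
One of the follow-up comments below asks how to actually get the jar. It is published on Maven Central, and the simplest route is to fetch it and drop it into Spark's jars directory. A minimal sketch of the download, with the destination path assumed from the traceback above (adjust to your own layout):

# fetch hadoop-aws-3.2.0.jar from Maven Central and place it in Spark's jars/
import shutil
import urllib.request

url = ("https://repo1.maven.org/maven2/org/apache/hadoop/"
       "hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar")
dest = ("/home/deepak/spark/spark-3.0.0-preview-bin-hadoop3.2/"
        "jars/hadoop-aws-3.2.0.jar")

with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
    shutil.copyfileobj(resp, out)

After swapping the jar, resubmit without the --packages org.apache.hadoop:hadoop-aws:3.2.1 flag so the 3.2.1 artifact is not pulled back onto the classpath. Note that hadoop-aws also expects the matching aws-java-sdk-bundle jar, which --packages had been fetching transitively; an alternative that resolves both for you is simply changing the flag to --packages org.apache.hadoop:hadoop-aws:3.2.0.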
Comments:

What is the error you're seeing? What have you tried so far to resolve it? Please add all those details so that someone can help you better.

@JayadeepJayaraman I've added the error, please check. I'm new to PySpark, working on a Linux machine.

I don't have hadoop-aws-3.2.1.jar, or any version of the hadoop-aws jar. How do I get this jar, and how do I set it up? Just wget it and move the file into the jars folder, or is there anything else I should do?

This worked for me!