Apache Spark: SparkException "Premature end of Content-Length delimited message body" when reading from S3 with PySpark

I am using the following code to read a CSV file from S3 on my local machine:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
import configparser
import os

conf = SparkConf()
conf.set('spark.jars', '/usr/local/spark/jars/aws-java-sdk-1.7.4.jar,/usr/local/spark/jars/hadoop-aws-2.7.4.jar')

# Tried setting these, but it did not help
conf.set('spark.executor.memory', '8g')
conf.set('spark.driver.memory', '8g')

spark_session = SparkSession.builder \
        .config(conf=conf) \
        .appName('s3-write') \
        .getOrCreate()

# getting S3 credentials from file
aws_profile = "lijo" #user profile name
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
access_key = config.get(aws_profile, "aws_access_key_id") 
secret_key = config.get(aws_profile, "aws_secret_access_key")

# hadoop configuration for S3
hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

# Tried setting these too, but it made no difference
hadoop_conf.set("fs.s3a.connection.maximum", "1000")
hadoop_conf.set("fs.s3.maxConnections", "1000")
hadoop_conf.set("fs.s3a.connection.establish.timeout", "50000")
hadoop_conf.set("fs.s3a.socket.recv.buffer", "8192000")
hadoop_conf.set("fs.s3a.readahead.range", "32M")

# 1) Read the CSV with the DataFrame reader
df = spark_session.read.csv("s3a://pyspark-lijo-test/auction.csv", header=True, mode="DROPMALFORMED")
df.show(2)
Below are my Spark standalone configuration details:

[('spark.driver.host', '192.168.0.49'),
 ('spark.executor.id', 'driver'),
 ('spark.app.name', 's3-write'),
 ('spark.repl.local.jars',
  'file:///usr/local/spark/jars/aws-java-sdk-1.7.4.jar,file:///usr/local/spark/jars/hadoop-aws-2.7.4.jar'),
 ('spark.jars',
  '/usr/local/spark/jars/aws-java-sdk-1.7.4.jar,/usr/local/spark/jars/hadoop-aws-2.7.4.jar'),
 ('spark.app.id', 'local-1594186616260'),
 ('spark.rdd.compress', 'True'),
 ('spark.driver.memory', '8g'),
 ('spark.driver.port', '35497'),
 ('spark.serializer.objectStreamReset', '100'),
 ('spark.master', 'local[*]'),
 ('spark.executor.memory', '8g'),
 ('spark.submit.pyFiles', ''),
 ('spark.submit.deployMode', 'client'),
 ('spark.ui.showConsoleProgress', 'true')]
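
For reference, a listing like the one above can be printed from the running session; a minimal sketch, assuming the dump was produced from the SparkContext:

# Print the active Spark configuration as (key, value) tuples
for key, value in spark_session.sparkContext.getConf().getAll():
    print(key, value)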
But I get the following error even when reading a 1 MB file:

Py4JJavaError: An error occurred while calling o43.csv.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, 192.168.0.49, executor driver): org.apache.spark.util.TaskCompletionListenerException: Premature end of Content-Length delimited message body (expected: 888,879; received: 16,360)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:145)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:124)
    at org.apache.spark.scheduler.Task.run(Task.scala:143)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
When I change the S3 read to the code below (2), it works, but then we need to convert the RDD to a DataFrame (a conversion sketch follows the snippet).

# 2) Read the same file as an RDD
data = spark_session.sparkContext.textFile("s3a://pyspark-lijo-test/auction.csv").map(lambda line: line.split(","))
data.take(2)  # RDDs have no show(); take() returns the first records
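
Converting that RDD back to a DataFrame takes an extra step. A minimal sketch, assuming the first line of the CSV is the header row and its values are used as the column names:

# Rebuild a DataFrame from the RDD of split lines (workaround 2)
raw = spark_session.sparkContext.textFile("s3a://pyspark-lijo-test/auction.csv") \
    .map(lambda line: line.split(","))

header = raw.first()                      # first row holds the column names
rows = raw.filter(lambda r: r != header)  # drop the header row
df_from_rdd = rows.toDF(header)           # all columns come back as strings
df_from_rdd.show(2)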

Why does the Spark SQL code in (1) fail to read even this small file? Is there some setting I am missing?

Found the issue. There seems to be some problem with Spark 3.0. After switching to the latest Spark 2.4.6 release, it works fine.
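
For reference, a minimal sketch of the same read on Spark 2.4.x; instead of hard-coding local jar paths, the S3A connector can be pulled in through spark.jars.packages (the coordinate below is an assumption and must match the Hadoop version your Spark build was compiled against):

from pyspark.sql import SparkSession

# Sketch only: hadoop-aws 2.7.x is assumed to match a Spark 2.4.x build
# compiled against Hadoop 2.7; it pulls aws-java-sdk 1.7.4 in transitively.
spark_session = SparkSession.builder \
    .appName('s3-write') \
    .config('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:2.7.4') \
    .getOrCreate()

hadoop_conf = spark_session._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", access_key)   # credentials read as above
hadoop_conf.set("fs.s3a.secret.key", secret_key)

df = spark_session.read.csv("s3a://pyspark-lijo-test/auction.csv",
                            header=True, mode="DROPMALFORMED")
df.show(2)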

Can you add any stack traces from the workers? This is not enough to start debugging.
@Lijo I am running Spark 2.4.7 with Hadoop 2.7.3, the hadoop-aws 2.7.3 jar and aws-java-sdk 1.7.4, but I still face the same issue. Can you share your exact versions and, if possible, where you got the Spark 2.4.6 build from?