
Apache Spark: where do I set socketTimeOut for S3 with Spark 1.6 and Hadoop 2.3?


We are getting socket timeouts while reading from S3. The documentation is not clear about how to increase socketTimeOut. Any help would be much appreciated.

java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:170)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
    at sun.security.ssl.InputRecord.read(InputRecord.java:503)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
    at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
    at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
    at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:198)
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
    at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:200)
    at org.apache.http.impl.io.ContentLengthInputStream.close(ContentLengthInputStream.java:103)
    at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:164)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:227)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
    at org.apache.http.util.EntityUtils.consume(EntityUtils.java:88)
    at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.releaseConnection(HttpMethodReleaseInputStream.java:102)
    at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.close(HttpMethodReleaseInputStream.java:194)
    at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:152)

There is no option to set the socket timeout in s3n. Note also what the stack trace shows: when the client calls close() on the stream in order to reopen it at a different position (a seek), it ends up reading all the way to the end of the file. Again, this is a known limitation of some jets3t versions, and hence of s3n. It will not be fixed in Hadoop, because all development effort now goes into s3a, which is built on the AWS libraries.
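To make that concrete, here is a minimal sketch (in Scala, with a hypothetical bucket and object name, credentials config omitted) of the access pattern behind the stack trace: any seek() on an s3n input stream closes and reopens the underlying HTTP stream, and on the affected jets3t versions close() drains the rest of the response body, so a seek early in a large object turns into a read to end-of-file that can exceed the socket timeout.

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Illustration only: "my-bucket" and "big-object" are placeholders.
val conf = new Configuration()
val fs = FileSystem.get(new URI("s3n://my-bucket/"), conf)
val in = fs.open(new Path("/big-object"))

val buf = new Array[Byte](4096)
in.read(buf)   // read a little from the head of the object
in.seek(0L)    // NativeS3FsInputStream.seek() closes the stream so it can
               // reopen at the new position; with the affected jets3t
               // versions, close() consumes the rest of the HTTP entity,
               // and on a large object that slow read-to-EOF is where the
               // SocketTimeoutException surfaces
in.close()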


Please update to more recent Hadoop JARs (2.7.x) and use s3a.
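Once you are on the 2.7.x hadoop-aws JARs, a minimal sketch of tuning the s3a timeouts from Spark might look like the following. The property names (fs.s3a.connection.timeout, fs.s3a.connection.establish.timeout, fs.s3a.attempts.maximum) are s3a's documented settings, and the spark.hadoop. prefix forwards them into the Hadoop Configuration; the app name, timeout values, and bucket path are placeholders.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("s3a-timeout-example")  // placeholder app name
  // Socket read timeout for S3 connections, in milliseconds
  .set("spark.hadoop.fs.s3a.connection.timeout", "500000")
  // Timeout for establishing the TCP connection, in milliseconds
  .set("spark.hadoop.fs.s3a.connection.establish.timeout", "30000")
  // How many times the AWS client retries a failed request
  .set("spark.hadoop.fs.s3a.attempts.maximum", "20")

val sc = new SparkContext(conf)
// Note the s3a:// scheme instead of s3n://
val lines = sc.textFile("s3a://my-bucket/path/to/data")
println(lines.count())
sc.stop()

The same keys (without the spark.hadoop. prefix) can also go into core-site.xml, or be passed with --conf on spark-submit.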


Did you find a solution?