
Apache Spark: Spark master node on AWS cannot pull data from an external source

Tags: apache-spark, amazon-s3, pyspark

I have a standalone Spark cluster set up on AWS, and a PySpark program that runs as expected on my local machine. When I submit the job to the master node on AWS with spark-submit, everything seems fine until I have to pull some data from S3.

This is the log I get for the submitted job on the master web UI:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/08/11 16:23:04 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 8911@ip-10-32-3-4
16/08/11 16:23:04 INFO SignalUtils: Registered signal handler for TERM
16/08/11 16:23:04 INFO SignalUtils: Registered signal handler for HUP
16/08/11 16:23:04 INFO SignalUtils: Registered signal handler for INT
16/08/11 16:23:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/11 16:23:04 INFO SecurityManager: Changing view acls to: spark,kasra
16/08/11 16:23:04 INFO SecurityManager: Changing modify acls to: spark,kasra
16/08/11 16:23:04 INFO SecurityManager: Changing view acls groups to: 
16/08/11 16:23:04 INFO SecurityManager: Changing modify acls groups to: 
16/08/11 16:23:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(spark, kasra); groups with view permissions: Set(); users  with modify permissions: Set(spark, kasra); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:70)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:174)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:270)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
    at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216)
    at scala.util.Try$.apply(Try.scala:192)
    at scala.util.Failure.recover(Try.scala:216)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at scala.concurrent.Promise$class.complete(Promise.scala:55)
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63)
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78)
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54)
    at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
    at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106)
    at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at scala.concurrent.Promise$class.tryFailure(Promise.scala:112)
    at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153)
    at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205)
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds
    ... 8 more
java.lang.IllegalArgumentException: requirement failed: TransportClient has not yet been set.
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.rpc.netty.RpcOutboxMessage.onTimeout(Outbox.scala:70)
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:232)
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:231)
    at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:138)
    at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
From the log above, it looks to me as if the master's access to S3 is being blocked.

The code behind this is:

sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", '...')
        sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", '...')
config_dict = {"fs.s3n.awsAccessKeyId":"...",
       "fs.s3n.awsSecretAccessKey":"..."}

rdd = sc.hadoopFile('s3n://kasra.districtm.ca/SEGMENTS/SMALL_CSV/aa',
            'org.apache.hadoop.mapred.TextInputFormat',
            'org.apache.hadoop.io.Text',
            'org.apache.hadoop.io.LongWritable',
            conf=config_dict)
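
Since the credentials are also set on the Hadoop configuration, a simpler sanity check could be to read the same object with sc.textFile. A minimal sketch under that assumption (same s3n path, s3n filesystem available on the classpath):

# Assumes the fs.s3n.* credentials set above and the s3n connector on the classpath.
rdd = sc.textFile('s3n://kasra.districtm.ca/SEGMENTS/SMALL_CSV/aa')
print(rdd.count())  # forces the read; this is where the job stalls if the cluster is unhealthy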

Note: using Spark 2.

The executors execute the tasks, and it is the executors that load the data from S3, not the master. Your cluster does not seem to be working at all; the key line is: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout. Your master cannot talk to your executors. You may want to verify this by running something as simple as
sc.parallelize([1, 2, 3]).count()

Thanks, I figured out where the problem was. Access to S3 was not the issue; the problem was that I had not set the IP address of the Spark driver host. I fixed it by adding:
conf = (SparkConf().set("spark.driver.host", "xxx.xxx.xxx.xxx"))
Explanation: if you are behind a VPN, you have to make sure that the IP you use is the one the Spark master node will try to use to communicate with your local machine.
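
A minimal end-to-end sketch of the fix (the master URL and IP addresses below are placeholders, not values from the original post):

from pyspark import SparkConf, SparkContext

# Placeholders: substitute the real master URL and the driver machine's IP
# address as the cluster sees it (e.g. the VPN-assigned address).
conf = (SparkConf()
        .setMaster('spark://xxx.xxx.xxx.xxx:7077')
        .set('spark.driver.host', 'xxx.xxx.xxx.xxx'))
sc = SparkContext(conf=conf)

# Quick sanity check that the driver and the executors can actually talk.
print(sc.parallelize([1, 2, 3]).count())  # should print 3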