
Spark: executing a Python script with Spark on a multi-node Hadoop cluster

Tags: hadoop, apache-spark, pyspark, cluster-computing


I'm looking at how to use Spark on a multi-node Hadoop cluster, and I have a question about running a pythonic script in cluster mode.

My configuration: In my Hadoop cluster I have:

  • 1 NameNode (master)
  • 2 DataNodes (slaves)
So, to make use of this cluster, I want to execute my script in Python. I know that Spark can be used in standalone mode, but I want to use my nodes.

My Python script: This is a very simple script that lets me count the words in a text file.

import sys
from pyspark import SparkContext

sc = SparkContext()

# Read the input file given as the first argument and split each line into words
lines = sc.textFile(sys.argv[1])
words = lines.flatMap(lambda line: line.split(' '))

# Classic word count: pair each word with 1, then sum the counts per word
words_with_1 = words.map(lambda word: (word, 1))
word_counts = words_with_1.reduceByKey(lambda count1, count2: count1 + count2)

# Collect the results to the driver and print them (Python 2 print syntax)
result = word_counts.collect()
for (word, count) in result:
    print word.encode("utf8"), count
My Spark command: To use Spark, I run:

time ./bin/spark-submit --master spark://master:7077 /home/hduser/count.py /data.txt
But this command runs Spark in standalone mode, right? How can I run Spark on my Hadoop cluster (with YARN, for example) so that the computation is parallel and distributed over the cluster?

I tried:

time ./bin/spark-submit --master yarn /home/hduser/count.py /data.txt
time ./bin/spark-submit --master yarn --deploy-mode cluster /home/hduser/count.py /data.txt
and I ran into some problems:

2018-03-15 10:13:14 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-15 10:13:15 INFO  SparkContext:54 - Running Spark version 2.3.0
2018-03-15 10:13:15 INFO  SparkContext:54 - Submitted application: count.py
2018-03-15 10:13:15 INFO  SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:15 INFO  SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:15 INFO  SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:15 INFO  SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:15 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hduser); groups with view permissions: Set(); users  with modify permissions: Set(hduser)$
2018-03-15 10:13:16 INFO  Utils:54 - Successfully started service 'sparkDriver' on port 40388.
2018-03-15 10:13:16 INFO  SparkEnv:54 - Registering MapOutputTracker
2018-03-15 10:13:16 INFO  SparkEnv:54 - Registering BlockManagerMaster
2018-03-15 10:13:16 INFO  BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-03-15 10:13:16 INFO  BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-03-15 10:13:16 INFO  DiskBlockManager:54 - Created local directory at /tmp/blockmgr-b131528e-849e-4ba7-94fe-c552572f12fc
2018-03-15 10:13:16 INFO  MemoryStore:54 - MemoryStore started with capacity 413.9 MB
2018-03-15 10:13:16 INFO  SparkEnv:54 - Registering OutputCommitCoordinator
2018-03-15 10:13:17 INFO  log:192 - Logging initialized @5400ms
2018-03-15 10:13:17 INFO  Server:346 - jetty-9.3.z-SNAPSHOT
2018-03-15 10:13:17 INFO  Server:414 - Started @5667ms
2018-03-15 10:13:17 INFO  AbstractConnector:278 - Started ServerConnector@4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:17 INFO  Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f867b0c{/jobs,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2a0105b7{/jobs/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3fd04590{/jobs/job,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2637750b{/jobs/job/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@439f0c7{/stages,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3978d915{/stages/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@596dc76d{/stages/stage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7054d173{/stages/stage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@47b526bb{/stages/pool,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7896fc75{/stages/pool/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2fd54632{/storage,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@79dcd5f2{/storage/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1732b48c{/storage/rdd,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5888874b{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5de9bebe{/environment,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@428593b4{/environment/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4011c9bc{/executors,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5cbfbc2a{/executors/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4c33f54d{/executors/threadDump,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@22c5d74c{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6cd7b681{/static,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5ee342f2{/,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4d68a347{/api,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1e878af1{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  ContextHandler:781 - Started o.s.j.s.ServletContextHandler@590aa379{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-03-15 10:13:17 INFO  SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://master:4040
2018-03-15 10:13:19 INFO  RMProxy:98 - Connecting to ResourceManager at master/172.30.10.64:8050
2018-03-15 10:13:20 INFO  Client:54 - Requesting a new application from cluster with 3 NodeManagers
2018-03-15 10:13:20 INFO  Client:54 - Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
2018-03-15 10:13:20 INFO  Client:54 - Will allocate AM container, with 896 MB memory including 384 MB overhead
2018-03-15 10:13:20 INFO  Client:54 - Setting up container launch context for our AM
2018-03-15 10:13:20 INFO  Client:54 - Setting up the launch environment for our AM container
2018-03-15 10:13:20 INFO  Client:54 - Preparing resources for our AM container
2018-03-15 10:13:24 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2018-03-15 10:13:29 INFO  Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_libs__580552500091841387.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__s$
2018-03-15 10:13:33 INFO  Client:54 - Uploading resource file:/usr/local/spark/python/lib/pyspark.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/pyspark.zip
2018-03-15 10:13:33 INFO  Client:54 - Uploading resource file:/usr/local/spark/python/lib/py4j-0.10.6-src.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/py4j-0.10.6-src.zip
2018-03-15 10:13:34 INFO  Client:54 - Uploading resource file:/tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606/__spark_conf__7840630163677580304.zip -> hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007/__$
2018-03-15 10:13:34 INFO  SecurityManager:54 - Changing view acls to: hduser
2018-03-15 10:13:34 INFO  SecurityManager:54 - Changing modify acls to: hduser
2018-03-15 10:13:34 INFO  SecurityManager:54 - Changing view acls groups to:
2018-03-15 10:13:34 INFO  SecurityManager:54 - Changing modify acls groups to:
2018-03-15 10:13:34 INFO  SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hduser); groups with view permissions: Set(); users  with modify permissions: Set(hduser)$
2018-03-15 10:13:34 INFO  Client:54 - Submitting application application_1521023754917_0007 to ResourceManager
2018-03-15 10:13:34 INFO  YarnClientImpl:251 - Submitted application application_1521023754917_0007
2018-03-15 10:13:34 INFO  SchedulerExtensionServices:54 - Starting Yarn extension services with app application_1521023754917_0007 and attemptId None
2018-03-15 10:13:35 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:35 INFO  Client:54 -
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1521105214408
         final status: UNDEFINED
         tracking URL: http://master:8088/proxy/application_1521023754917_0007/
         user: hduser
2018-03-15 10:13:36 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:37 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:38 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:39 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:40 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:41 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:42 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:43 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:44 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:45 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:46 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:47 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:48 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:49 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:50 INFO  Client:54 - Application report for application_1521023754917_0007 (state: ACCEPTED)
2018-03-15 10:13:51 INFO  Client:54 - Application report for application_1521023754917_0007 (state: FAILED)
2018-03-15 10:13:51 INFO  Client:54 -
         client token: N/A
         diagnostics: Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing cont$
Dump of the process-tree for container_1521023754917_0007_02_000001 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/co$
        |- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0$

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1521105214408
         final status: FAILED
         tracking URL: http://master:8088/cluster/app/application_1521023754917_0007
         user: hduser
2018-03-15 10:13:51 INFO  Client:54 - Deleted staging directory hdfs://master:54310/user/hduser/.sparkStaging/application_1521023754917_0007
2018-03-15 10:13:51 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)
2018-03-15 10:13:51 INFO  AbstractConnector:318 - Stopped Spark@4f835332{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-15 10:13:51 INFO  SparkUI:54 - Stopped Spark web UI at http://master:4040
2018-03-15 10:13:51 WARN  YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2018-03-15 10:13:51 INFO  YarnClientSchedulerBackend:54 - Shutting down all executors
2018-03-15 10:13:51 INFO  YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down
2018-03-15 10:13:51 INFO  SchedulerExtensionServices:54 - Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
2018-03-15 10:13:51 INFO  YarnClientSchedulerBackend:54 - Stopped
2018-03-15 10:13:51 INFO  MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-15 10:13:51 INFO  MemoryStore:54 - MemoryStore cleared
2018-03-15 10:13:51 INFO  BlockManager:54 - BlockManager stopped
2018-03-15 10:13:51 INFO  BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-15 10:13:51 WARN  MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-15 10:13:51 INFO  OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-15 10:13:52 INFO  SparkContext:54 - Successfully stopped SparkContext
Traceback (most recent call last):
  File "/home/hduser/count.py", line 4, in <module>
    sc = SparkContext()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 118, in __init__
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 180, in _do_init
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/context.py", line 270, in _initialize_context
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1428, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:214)
        at java.lang.Thread.run(Thread.java:748)

2018-03-15 10:13:52 INFO  ShutdownHookManager:54 - Shutdown hook called
2018-03-15 10:13:52 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-bbfad5cb-3d29-4f45-a1a9-2e37f2c76606
2018-03-15 10:13:52 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-f5d31d54-e456-4fcb-bf48-9f950233ad4b
But again I have a problem of time.

What am I not understanding? I'm very new to Big Data, so that's entirely possible :/

EDIT:

Here is what I get with:

yarn application -status application_1521023754917_0007

18/03/15 10:52:07 INFO client.RMProxy: Connecting to ResourceManager at master/172.30.10.64:8050
18/03/15 10:52:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Application Report : 
    Application-Id : application_1521023754917_0007
    Application-Name : count.py
    Application-Type : SPARK
    User : hduser
    Queue : default
    Start-Time : 1521105214408
    Finish-Time : 1521105231067
    Progress : 0%
    State : FAILED
    Final-State : FAILED
    Tracking-URL : http://master:8088/cluster/app/application_1521023754917_0007
    RPC Port : -1
    AM Host : N/A
    Aggregate Resource Allocation : 16329 MB-seconds, 15 vcore-seconds
    Diagnostics : Application application_1521023754917_0007 failed 2 times due to AM Container for appattempt_1521023754917_0007_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1521023754917_0007Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9363,containerID=container_1521023754917_0007_02_000001] is running beyond virtual memory limits. Current usage: 147.7 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1521023754917_0007_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 9369 9363 9363 9363 (java) 454 16 2250776576 37073 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg master:40388 --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 9363 9361 9363 9363 (bash) 0 0 12869632 742 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/tmp -Dspark.yarn.app.container.log.dir=/usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'master:40388' --properties-file /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1521023754917_0007/container_1521023754917_0007_02_000001/__spark_conf__/__spark_conf__.properties 1> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stdout 2> /usr/local/hadoop-2.7.5/logs/userlogs/application_1521023754917_0007/container_1521023754917_0007_02_000001/stderr 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
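
Incidentally, the diagnostic above ("2.1 GB of 2.1 GB virtual memory used") shows YARN's virtual-memory check killing the ApplicationMaster container: a 1 GB container is allowed 1 GB times the default yarn.nodemanager.vmem-pmem-ratio of 2.1. A commonly used workaround, sketched below with the standard Hadoop 2.x property names (to apply in yarn-site.xml on every NodeManager, then restart YARN), is to disable the check or raise the ratio:

<!-- yarn-site.xml sketch: turn off the virtual-memory check entirely... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- ...or keep the check but allow a larger virtual-to-physical ratio -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>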

For me, this spark-submit runs the Python script on all the Spark nodes:

spark-submit --master yarn \
             --deploy-mode cluster \
             --num-executors 1 \
             --driver-memory 2g \
             --executor-memory 1g \
             --executor-cores 1 \
             hdfs://<host>:<port>/home/hduser/count.py /data.txt
The Spark environment also needs to be extended with:

export PYSPARK_PYTHON=/opt/bin/python


Furthermore, the .py file needs to be on HDFS so that all Spark nodes in the cluster can read it, and the Spark user needs to have access to the .py file.
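
Putting this answer together with the logs from the question, a minimal end-to-end sketch could look like the following (the hdfs://master:54310 namenode address comes from the question's logs; the interpreter path /usr/bin/python is only an assumed example, and in cluster mode you may need to pass it as --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=... instead of an exported variable):

# Upload the script to HDFS so that every node in the cluster can read it
hdfs dfs -put /home/hduser/count.py hdfs://master:54310/user/hduser/count.py

# Point PySpark at the same interpreter on all nodes (assumed path)
export PYSPARK_PYTHON=/usr/bin/python

# Submit to YARN in cluster mode with explicit resource settings
spark-submit --master yarn \
             --deploy-mode cluster \
             --num-executors 1 \
             --driver-memory 2g \
             --executor-memory 1g \
             --executor-cores 1 \
             hdfs://master:54310/user/hduser/count.py /data.txt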

You first have to place the .py script at an HDFS location, using the correct namenode URL,

like: hdfs dfs -ls hdfs://hostname:1543/

If you see your file listed in the output, that is the correct path.

Next, execute:

./bin/spark-submit --master yarn hdfs://COMPLETEHOSTNAME:1543/count.py /data.txt


It will definitely work.

Comments:

  • Doesn't --master spark://master:7077 run a distributed job? Not sure about the other errors, but the first case should be fine.
  • It says "Error initializing SparkContext". You can also check yarn application -status <applicationId> for more details.
  • @BenWatson Ok, but that command uses the standalone Spark cluster manager; the job is not distributed. @Philantrover I edited my question with the result of your command.
  • It will be distributed over your cluster: you are submitting the job to the Spark master, which is in charge of several data nodes. The master supervises running the job on the data nodes; Spark abstracts that away from you so you don't have to worry about it.
  • It doesn't work; yarn-client is deprecated for the 2.x versions, so I would like to use the cluster deploy mode instead.
  • Sorry, you're right. I've updated my post, please let me know if it works for you…
  • The file is stored on my HDFS at hdfs://master:54310/user/hduser, but I get Client:54 - Application report for application_1521112157923_0011 (state: FAILED) after a few ACCEPTED steps…
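
As a final note, when an attempt ends in state FAILED like this, the full container logs (including any Python traceback) can usually be pulled from YARN afterwards, provided log aggregation is enabled (yarn.log-aggregation-enable set to true in yarn-site.xml):

yarn logs -applicationId application_1521112157923_0011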