MapReduce on EMR (Java) not contacting RMProxy and stuck waiting for the ResourceManager?


I'm running MapReduce/Hadoop on EMR with Hadoop 2.7.3 installed on AWS; the jar is built with the maven-shade-plugin. The job waits indefinitely for the ResourceManager, and I can find nothing relevant in the log files or online.

During job.waitForCompletion, the following lines appear:

2020-01-25 05:52:41,346 INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl (main): Timeline service address: http://ip-172-31-13-41.us-west-2.compute.internal:8188/ws/v1/timeline/
2020-01-25 05:52:41,356 INFO org.apache.hadoop.yarn.client.RMProxy (main): Connecting to ResourceManager at ip-172-31-13-41.us-west-2.compute.internal/172.31.13.41:8032
Then it just sits there... never making progress. I have to shut down the cluster or kill the job manually.
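For context, the driver is nothing unusual; here is a minimal sketch of its shape (class names, job name, and the jar path are illustrative, not my actual project):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Driver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "stuck-job");
        // The suspect line: point YARN at the shaded jar by explicit path.
        job.setJar("target/myjob-shaded.jar"); // illustrative path
        job.setMapperClass(Mapper.class);      // identity mapper, just for the sketch
        job.setReducerClass(Reducer.class);    // identity reducer
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Hangs here: "Connecting to ResourceManager ..." is the last log line.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}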


Interestingly, after hundreds of experiments running the job with hadoop jar, the offending line appears to be job.setJar(). Why, I have no idea. It works fine from IntelliJ, but crashes reliably when launched with the hadoop command, both locally and on EMR.
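For reference, a sketch of the two ways to tell Hadoop which jar to ship: the explicit-path form I was using, and the setJarByClass form that is usually recommended instead (the path is illustrative):

import org.apache.hadoop.mapreduce.Job;

public class JarConfig {
    // Configure which jar YARN localizes for the job; pick exactly one variant.
    static void configureJar(Job job) {
        // Variant A: explicit path; this is the call that seems to be the problem here.
        job.setJar("target/myjob-shaded.jar"); // illustrative path

        // Variant B: derive the jar from the class contained inside it.
        // job.setJarByClass(JarConfig.class);
    }
}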

Update: exit code -1000, which is not a common error. After 25 minutes or so, the job produces output of the form:


AM Container for appattempt_1580058321574_0005_000001 exited with exitCode: -1000
For more detailed output, check application tracking page:http://192.168.2.21:8088/cluster/app/application_1580058321574_0005Then, click on links to logs of each attempt.
Diagnostics: /Users/gbronner/hadoopdata/yarn/local/usercache/gbronner/appcache/application_1580058321574_0005/filecache/11_tmp/tmp_job.jar (Is a directory)
java.io.FileNotFoundException: /Users/gbronner/hadoopdata/yarn/local/usercache/gbronner/appcache/application_1580058321574_0005/filecache/11_tmp/tmp_job.jar (Is a directory)
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:225)
    at java.util.zip.ZipFile.<init>(ZipFile.java:155)
    at java.util.jar.JarFile.<init>(JarFile.java:166)
    at java.util.jar.JarFile.<init>(JarFile.java:130)
    at org.apache.hadoop.util.RunJar.unJar(RunJar.java:94)
    at org.apache.hadoop.yarn.util.FSDownload.unpack(FSDownload.java:297)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:364)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Failing this attempt
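Given the "(Is a directory)" diagnostic above, the localized tmp_job.jar is apparently a directory rather than a jar file, so one sanity check I can think of is validating the argument before it ever reaches job.setJar() (a sketch; the helper name is mine):

import java.io.File;
import java.io.FileNotFoundException;

public class JarPathGuard {
    // Fail fast if the path about to be passed to job.setJar() is not a real jar file.
    static String requireJarFile(String jarPath) throws FileNotFoundException {
        File f = new File(jarPath);
        if (!f.isFile()) {
            throw new FileNotFoundException(jarPath + " is not a regular file (is it a directory?)");
        }
        return jarPath;
    }
}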