hadoop3: "Could not find or load main class .nm-local-dir.usercache.hadoop.appcache..." when running the pi example test


I'm trying to set up a Hadoop 3 cluster on a local network of machines, starting small with one master node and two worker nodes.

Following a tutorial, I think I've reached a setup that should work. I downloaded Hadoop version 3.1.1.

The dfsadmin report:

hadoop@######:~/hadoop3/hadoop-3.1.1$ hdfs dfsadmin -report
Configured Capacity: 1845878235136 (1.68 TB)
Present Capacity: 355431677952 (331.02 GB)
DFS Remaining: 355427651584 (331.02 GB)
DFS Used: 4026368 (3.84 MB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 6
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0
Erasure Coded Block Groups: 
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 147511238656 (137.38 GB)
DFS Used: 2150400 (2.05 MB)
Non DFS Used: 46601465856 (43.40 GB)
DFS Remaining: 93390856192 (86.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 63.31%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:21 CEST 2018
Last Block Report: Thu Sep 06 18:08:09 CEST 2018
Num of Blocks: 17


Name: ######:9866 (######)
Hostname: ######
Decommission Status : Normal
Configured Capacity: 1698366996480 (1.54 TB)
DFS Used: 1875968 (1.79 MB)
Non DFS Used: 1350032670720 (1.23 TB)
DFS Remaining: 262036795392 (244.04 GB)
DFS Used%: 0.00%
DFS Remaining%: 15.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Sep 06 18:44:22 CEST 2018
Last Block Report: Thu Sep 06 18:08:10 CEST 2018
Num of Blocks: 12
So, before going any further and tuning resource management, I tried to run a simple test, and it failed.

Here is the pi example test:

hadoop@#####:~/hadoop3/hadoop-3.1.1$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar pi 2 10
Number of Maps  = 2
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Starting Job
2018-09-06 18:51:29,277 INFO client.RMProxy: Connecting to ResourceManager at nameMasterhost/IP:8032
2018-09-06 18:51:29,589 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1536250099280_0005
2018-09-06 18:51:29,771 INFO input.FileInputFormat: Total input files to process : 2
2018-09-06 18:51:30,338 INFO mapreduce.JobSubmitter: number of splits:2
2018-09-06 18:51:30,397 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2018-09-06 18:51:30,967 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1536250099280_0005
2018-09-06 18:51:30,970 INFO mapreduce.JobSubmitter: Executing with tokens: []
2018-09-06 18:51:31,175 INFO conf.Configuration: resource-types.xml not found
2018-09-06 18:51:31,175 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2018-09-06 18:51:31,248 INFO impl.YarnClientImpl: Submitted application application_1536250099280_0005
2018-09-06 18:51:31,295 INFO mapreduce.Job: The url to track the job: http://nameMAster:8088/proxy/application_1536250099280_0005/
2018-09-06 18:51:31,296 INFO mapreduce.Job: Running job: job_1536250099280_0005
2018-09-06 18:51:44,388 INFO mapreduce.Job: Job job_1536250099280_0005 running in uber mode : false
2018-09-06 18:51:44,390 INFO mapreduce.Job:  map 0% reduce 0%
2018-09-06 18:51:44,409 INFO mapreduce.Job: Job job_1536250099280_0005 failed with state FAILED due to: Application application_1536250099280_0005 failed 2 times due to AM Container for appattempt_1536250099280_0005_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2018-09-06 18:51:38.416]Exception from container-launch.
Container id: container_1536250099280_0005_02_000001
Exit code: 1
Exception message: /bin/mv: target '/nm-local-dir/nmPrivate/application_1536250099280_0005/container_1536250099280_0005_02_000001/container_1536250099280_0005_02_000001.pid' is not a directory


[2018-09-06 18:51:38.421]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp


[2018-09-06 18:51:38.422]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class .nm-local-dir.usercache.hadoop.appcache.application_1536250099280_0005.container_1536250099280_0005_02_000001.tmp


For more detailed output, check the application tracking page: http://nameMaster:8088/cluster/app/application_1536250099280_0005 Then click on links to logs of each attempt.
. Failing the application.
2018-09-06 18:51:44,438 INFO mapreduce.Job: Counters: 0
Job job_1536250099280_0005 failed!
I'll add any information that is requested, but since I don't understand what the problem is, I don't want to flood the question with all my configuration files if they aren't relevant.

There is no "/nm-local-dir/" in the HDFS filesystem. I don't understand where this path comes from.


Any help is welcome.

HDFS is storage; YARN is compute. If you want to use the cluster for anything other than pure storage, you will need YARN, which means you will need NodeManagers (NM).


The NodeManager is the service that actually executes tasks for you, so you need to define nm-local-dir in order to run jobs like pi. nm-local-dir needs to be defined in yarn-site.xml (the property is yarn.nodemanager.local-dirs), and it is a local directory on each host that runs a NodeManager (not HDFS!).
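Note that the failing class name ".nm-local-dir.usercache.hadoop.appcache..." looks like a relative path whose slashes were turned into dots, which is consistent with the local dir resolving to a relative path. A minimal yarn-site.xml sketch, assuming /data/yarn/nm-local-dir is a writable local directory on each worker (the path is an example, not taken from the original post):

```xml
<!-- yarn-site.xml on every host that runs a NodeManager -->
<property>
  <!-- Must be an absolute path on the local filesystem, not HDFS.
       The default is ${hadoop.tmp.dir}/nm-local-dir, which can end up
       relative if hadoop.tmp.dir is unset or misconfigured. -->
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn/nm-local-dir</value>
</property>
```

Restart the NodeManagers after changing this so the new directory takes effect.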

I won't have access to my machine for two weeks. I'll test this as soon as I'm back at the office. Thanks anyway.