Hadoop: the slave node's IP is incorrect

1. Host configuration:

 127.0.0.1          localhost  
 192.168.1.3        master  
 172.16.226.129     slave1
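For reference, a minimal sketch of the name resolution this setup seems to assume, with the same entries mirrored on master and on slave1 (whether the 192.168.1.x and 172.16.x subnets can actually reach each other is not shown in the question). On Ubuntu it is also worth checking for a default 127.0.1.1 <hostname> line, which can make a daemon register under a loopback address:

 # /etc/hosts, kept identical on master and on slave1 (sketch)
 127.0.0.1          localhost
 192.168.1.3        master
 172.16.226.129     slave1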
2. slaves file:

slave1
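For context, in a Hadoop 2.7.x layout this file normally lives at etc/hadoop/slaves under the installation directory; the HADOOP_HOME path below is an assumption:

# on master: workers are listed one hostname per line
cat $HADOOP_HOME/etc/hadoop/slaves
slave1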
3. jps:

zqj@master:/usr/local/nodetmp$ jps
5377 Jps
4950 SecondaryNameNode
4728 NameNode
5119 ResourceManager

zqj@slave1:/usr/local/hadooptmp$ jps
2514 NodeManager
2409 DataNode
2639 Jps
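jps only confirms that the daemons are running; it does not show which addresses they use. A sketch of checks for that (the NameNode RPC port depends on fs.defaultFS, commonly 8020 or 9000, so treat the port as an assumption; netstat may require the net-tools package):

# on master: which addresses do the HDFS/YARN daemons listen on?
sudo netstat -tnlp | grep java
# on slave1 (the VM): which addresses does it have, and which one does it use to reach the master?
hostname -I
ip route get 192.168.1.3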
4. hadoop dfsadmin -report:

zqj@master:/usr/local/nodetmp$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 22588977152 (21.04 GB)
Present Capacity: 16719790080 (15.57 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.1.3:50010 (master)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 22588977152 (21.04 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5869187072 (5.47 GB)
DFS Remaining: 16719765504 (15.57 GB)
DFS Used%: 0.00%
DFS Remaining%: 74.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 30 17:29:01 CST 2017
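Note the mismatch in the report above: the DataNode appears as Name 192.168.1.3:50010 (the master's address) but Hostname slave1. One common cause, offered here only as a guess, is inconsistent name resolution or NAT between the two machines, so that the NameNode sees the DataNode's traffic arriving under the master's own address. A quick way to compare what each side resolves:

# on master
getent hosts master slave1
# on slave1 (the VM)
getent hosts master slave1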

I would like to know why the IP is wrong when the namenode runs on the physical machine and the datanode runs in a virtual machine. Thanks.


When I use a virtual machine as the namenode, everything works fine and the IP is correct. Do I need to configure something like a gateway or an IP address in VMware?
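Whether anything needs to change in VMware depends on the network mode, and this is only a guess from the addresses in the question: 172.16.226.129 looks like a VMware NAT subnet, and under NAT the VM's traffic can appear to the host network under the host's own address, whereas bridged mode gives the VM its own address on the 192.168.1.x LAN. To see what the VM currently has:

# on slave1 (the VM)
ip addr show      # current address, e.g. 172.16.226.129
ip route          # default gateway; with VMware NAT this is typically the .2 host of the NAT subnet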

Place the slaves file on the master node (not on the slave node), and I assume the host configuration is also on the master node; that should solve this problem.
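A sketch of applying that suggestion, assuming a standard Hadoop 2.7.3 layout under $HADOOP_HOME (the paths are assumptions, not taken from the question):

# on slave1: the slaves file is only read by the start scripts on the master
rm $HADOOP_HOME/etc/hadoop/slaves
# on master: keep slaves (containing: slave1) and /etc/hosts in place, then restart HDFS
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh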

So what type of cluster are you running, pseudo-distributed or fully distributed? @siddharthajain Fully distributed mode.
Which version of Hadoop? Have you configured the slaves file? @Siddhartajain Hadoop 2.7.3, and I have confirmed that the slaves file is configured.
You can search Stack Overflow for the keyword "wrong hadoop datanode"; the first search result is similar to my problem. Thanks for your patience.
That question relates to Hadoop 1.x. In your case you have not configured the hosts file the way he did. You need to configure the masters file for the namenode.
I tried what you said, but it still does not work. I have deleted the slaves file on the slave node.