Java DataNode cannot connect to the NameNode - "org.apache.hadoop.ipc.Client: Retrying connect to server"

I have deployed a Hadoop 3.1.2 cluster with 1 NameNode and 2 DataNodes. The NameNode is up, and the SecondaryNameNode and ResourceManager on the master node are up as well, but the DataNodes cannot connect to the NameNode, so no capacity is reported.

I have been trying to track down the cause, but no luck so far.

I removed domain resolution because I was getting strange errors:

WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [server]
lim_sbo_bigdata_master: ERROR: Cannot set priority of namenode process 11606
Starting datanodes
Starting secondary namenodes [server]
lim_sbo_bigdata_master: ERROR: Cannot set priority of secondarynamenode process 11825
Starting resourcemanager
Starting nodemanagers


* SELinux is disabled
* IPtables is OPEN for all traffic:

[hadoop@lim_server]$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination       
* Servers are on the same network (a quick reachability check is sketched below)
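
To make that check concrete, here is a minimal sketch, assuming the NameNode address 10.30.17.228:9000 that appears in the DataNode logs further down:

# From each DataNode: verify that the NameNode RPC port is reachable.
nc -vz 10.30.17.228 9000

# On the NameNode: see which address port 9000 is actually bound to.
# An entry on 127.0.0.1:9000 would explain why remote DataNodes cannot connect.
ss -tlnp | grep 9000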
NameNode:

[hadoop@server ~]$ hadoop version
Hadoop 3.1.2
Source code repository https://github.com/apache/hadoop.git -r 1019dde65bcf12e05ef48ac71e84550d589e5d9a
Compiled by sunilg on 2019-01-29T01:39Z
Compiled with protoc 2.5.0
From source with checksum 64b8bdd4ca6e77cce75a93eb09ab2a9
This command was run using /home/hadoop/hadoop-3.1.2/share/hadoop/common/hadoop-common-3.1.2.jar

[hadoop@server ~]$ jps
27089 Jps
26760 ResourceManager
26491 SecondaryNameNode
26239 NameNode

[hadoop@server ~]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

fs.default.name is set to:

<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>

DataNode error:

[hadoop@server_2]$ jps
17052 DataNode
17166 NodeManager
17406 Jps

2019-08-27 05:46:09,086 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9867
2019-08-27 05:46:09,229 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:9867
2019-08-27 05:46:09,243 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2019-08-27 05:46:09,251 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2019-08-27 05:46:09,260 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to /10.30.17.228:9000 starting to offer service
2019-08-27 05:46:09,265 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-08-27 05:46:09,265 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9867: starting
2019-08-27 05:46:10,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 10.30.17.228/10.30.17.228:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-08-27 05:46:11,331 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 10.30.17.228/10.30.17.228:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

Try changing "localhost" to the actual hostname or IP of the NameNode.

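For illustration, a minimal core-site.xml sketch of that change; the IP 10.30.17.228 is taken from the question's DataNode logs, and fs.defaultFS is the current name of the deprecated fs.default.name key:

<!-- core-site.xml (on the NameNode and every DataNode): point HDFS at the
     NameNode's real address instead of localhost. Substitute your own
     hostname or IP for 10.30.17.228 (assumed from the question's logs). -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://10.30.17.228:9000</value>
</property>

With the value left at hdfs://localhost:9000, the NameNode RPC server typically binds only to the loopback interface, which matches the symptom above: the DataNodes keep retrying 10.30.17.228:9000 and never get a connection.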

When I do that, I get the following error: lim_server: ERROR: Cannot set priority of namenode process 11606 / Starting datanodes / Starting secondary namenodes [server] / lim_server: ERROR: Cannot set priority of secondarynamenode process 11825 / Starting resourcemanager / Starting nodemanagers

If that is the issue you are hitting, could you also share your hdfs-site.xml and hadoop-env.sh configs?

Hello guys, I solved it by using the IP address in hdfs-site.xml and changing mapred-site.xml and yarn-site.xml accordingly.
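
As a sketch of what that comment describes, these are the standard address-related keys in each file; the property names are real Hadoop configuration keys, but the values (IP and ports) are assumptions based on this question's setup:

<!-- hdfs-site.xml: pin the NameNode RPC endpoint to a routable address
     (10.30.17.228 is assumed from the question; use your own). -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>10.30.17.228:9000</value>
</property>

<!-- yarn-site.xml: point NodeManagers at the ResourceManager host. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>10.30.17.228</value>
</property>

<!-- mapred-site.xml: job history server address, if the history server
     is used (port 10020 is the conventional default). -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>10.30.17.228:10020</value>
</property>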