
Hadoop: all services working in a multi-node cluster according to JPS, but "hdfs dfsadmin -report" shows nothing


As I can see when I run the JPS command on the namenode and on the corresponding slave nodes, slave1 and slave2, all of the services are running.

However, when I check with the "hdfs dfsadmin -report" command, I get the result shown further below. First, the JPS output on each node:

[root@master ~]# jps
10197 SecondaryNameNode
10805 Jps
10358 ResourceManager
9998 NameNode

[root@slave1 ~]# jps
5872 NodeManager
5767 DataNode
6186 Jps

[root@slave2 ~]# jps
5859 Jps
5421 DataNode 
5534 NodeManager
Here is the problem. I know there are many posts on this particular topic, and I have already gone through them: I disabled my firewall, reformatted the cluster after fixing the datanode cluster ID, and resolved the IP issue in VirtualBox where I was getting duplicate packets when pinging the slaves from the master.
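For reference, one way to confirm that the cluster IDs actually match after a reformat is to compare the VERSION files on the namenode and datanodes. This is a minimal sketch; the /hadoop/namenode and /hadoop/datanode paths are assumptions and should be replaced by whatever dfs.namenode.name.dir and dfs.datanode.data.dir are set to in hdfs-site.xml:

# On the master: the clusterID the namenode was formatted with
# (assumes dfs.namenode.name.dir=/hadoop/namenode; adjust to your hdfs-site.xml)
grep clusterID /hadoop/namenode/current/VERSION

# On each slave: the clusterID the datanode last registered with
# (assumes dfs.datanode.data.dir=/hadoop/datanode)
grep clusterID /hadoop/datanode/current/VERSION

# If the two IDs differ, the datanode refuses to register and its log
# shows "Incompatible clusterIDs". Wiping the datanode directory (this
# destroys its blocks) lets it re-register with the new clusterID:
rm -rf /hadoop/datanode/*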

The datanodes don't seem to start, and even the one time they luckily did, I got the error shown at the very bottom of this post while copying a file onto HDFS. Here is the empty report:

[root@master ~]# hdfs dfsadmin -report
17/09/01 12:11:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
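An all-zero report like this means that no datanode has registered with the namenode, even though the DataNode JVMs are up. Two quick checks, as a sketch; the log path uses the default Hadoop log layout and the web UI port is the Hadoop 2.x default (50070), both of which may differ per install:

# On each slave: the tail of the datanode log shows either a successful
# block pool registration or the reason it cannot reach the namenode
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log

# On the master: the namenode's JMX endpoint reports the live datanode
# count (look for "NumLiveDataNodes" in the JSON output)
curl -s 'http://master:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'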

The fsck command works fine, but it is of no use since it comes back just as empty as the dfsadmin -report.

What is in the datanode logs? Any errors?

2017-09-01 12:52:47,005 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.187:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS). That is the last entry in the log file, so there are no errors in it. Also, I am able to ping the namenode from the datanodes successfully, without any duplicate packets.

Did you solve this?
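That retry loop is the telltale symptom: the DataNode process is alive but cannot open a connection to the namenode's RPC port 8020 on master. On VirtualBox setups a common cause is the namenode binding only to a loopback address because of an /etc/hosts entry. A sketch of the usual checks follows; the hosts entries and the fs.defaultFS value are illustrative assumptions:

# On the master: which address is port 8020 bound to?
# 127.0.0.1:8020 means only local processes can ever connect.
netstat -tlnp | grep 8020

# /etc/hosts on every node should map hostnames to the real LAN IPs,
# and "master" must not also resolve to 127.0.0.1 or 127.0.1.1:
#   192.168.1.187  master
#   192.168.1.188  slave1
#   192.168.1.189  slave2

# core-site.xml on all nodes should name the same authority, e.g.:
#   <property>
#     <name>fs.defaultFS</name>
#     <value>hdfs://master:8020</value>
#   </property>

# From a slave: verify the RPC port is actually reachable
# (a successful ping proves nothing about TCP port 8020).
telnet master 8020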
[root@master ~]# hdfs dfs -moveFromLocal /home/master/Downloads/citibike.tar /user/citibike
17/09/01 12:17:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/01 12:17:34 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1628)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3121)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3045)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1588)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
moveFromLocal: File /user/citibike._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
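The "0 datanode(s) running" in the exception matches the empty report: the namenode has no registered datanodes, so it cannot place even a single replica. Once the connectivity or clusterID issue is fixed, a restart-and-verify sequence could look like the following sketch, using the standard Hadoop 2.x sbin scripts:

# On the master: restart HDFS
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# Give the datanodes a few seconds to register, then re-check:
hdfs dfsadmin -report
# A healthy report shows "Live datanodes (2):" and a non-zero
# Configured Capacity; the moveFromLocal should then succeed:
hdfs dfs -moveFromLocal /home/master/Downloads/citibike.tar /user/citibike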