Hadoop: There are 0 datanode(s) running and no node(s) are excluded in this operation


I deployed a Hadoop cluster on VMware; all of the machines run CentOS 7.

Issuing the jps command on the master:

[root@hadoopmaster anna]# jps
6225 NameNode
6995 ResourceManager
6580 SecondaryNameNode
7254 Jps
Issuing the jps command on the slave:

[root@hadoopslave1 anna]# jps
5066 DataNode
5818 Jps
5503 NodeManager
However, I do not know why the live nodes count shows 0, and hdfs dfs -put in/file/f1 fails with this error message:

[root@hadoopmaster hadoop]# hdfs dfs -put in/file/f1 /user
16/01/06 02:53:14 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
put: File /user._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
I have also tried the fixes suggested in other similar posts, e.g.

rm -R /tmp/*
and then checked ssh.

On the master:

[root@hadoopmaster hadoop]# ssh hadoopmaster
Last login: Wed Jan  6 02:56:27 2016 from hadoopslave1
[root@hadoopmaster ~]# exit
logout
Connection to hadoopmaster closed.
[root@hadoopmaster hadoop]# ssh hadoopslave1
Last login: Wed Jan  6 02:43:21 2016
[root@hadoopslave1 ~]# exit
logout
Connection to hadoopslave1 closed.
[root@hadoopmaster hadoop]#
On the slave:

[root@hadoopslave1 .ssh]# ssh hadoopmaster
Last login: Wed Jan  6 03:04:45 2016 from hadoopmaster
[root@hadoopmaster ~]# exit
logout
Connection to hadoopmaster closed.
[root@hadoopslave1 .ssh]# ssh hadoopslave1
Last login: Wed Jan  6 03:04:40 2016 from hadoopmaster
[root@hadoopslave1 ~]# exit
logout
Connection to hadoopslave1 closed.
[root@hadoopslave1 .ssh]# 

You need to look at the datanode logs to confirm that the datanode on the slave is actually running properly. Running the jps command alone is not enough, because a datanode process can stay alive while it is disconnected from the namenode. If the configuration files are correct, run the following steps (a command sketch follows the list):

  • Run stop-all.sh
  • Run jps on all nodes; if any Hadoop processes are still running, kill them
  • Run start-all.sh
  • Run jps on all nodes again
  • Check the namenode and datanode logs to confirm everything came up cleanly
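
A minimal sketch of that sequence, assuming the standard scripts under $HADOOP_HOME/sbin and the default log file naming (hadoop-<user>-<daemon>-<hostname>.log; adjust the names to your setup):

$HADOOP_HOME/sbin/stop-all.sh
# On every node: check for leftover Hadoop processes and kill them if necessary
jps
kill -9 <pid-of-leftover-hadoop-process>
$HADOOP_HOME/sbin/start-all.sh
jps
# Inspect the logs for errors
tail -n 100 $HADOOP_HOME/logs/hadoop-root-namenode-hadoopmaster.log
tail -n 100 $HADOOP_HOME/logs/hadoop-root-datanode-hadoopslave1.log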

Run the following command from the namenode to make sure the datanodes are running properly:

bin/hadoop dfsadmin -report
You should see a report like this:

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 176945963008 (164.79 GB)
DFS Used: 2140192768 (1.99 GB)
Non DFS Used: 42513027072 (39.59 GB)
DFS Remaining: 132292743168(123.21 GB)
DFS Used%: 1.21%
DFS Remaining%: 74.76%
Last contact: Wed Jan 06 20:04:51 IST 2016

I solved a similar problem by configuring the machines in /etc/hosts. Looking at the datanode logs showed that the datanodes could not resolve the namenode.
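
For example, a minimal /etc/hosts sketch on every node might look like the following (the IP addresses here are placeholders for illustration; use your cluster's real addresses, and make sure the hostnames are not also mapped to 127.0.0.1):

192.168.56.101  hadoopmaster
192.168.56.102  hadoopslave1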

I had the same problem:

copyFromLocal: File ._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.

I solved it by freeing up some disk space. You can also try stopping the datanode and restarting it, as shown below.
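
To restart only the datanode, a sketch using the standard Hadoop 2.x daemon script (run on the slave itself; the script path is assumed to be under $HADOOP_HOME/sbin):

$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode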

This is most likely caused by a lack of available disk space. Check with:

df -h
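
To see which local directories the datanode stores its blocks in, and therefore which filesystem needs the free space, you can query the configured value with the standard hdfs getconf tool:

hdfs getconf -confKey dfs.datanode.data.dir
# then check free space on the filesystem holding that directory, e.g.
df -h <that-directory>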


There are some similar questions and answers about this issue.

Note that executing dfsadmin through the hadoop script is deprecated in newer releases.
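
For reference, the non-deprecated form of the report command shown above is:

hdfs dfsadmin -report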