Hadoop put command throws "could only be replicated to 0 nodes, instead of 1"

I am new to Hadoop. I am trying to set up pseudo-distributed mode on my Ubuntu machine and am running into a problem with the Hadoop put command. My configuration details can be found in this post -->

Now I am trying to add some files to HDFS using the following commands:

hadoop fs -mkdir /user/myuser

hadoop fs -lsr /

$ ./hadoop fs -lsr /
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/dfs
-rw-r--r--   1 myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/dfs/name
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /tmp/hadoop-myuser/mapred
drwx------   - myuser supergroup          0 2014-11-26 16:12 /tmp/hadoop-myuser/mapred/system
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:04 /user
drwxr-xr-x   - myuser supergroup          0 2014-11-26 16:06 /user/myuser
Now when I run the put command, I get the following exception:

$ ./hadoop fs -put example.txt .
14/11/26 16:06:19 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)

14/11/26 16:06:19 WARN hdfs.DFSClient: Error Recovery for null bad datanode[0] nodes == null
14/11/26 16:06:19 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/myuser/example.txt" - Aborting...
put: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
14/11/26 16:06:19 ERROR hdfs.DFSClient: Failed to close file /user/myuser/example.txt
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/myuser/example.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)
Can anyone help me resolve this issue?

Solution to the problem:

Based on the answers provided, I was able to resolve the issue with the following steps:

1) Stop all the services:

./stop-all.sh
2) Delete the data directory (note: this erases the datanode's block data, which is acceptable on a freshly set-up cluster):

rm -rf /tmp/hadoop-myuser/dfs/data/
3) Start the services again:

./start-all.sh
4) Then put the file into HDFS:

./hadoop fs -put example.txt .
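
If the put now succeeds, re-listing the home directory should show the file, e.g.:

./hadoop fs -ls /user/myuser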

This is caused by a datanode problem. Start your datanode and then try the operation again.
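
As a concrete sketch for the Hadoop 1.x layout used in this question, the datanode can be started on its own from the bin directory:

./hadoop-daemon.sh start datanode
jps    # DataNode should now appear in the process list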

Have you checked that dfs.replication in hdfs-site.xml has a minimum value of 1? I think you may have set the replication to 0.
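
A quick way to verify the setting, assuming the standard Hadoop 1.x conf layout (adjust the path to your install):

grep -A1 'dfs.replication' $HADOOP_HOME/conf/hdfs-site.xml
# expected output if replication is configured correctly:
#   <name>dfs.replication</name>
#   <value>1</value>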

Also check whether all the Hadoop services are running.

To check their running status, run the jps command.
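
For reference, on a healthy Hadoop 1.x pseudo-distributed node, jps should list all five daemons plus itself (the PIDs below are illustrative):

$ jps
2287 NameNode
2399 DataNode
2511 SecondaryNameNode
2623 JobTracker
2735 TaskTracker
2801 Jps

If DataNode is missing from this list, that matches the "replicated to 0 nodes" error above.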

To start the services individually, go to …\hadoop\bin\ and run:

start hadoop {datanode \ namenode}
start yarn {nodemanager \ resourcemanager}
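
Since the question is on Ubuntu with a Hadoop 1.x layout (stop-all.sh, JobTracker in the stack trace), the Linux equivalent of starting the daemons individually would be, as a sketch run from $HADOOP_HOME/bin:

./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start datanode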


Comments:

What happens with a fully qualified HDFS name, i.e. ./hadoop fs -put ./example.txt /user/myuser/example.txt?

@davek, I get the same error message.

Have you checked that dfs.replication in hdfs-site.xml has a minimum value of 1? I think you may have set the replication to 0. Also check whether all the Hadoop services are still running.

dfs.replication is set to 1 in my xml file. How can I check whether the Hadoop services are down? Please let me know.

Check with the jps command.

If this answer was right for you, please accept it.

Could you tell me how to check whether the datanode is down, and how to start the datanode?

With the jps command you can see which Hadoop services are running; you can start the datanode with the hadoop datanode command under the hadoop_home/bin path.

Great, with jps I can see that the datanode is down. Now when I try to start the datanode, I get this error:

14/11/26 17:32:32 ERROR datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-chaitanya/dfs/data: namenode namespaceID = 1198050192; datanode namespaceID = 976283118

Stop the namenode, delete the datanode directories in dfs.datanode.data.dir, then start the namenode and datanode. This will help you.
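
For reference, a minimal sketch of that last suggestion, using the paths from the comments above (warning: removing the datanode directory deletes all HDFS block data, acceptable here only because the cluster was just set up):

./hadoop-daemon.sh stop namenode
rm -rf /tmp/hadoop-chaitanya/dfs/data/
./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start datanode    # the empty datanode adopts the namenode's namespaceID
jps                                  # verify that DataNode stays up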