Hadoop: Error when copying a local directory to HDFS


When I try to copy a directory of 3 files into HDFS, I get the following error:

     hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg
12/08/01 23:48:46 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1
12/08/01 23:48:46 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/08/01 23:48:46 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hduser/gutenberg/gutenberg/pg20417.txt" - Aborting...
copyFromLocal: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hduser/gutenberg/gutenberg/pg20417.txt could only be replicated to 0 nodes, instead of 1
My fsck output is:

hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop fsck -blocks
FSCK started by hduser from /127.0.0.1 for path / at Wed Aug 01 23:50:49 IST 2012
Status: HEALTHY
Total size: 0 B
Total dirs: 10
Total files:    0 (Files currently being written: 2)
Total blocks (validated):   0
Minimally replicated blocks:    0
Over-replicated blocks: 0
Under-replicated blocks:    0
Mis-replicated blocks:      0
Default replication factor: 1
Average block replication:  0.0
Corrupt blocks:     0
Missing replicas:       0
Number of data-nodes:       0
Number of racks:        0
FSCK ended at Wed Aug 01 23:50:49 IST 2012 in 3 milliseconds


The filesystem under path '/' is HEALTHY
Also, when I try to format the namenode, I run into the following:

hduser@saket-K53SM:/usr/local/hadoop$ bin/hadoop namenode -format
12/08/01 23:53:07 INFO namenode.NameNode: STARTUP_MSG: 
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = saket-K53SM/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012

Re-format filesystem in /app/hadoop/tmp/dfs/name ? (Y or N) y
Format aborted in /app/hadoop/tmp/dfs/name
12/08/01 23:53:09 INFO namenode.NameNode: SHUTDOWN_MSG: 
SHUTDOWN_MSG: Shutting down NameNode at saket-K53SM/127.0.1.1

Any help would be appreciated.

I think this is a very silly issue: type an uppercase "Y" instead of a lowercase "y" (it has to be uppercase).
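Since the re-format prompt is case-sensitive, you can either answer it interactively with an uppercase Y, or pipe the answer in. A sketch, assuming the same Hadoop 1.x install path as in the question (this cannot be tested without a live Hadoop installation):

```shell
cd /usr/local/hadoop
# Answer the case-sensitive "Re-format filesystem ... ? (Y or N)" prompt with uppercase Y
echo Y | bin/hadoop namenode -format
```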

Have you tried:

  • stopping the namenode
  • stopping the datanode
  • deleting /app/hadoop*
  • formatting the namenode
  • starting the datanode and namenode again
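The steps above can be sketched as a shell sequence. Note this is destructive (it wipes all HDFS data), and the paths are assumptions taken from this question: a Hadoop 1.x install in /usr/local/hadoop and hadoop.tmp.dir under /app/hadoop/tmp. It cannot be run without a live cluster:

```shell
cd /usr/local/hadoop
bin/stop-all.sh                       # stop namenode, datanode and the MapReduce daemons
rm -rf /app/hadoop/tmp/*              # delete the old filesystem state (destroys HDFS data!)
echo Y | bin/hadoop namenode -format  # re-format; the prompt only accepts an uppercase Y
bin/start-all.sh                      # bring all the daemons back up
```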

After stopping all the daemons (e.g. with stop-all.sh), delete the data directory that holds the namenode's temporary files. Once the data directory is deleted, start the Hadoop daemons again, e.g. with start-all.sh. The path of the "data" directory is the value of the hadoop.tmp.dir property in conf/core-site.xml.


I think this will solve your problem.
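To find out where that data directory actually is, you can read hadoop.tmp.dir straight out of core-site.xml. A self-contained sketch (it writes a sample file to /tmp so it can run anywhere; on a real cluster, point grep at conf/core-site.xml instead):

```shell
# Sample core-site.xml with the layout assumed in this thread;
# your real file lives at conf/core-site.xml under the Hadoop install dir
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
</configuration>
EOF

# Find the <name> line for hadoop.tmp.dir, take the following <value> line,
# and strip the surrounding XML tags
grep -A1 '<name>hadoop.tmp.dir</name>' /tmp/core-site-sample.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

With the sample file above this prints `/app/hadoop/tmp`, which is the directory to clear before re-formatting.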

Could you look in your NameNode logs and paste any relevant error/warning messages? Could you run jps and check that all the processes are up?

I don't know why this was downvoted. I did the same thing (I typed a lowercase
y
instead of an uppercase
Y
) and it did not reformat the filesystem. That's right, it only formats on an uppercase "Y". Clearly Molenks is no idiot! Thanks for the kind words, Jespan.