Hadoop command: hadoop fs -ls is throwing a "Retrying connect to server" error?


When I type hadoop fs -ls, I get the following error message:

deepak@deepak:~$ hadoop fs -ls
14/03/19 12:18:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/03/19 12:18:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
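The message means the client got no answer on localhost:9000, which is the NameNode RPC address in a default fs.default.name setup. A quick way to confirm whether anything is listening there, as a sketch using bash's /dev/tcp redirection (this probe is not part of the original post):

```shell
# Probe the NameNode RPC port (9000 assumed, matching the error message).
# A refused connection means no NameNode is listening on that port.
if (exec 3<>/dev/tcp/localhost/9000) 2>/dev/null; then
    echo "port 9000 is open - something is listening"
else
    echo "port 9000 is closed - the NameNode is not listening"
fi
```

If the port is closed, the NameNode process itself is the thing to investigate, not the client.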
The output of hadoop namenode -format is:

deepak@deepak:~/programs/hadoop-1.2.0/bin$ hadoop namenode -format
14/03/19 14:11:22 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = deepak/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
14/03/19 14:11:22 INFO util.GSet: Computing capacity for map BlocksMap
14/03/19 14:11:22 INFO util.GSet: VM type       = 32-bit
14/03/19 14:11:22 INFO util.GSet: 2.0% max memory = 932184064
14/03/19 14:11:22 INFO util.GSet: capacity      = 2^22 = 4194304 entries
14/03/19 14:11:22 INFO util.GSet: recommended=4194304, actual=4194304
14/03/19 14:11:23 INFO namenode.FSNamesystem: fsOwner=deepak
14/03/19 14:11:23 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/19 14:11:23 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/19 14:11:23 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/19 14:11:23 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/19 14:11:23 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/03/19 14:11:23 INFO namenode.NameNode: Caching file names occuring more than 10 times 
14/03/19 14:11:23 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/03/19 14:11:24 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-deepak/dfs/name/current/edits
14/03/19 14:11:24 INFO common.Storage: Storage directory /tmp/hadoop-deepak/dfs/name has been successfully formatted.
14/03/19 14:11:24 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at deepak/127.0.1.1
************************************************************/

Can you check your NameNode's status? Run jps on the NameNode machine and see whether a NameNode process is listed. This is most likely happening because the NameNode is down.
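The check suggested above can be sketched as follows. The grep pattern and the restart hint are assumptions based on standard Hadoop 1.x tooling (jps ships with the JDK; start-dfs.sh lives in the bin directory shown in the question):

```shell
# jps prints one "<pid> <ClassName>" line per Java process. The trailing
# anchor plus leading space avoids matching SecondaryNameNode.
if jps 2>/dev/null | grep -q ' NameNode$'; then
    echo "NameNode is running"
else
    echo "NameNode is NOT running"
    # Restart the HDFS daemons (Hadoop 1.x script, path from the question):
    #   ~/programs/hadoop-1.2.0/bin/start-dfs.sh
fi
```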

deepak@deepak:~/programs/hadoop-1.2.0/bin$ jps
2718 DataNode
3298 TaskTracker
3058 JobTracker
2962 SecondaryNameNode
4560 Jps

I don't think my NameNode is running. What should I do now?

Check the NameNode's logs to see if there are any errors. They should be in the $HADOOP_HOME/logs/ directory on the NameNode machine.

@vefthym: the logs actually helped me track it down. Thank you!

I'm having the same problem. What was the solution?