Failed to start namenode: java.net.BindException: Address already in use


I am trying to start the namenode, but it keeps failing with: Failed to start namenode. java.net.BindException: Address already in use

netstat -a | grep 9000

returns

tcp        0      0 *:9000                  *:*                     LISTEN
tcp6       0      0 [::]:9000               [::]:*                  LISTEN
Is this normal, or do I need to kill one of these processes?

The namenode was up and running after installation, then suddenly stopped working after I ran a WordCount job. I have tried restarting the VM and formatting the namenode several times, but neither helped.
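The exception itself is just plain TCP behavior: only one process can hold a listening socket on a given address and port, and the JDK surfaces the failed bind as java.net.BindException. A minimal Python sketch (using an arbitrary demo port, 19000, rather than the real 9000) reproduces the same situation:

```python
import errno
import socket

PORT = 19000  # arbitrary demo port for illustration, not the real 9000

# The first listener grabs the port, standing in for whatever process
# netstat shows already listening.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", PORT))
first.listen(1)

# The second bind stands in for the starting NameNode's RPC server.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", PORT))
except OSError as err:
    # On Linux this is errno.EADDRINUSE (98), which Java reports as
    # java.net.BindException: Address already in use
    print(err.errno == errno.EADDRINUSE)  # True
finally:
    second.close()
    first.close()
```

So the question reduces to finding which process currently owns port 9000 (e.g. `netstat -tlnp` as root shows the PID) and deciding whether it is a stale daemon that should be stopped.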

hdfs-site.xml looks like this:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///usr/local/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///usr/local/hdfs/datanode</value>
</property>

The namenode log looks like this:
2015-07-10 00:27:02,513 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-07-10 00:27:02,538 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-07-10 00:27:07,549 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-07-10 00:27:09,284 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-10 00:27:09,285 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-07-10 00:27:09,339 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2015-07-10 00:27:09,340 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2015-07-10 00:27:12,475 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-07-10 00:27:16,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-07-10 00:27:17,491 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-07-10 00:27:17,702 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-07-10 00:27:17,876 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-07-10 00:27:17,941 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-07-10 00:27:18,441 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-07-10 00:27:18,525 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-07-10 00:27:18,747 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-07-10 00:27:18,760 INFO org.mortbay.log: jetty-6.1.26
2015-07-10 00:27:20,832 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-07-10 00:27:23,404 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-10 00:27:23,416 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-10 00:27:24,034 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-07-10 00:27:24,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-07-10 00:27:24,773 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-07-10 00:27:24,776 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-07-10 00:27:24,852 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-07-10 00:27:24,854 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Jul 10 00:27:24
2015-07-10 00:27:24,867 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-07-10 00:27:24,883 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:24,900 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-07-10 00:27:24,901 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2015-07-10 00:27:25,563 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-07-10 00:27:25,638 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = joe (auth:SIMPLE)
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-07-10 00:27:25,658 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-07-10 00:27:26,354 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-07-10 00:27:26,354 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:26,355 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-07-10 00:27:26,355 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-07-10 00:27:26,993 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-07-10 00:27:27,064 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-07-10 00:27:27,069 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:27,070 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-07-10 00:27:27,070 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2015-07-10 00:27:27,083 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-07-10 00:27:27,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-07-10 00:27:27,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-07-10 00:27:27,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-07-10 00:27:27,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2015-07-10 00:27:27,403 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hdfs/namenode/in_use.lock acquired by nodename 11822@joe-virtual-machine
2015-07-10 00:27:27,882 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/hdfs/namenode/current
2015-07-10 00:27:28,446 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-07-10 00:27:28,758 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-07-10 00:27:28,784 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/hdfs/namenode/current/fsimage_0000000000000000000
2015-07-10 00:27:28,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@fd6cd8 expecting start txid #1
2015-07-10 00:27:28,840 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002
2015-07-10 00:27:28,912 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002' to transaction ID 1
2015-07-10 00:27:29,079 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002 of size 42 edits # 2 loaded in 0 seconds
2015-07-10 00:27:29,164 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-07-10 00:27:29,174 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2015-07-10 00:27:29,854 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-07-10 00:27:29,855 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2611 msecs
2015-07-10 00:27:33,403 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:9000
2015-07-10 00:27:33,490 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-07-10 00:27:33,625 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-07-10 00:27:33,628 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2015-07-10 00:27:33,639 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 57
2015-07-10 00:27:33,642 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /usr/local/hdfs/namenode/current/edits_0000000000000000003-0000000000000000004
2015-07-10 00:27:33,781 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-07-10 00:27:33,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-07-10 00:27:33,885 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-07-10 00:27:33,905 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-07-10 00:27:33,907 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-07-10 00:27:33,907 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-07-10 00:27:33,970 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [localhost:9000] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:425)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:343)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:672)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:810)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:794)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:408)
... 13 more
2015-07-10 00:27:34,004 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-07-10 00:27:34,007 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at joe-virtual-machine/192.168.197.146
************************************************************/
After changing fs.default.name in core-site.xml to port 9001, the namenode starts:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9001</value>
</property>
joe@joe-virtual-machine:~$ jps
3387  ResourceManager
3935  Jps
2850  NameNode
3163  SecondaryNameNode
2981  DataNode
3517  NodeManager

joe@joe-virtual-machine:~$ netstat -ap | grep 9001
tcp        0      0 localhost:9001          *:*                     LISTEN     
tcp        0      0 localhost:54460         localhost:9001          ESTABLISHED
tcp        0      0 localhost:9001          localhost:54460         ESTABLISHED
tcp        0      0 localhost:54598         localhost:9001          TIME_WAIT  
joe@joe-virtual-machine:~$ netstat -ap | grep 9000
tcp        0      0 *:9000                  *:*                     LISTEN     
tcp6       0      0 [::]:9000               [::]:*                  LISTEN
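When switching ports like this, a quick programmatic check that nothing is already accepting connections on the candidate port avoids a second BindException. A small sketch of such a check (the function name `port_in_use` is my own, not anything from Hadoop; it detects listening processes, not bound-but-idle sockets):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on success, an errno value on failure,
        # so 0 means a listener answered on that port.
        return s.connect_ex((host, port)) == 0

# e.g. before editing core-site.xml:
# if port_in_use(9001): pick a different port
```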