Hadoop DataNode starts, but the NameNode does not


After some effort, I finally got Hadoop running on a pseudo-distributed node: one namenode and one jobtracker were working fine (at http://localhost:50070 and http://localhost:50030).

Yesterday I tried to restart my namenode, datanode, etc. with the following commands:

$ hadoop namenode -format
$ start-all.sh

and jps gave me the following output:

17148 DataNode
17295 SecondaryNameNode
17419 JobTracker
17669 Jps
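As an aside, whether the NameNode is actually up can be checked mechanically from the jps output. A minimal sketch, simulating the jps output above since the real check on a live cluster would just be `jps | grep -E ' NameNode$'`:

```shell
# Simulated jps output (on a live cluster: jps | grep -E ' NameNode$')
jps_output='17148 DataNode
17295 SecondaryNameNode
17419 JobTracker
17669 Jps'

# The pattern ' NameNode$' (with a leading space) avoids matching
# SecondaryNameNode, which also ends in "NameNode".
if echo "$jps_output" | grep -qE ' NameNode$'; then
  echo "NameNode running"
else
  echo "NameNode missing"
fi
```

Here the check reports the NameNode as missing, matching what jps shows.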
The namenode does not seem to want to start anymore... and the jobtracker dies a few seconds later.

Note that I did not reboot my computer. I have already tried the solutions given in the following threads, but none of them helped.

Below is the namenode log, which contains a series of errors. I have no idea how to solve my problem.

2013-08-16 09:02:21,647 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.lan/192.168.1.94
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_25
************************************************************/
2013-08-16 09:02:21,839 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-16 09:02:21,868 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-16 09:02:21,871 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-08-16 09:02:22,098 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-16 09:02:22,103 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-08-16 09:02:22,110 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-08-16 09:02:22,111 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932118528
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2013-08-16 09:02:22,140 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=rlk
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-08-16 09:02:22,174 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-08-16 09:02:22,189 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-16 09:02:22,271 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-08-16 09:02:22,320 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-08-16 09:02:22,321 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-08-16 09:02:22,363 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/rlk/hduser/dfs/name/current/fsimage
2013-08-16 09:02:22,364 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-08-16 09:02:22,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-08-16 09:02:22,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes loaded in 0 seconds.
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /home/rlk/hduser/dfs/name/current/edits, reached end of edit log Number of transactions found: 0.  Bytes read: 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/home/rlk/hduser/dfs/name/current/edits) ...
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/home/rlk/hduser/dfs/name/current/edits):
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Padding position  = -1 (-1 means padding not found)
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Edit log length   = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Read length       = 4
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Corruption length = 0
2013-08-16 09:02:22,376 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-08-16 09:02:22,382 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /home/rlk/hduser/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-08-16 09:02:22,387 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/rlk/hduser/dfs/name/current/fsimage of size 109 bytes saved in 0 seconds.
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,553 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-08-16 09:02:22,933 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 776 msecs
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension              = 30000
2013-08-16 09:02:22,935 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 21 msec
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2013-08-16 09:02:22,956 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-08-16 09:02:22,962 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-08-16 09:02:22,972 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-08-16 09:02:22,974 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-08-16 09:02:22,983 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-08-16 09:02:23,026 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-08-16 09:02:23,029 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort8020 registered.
2013-08-16 09:02:23,030 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort8020 registered.
2013-08-16 09:02:23,037 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost.localdomain/127.0.0.1:8020
2013-08-16 09:02:23,195 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-08-16 09:02:23,306 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-08-16 09:02:23,318 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-08-16 09:02:23,329 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-08-16 09:02:23,331 INFO org.mortbay.log: jetty-6.1.26
2013-08-16 09:02:23,386 INFO org.mortbay.log: Extract jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs to /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08/webapp
2013-08-16 09:02:25,171 WARN org.mortbay.log: failed jsp: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,215 WARN org.mortbay.log: failed org.mortbay.jetty.webapp.WebAppContext@12305d34{/,jar:file:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/jre/lib/ext/hadoop-core-1.2.1.jar!/webapps/hdfs}: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,225 WARN org.mortbay.log: failed ContextHandlerCollection@25370a40: java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
2013-08-16 09:02:25,226 ERROR org.mortbay.log: Error starting handlers
java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
    at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:99)
    at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:440)
    at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:263)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
    at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:736)
    at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
    at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
    at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
    at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
    at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
    at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
    at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
    at org.mortbay.jetty.Server.doStart(Server.java:224)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:638)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
Caused by: java.lang.ClassNotFoundException: javax.servlet.jsp.JspFactory
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 27 more
2013-08-16 09:02:25,307 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-08-16 09:02:25,307 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:rlk cause:java.io.IOException: Problem in starting http server. Server handlers failed
2013-08-16 09:02:25,308 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2013-08-16 09:02:25,308 ERROR org.mortbay.log: EXCEPTION 
java.lang.NullPointerException
    at org.apache.jasper.servlet.JspServlet.destroy(JspServlet.java:282)
    at org.mortbay.jetty.servlet.ServletHolder.destroyInstance(ServletHolder.java:318)
    at org.mortbay.jetty.servlet.ServletHolder.doStop(ServletHolder.java:289)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.servlet.ServletHandler.doStop(ServletHandler.java:185)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
    at org.mortbay.jetty.servlet.SessionHandler.doStop(SessionHandler.java:125)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
    at org.mortbay.jetty.handler.ContextHandler.doStop(ContextHandler.java:592)
    at org.mortbay.jetty.webapp.WebAppContext.doStop(WebAppContext.java:537)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.handler.HandlerCollection.doStop(HandlerCollection.java:169)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.mortbay.jetty.handler.HandlerWrapper.doStop(HandlerWrapper.java:142)
    at org.mortbay.jetty.Server.doStop(Server.java:283)
    at org.mortbay.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:76)
    at org.apache.hadoop.http.HttpServer.stop(HttpServer.java:688)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:604)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-16 09:02:25,336 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-08-16 09:02:25,337 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:724)
2013-08-16 09:02:25,339 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,375 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/rlk/hduser/dfs/name/current/edits
2013-08-16 09:02:25,403 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
2013-08-16 09:02:25,411 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-08-16 09:02:25,412 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Problem in starting http server. Server handlers failed
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:662)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2013-08-16 09:02:25,413 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.lan/192.168.1.94
************************************************************/
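In a log this long, the quickest orientation is to pull out the first ERROR line; everything after it (the "Problem in starting http server" messages and the shutdown) cascades from that first failure, which here is the NoClassDefFoundError for javax/servlet/jsp/JspFactory. A minimal sketch, using a short excerpt of the log above written to a stand-in file rather than the real file under the Hadoop logs directory:

```shell
# Write a short excerpt of the namenode log to a stand-in file
# (the real file lives under the Hadoop logs directory).
cat > namenode-excerpt.log <<'EOF'
2013-08-16 09:02:23,331 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-08-16 09:02:25,226 ERROR org.mortbay.log: Error starting handlers
java.lang.NoClassDefFoundError: javax/servlet/jsp/JspFactory
EOF

# -m1 stops at the first match, i.e. the first ERROR-level line.
grep -m1 ' ERROR ' namenode-excerpt.log
```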
My configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- core-site.xml -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/rlk/hduser</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
I also tried the following alternative settings for fs.default.name in core-site.xml:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/rlk/hduser</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/90000</value>
  </property>

as well as the generic form:

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:port</value>
</property>
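One detail worth checking in the fs.default.name values above: the value is parsed as a URI, so the port must follow a colon in the authority part. A value like hdfs://localhost/90000 puts "90000" into the path, not the port. A quick illustration with Python's urllib (this is only a generic URI parser, not Hadoop's actual one, and hdfs://localhost:9000 is just an example of a well-formed value):

```python
from urllib.parse import urlparse

good = urlparse("hdfs://localhost:9000")   # port sits in the authority
bad = urlparse("hdfs://localhost/90000")   # "90000" lands in the path

print(good.hostname, good.port)  # localhost 9000
print(bad.hostname, bad.port)    # localhost None
print(bad.path)                  # /90000
```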