
Java NameNode daemon startup error - Java, Linux, Unix, Hadoop, MapReduce


My goal is to launch the NameNode daemon. I need to work with the HDFS file system: copy files from the local file system, create folders in HDFS, and start the NameNode on the port specified in the configuration/conf/core-site.xml file. I ran the script

./hadoop namenode
and received the following messages:

2013-02-17 12:29:37,493 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = one/192.168.1.8
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-02-17 12:29:38,325 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-17 12:29:38,400 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-17 12:29:38,427 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-17 12:29:39,509 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-17 12:29:39,542 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-17 12:29:39,633 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-17 12:29:39,635 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-17 12:29:39,704 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-02-17 12:29:39,708 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-02-17 12:29:42,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-02-17 12:29:42,737 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-17 12:29:42,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-17 12:29:42,937 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-17 12:29:42,940 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-17 12:29:45,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-17 12:29:46,229 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2013-02-17 12:29:46,836 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-02-17 12:29:47,133 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-02-17 12:29:47,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-17 12:29:47,163 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-02-17 12:29:47,479 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-17 12:29:47,480 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 6294 msecs
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-02-17 12:29:47,919 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 430 msec
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 6 secs.
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-02-17 12:29:47,920 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-02-17 12:29:48,198 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 129 msec
2013-02-17 12:29:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 129 msec processing time, 129 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-02-17 12:29:48,280 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-02-17 12:29:48,280 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-17 12:29:48,711 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort2000 registered.
2013-02-17 12:29:48,836 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort2000 registered.
2013-02-17 12:29:48,865 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: one/192.168.1.8:2000
2013-02-17 12:30:23,264 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-17 12:30:25,326 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-17 12:30:25,727 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-17 12:30:25,997 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-17 12:30:26,269 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.BindException: Address already in use
2013-02-17 12:30:26,442 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
2013-02-17 12:30:26,445 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0 
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.Server: Stopping server on 2000
2013-02-17 12:30:26,446 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-02-17 12:30:26,616 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:722)
2013-02-17 12:30:26,761 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:344)
    at sun.nio.ch.Net.bind(Net.java:336)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:581)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:445)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:353)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2013-02-17 12:30:26,784 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at one/192.168.1.8
************************************************************/
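The log shows that the NameNode's RPC server bound successfully ("Namenode up at: one/192.168.1.8:2000"), but startup then failed when the embedded Jetty HTTP server tried to open the web UI port 50070 and hit java.net.BindException: Address already in use. A quick way to confirm that something else is holding the port is to probe it; the sketch below assumes a Linux host with bash, and the `port_in_use` helper is my own illustration, not part of Hadoop:

```shell
#!/usr/bin/env bash
# Probe a local TCP port using bash's /dev/tcp pseudo-device.
# Returns 0 (success) if something is listening, non-zero otherwise.
# (port_in_use is a hypothetical helper for illustration, not a Hadoop tool.)
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 50070; then
  echo "port 50070 busy"   # likely a stale NameNode (or its web server) from an earlier run
else
  echo "port 50070 free"
fi
```

If the port is busy, `jps` will list any Hadoop daemons still running under the current JVM user; stopping the stale process (in Hadoop 1.x, for example, via bin/stop-dfs.sh) and restarting the NameNode is usually enough.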
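If port 50070 genuinely has to stay occupied by another service, the NameNode's web UI can be moved to a different port in conf/hdfs-site.xml. This is a sketch for Hadoop 1.x, where the relevant property is dfs.http.address; the value 50071 below is just an example of a free port, not a required choice:

```xml
<!-- conf/hdfs-site.xml: move the NameNode web UI off the conflicting port.
     dfs.http.address is the Hadoop 1.x property name; 50071 is an arbitrary
     example port that should be free on the host. -->
<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50071</value>
</property>
```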