Java — just upgraded my Hadoop cluster to 2.4.1, but nothing works
After configuring the nodes and running start-all.sh, every node reports that it has started, but when I look at the DataNode on a slave node I see the following in its log:
2014-08-05 06:41:05,790 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-08-05 06:41:05,791 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8010: starting
2014-08-05 06:41:14,604 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
2014-08-05 06:41:14,711 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 4796@hadoop03
2014-08-05 06:41:14,997 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-633751026-127.0.1.1-1407152865456
2014-08-05 06:41:14,997 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-08-05 06:41:15,025 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
2014-08-05 06:41:15,211 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=298887827;bpid=BP-633751026-127.0.1.1-1407152865456;lv=-55;nsInfo=lv=-56;cid=CID-a343ba30-a7b$
2014-08-05 06:41:15,231 WARN org.apache.hadoop.hdfs.server.common.Util: Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2014-08-05 06:41:15,233 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /hadoop/hdfs/namenode/current, StorageType: DISK
2014-08-05 06:41:15,293 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-08-05 06:41:15,296 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1407257140296 with interval 21600000
2014-08-05 06:41:15,296 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-633751026-127.0.1.1-1407152865456
2014-08-05 06:41:15,297 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/hdfs/namenode/current...
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-633751026-127.0.1.1-1407152865456 on /hadoop/hdfs/namenode/curren$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-633751026-127.0.1.1-1407152865456: 188ms
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/hdfs/$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-633751026-127.0.1.1-1407152865456 on volume /hadoop/$
2014-08-05 06:41:15,484 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
2014-08-05 06:41:15,486 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-633751026-127.0.1.1-1407152865456 (Datanode Uuid null) service to /192.168.0.5:8020 beginning handshake $
2014-08-05 06:41:30,664 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-633751026-127.0.1.1-1407152865456 (Datanode Uuid null) service to /192.168.0.$
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:806)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4240)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:992)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28057)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
Can anyone offer any insight into what is going on in my cluster? If needed, I can provide the full configuration files and details.

After updating Hadoop to 2.4.1 on every node, did you also update the configuration files? If so, can you share the log files and the configuration files?
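While checking the configs, note the WARN near the top of the log: Hadoop wants /hadoop/hdfs/namenode specified as a URI. A sketch of how that could look in hdfs-site.xml (property names are from stock Hadoop 2.x defaults; the datanode path here is an assumption, since the log oddly shows the DataNode using the namenode directory):

```xml
<!-- hdfs-site.xml: storage directories given as file: URIs, as the WARN requests -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///hadoop/hdfs/datanode</value>
</property>
```

The same files must be pushed to every node, since a DataNode pointed at the NameNode's directory (as this log suggests) will carry stale metadata.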
I think the problem was in initializing the hadoop data files property in core-site.xml. I built Hadoop 2.4.1 from source and made one set of files that I distributed to the cluster; all the settings on the slave nodes match the master node as closely as possible. It worked once I removed the filesystem on each DataNode and formatted the NameNode. I think your DataNodes still refer to the old NameNode.
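The recovery described above can be sketched as a shell sequence. The Hadoop commands and the data-directory path are assumptions based on a stock 2.x layout, so adjust them to your cluster; the block below operates on a throwaway sandbox directory so it is safe to run as written. Note that formatting the NameNode destroys all HDFS data.

```shell
# Sandbox stand-in for the DataNode storage dir; for real use, point
# DATA_DIR at the value of dfs.datanode.data.dir on EACH DataNode.
DATA_DIR="${DATA_DIR:-$(mktemp -d)}"
mkdir -p "$DATA_DIR/current"
touch "$DATA_DIR/current/VERSION"   # stands in for the stale metadata

# 1. stop-dfs.sh                    # on the master, before touching disks
# 2. Wipe the stale block storage on each DataNode so the old
#    clusterID / layout version (-55 vs -56 above) is discarded:
rm -rf "${DATA_DIR:?}"/*
# 3. hdfs namenode -format          # on the NameNode: DESTROYS ALL HDFS DATA
# 4. start-dfs.sh

ls -A "$DATA_DIR"                   # prints nothing: the directory is empty
```

After the restart, the DataNodes register against the freshly formatted NameNode instead of the old one.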