hadoop can't start ./sbin/start-dfs.sh


I have run the start script (/sbin/start-dfs.sh).

  • jps

     3098 Jps
     2492 NameNode
     2700 SecondaryNameNode
  • hadoop datanode log

     2017-02-15 15:55:12,787 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/usr/local/Cellar/hadoop/2.7.3/libexec/%3E/data/hadoop/hdfs/datanode/
    java.io.IOException: Incompatible clusterIDs in /usr/local/Cellar/hadoop/2.7.3/libexec/>/data/hadoop/hdfs/datanode: namenode clusterID = CID-4c9d5df1-10c6-45cb-9fe0-e1631e4d13e2; datanode clusterID = CID-6dc3d755-f713-4bec-a62a-c47e96dcbc0d
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:745)
    2017-02-15 15:55:12,792 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
    java.io.IOException: All specified directories are failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
        at java.lang.Thread.run(Thread.java:745)
    2017-02-15 15:55:12,793 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
    2017-02-15 15:55:12,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
    2017-02-15 15:55:14,800 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2017-02-15 15:55:14,802 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2017-02-15 15:55:14,803 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    
    

  • It looks like you have formatted the namenode on a working cluster.

    Delete the data directory and start the datanode process again on all nodes (a worked sketch of the full sequence follows the commands below):

    rm -rf <dfs.datanode.data.dir>
    
    ./sbin/hadoop-daemon.sh start datanode
    
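For reference, here is a minimal sketch of the whole recovery on a single-node setup. The clusterID mismatch in the log happens because reformatting the namenode generates a new clusterID while the datanode's storage directory still carries the old one, so the stale datanode directory has to go. The paths below are assumptions (hdfs-site.xml under etc/hadoop relative to the Hadoop install, and a placeholder datanode directory); substitute your own dfs.datanode.data.dir value before running anything destructive:

    # 1. Stop HDFS so no daemon is writing to the storage directories.
    ./sbin/stop-dfs.sh

    # 2. Find the configured datanode storage directory
    #    (value of dfs.datanode.data.dir in hdfs-site.xml).
    grep -A1 dfs.datanode.data.dir etc/hadoop/hdfs-site.xml

    # 3. Remove the stale datanode directory that still has the old clusterID.
    #    WARNING: this deletes the blocks stored on this datanode.
    rm -rf /path/to/dfs/datanode    # replace with your dfs.datanode.data.dir value

    # 4. Start HDFS again; the datanode re-registers and adopts the
    #    namenode's current clusterID.
    ./sbin/start-dfs.sh

    # 5. Verify that a DataNode process now shows up.
    jps

If jps still shows no DataNode afterwards, re-check the datanode log for the same "Incompatible clusterIDs" message to confirm the correct directory was removed.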