HBase on Hadoop does not connect in distributed mode


Hello, I am trying to set up HBase (hbase-0.98.12-hadoop2) on Hadoop (hadoop-2.7.0).
Hadoop is running at localhost:50070 and works fine.

My hbase-site.xml looks like this:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>

<!--  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>-->

  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
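Note that the hbase.rootdir value must agree with the NameNode address configured in Hadoop's core-site.xml. A minimal sketch of the matching core-site.xml entry, assuming the NameNode really does listen on localhost:9000 (as the netstat output below suggests):

```xml
<!-- core-site.xml: the authority here (localhost:9000) must match the
     hdfs://localhost:9000 prefix used in hbase.rootdir above -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```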
I am using OpenJDK. Here is the output of sudo netstat -plten | grep java (the jps command shows the corresponding processes):

bredgelinux@bredgelinux-desktop:~$ sudo netstat -plten | grep java
tcp        0      0 0.0.0.0:8042      0.0.0.0:*   LISTEN   0   29563   3356/java
tcp        0      0 0.0.0.0:50090     0.0.0.0:*   LISTEN   0   27575   3063/java
tcp        0      0 0.0.0.0:46766     0.0.0.0:*   LISTEN   0   29555   3356/java
tcp        0      0 0.0.0.0:50070     0.0.0.0:*   LISTEN   0   25124   2723/java
tcp        0      0 0.0.0.0:8088      0.0.0.0:*   LISTEN   0   29579   3224/java
tcp        0      0 0.0.0.0:13562     0.0.0.0:*   LISTEN   0   29562   3356/java
tcp        0      0 0.0.0.0:8030      0.0.0.0:*   LISTEN   0   31542   3224/java
tcp        0      0 0.0.0.0:8031      0.0.0.0:*   LISTEN   0   29571   3224/java
tcp        0      0 0.0.0.0:8032      0.0.0.0:*   LISTEN   0   31546   3224/java
tcp        0      0 0.0.0.0:8033      0.0.0.0:*   LISTEN   0   29581   3224/java
tcp        0      0 0.0.0.0:8040      0.0.0.0:*   LISTEN   0   31536   3356/java
tcp        0      0 127.0.0.1:9000    0.0.0.0:*   LISTEN   0   28260   2723/java

The DataNode log file shows:

2015-05-22 14:21:33,980 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
2015-05-22 14:21:33,985 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-05-22 14:21:33,985 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2015-05-22 14:21:35,073 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-05-22 14:21:36,073 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-05-22 14:21:36,391 INFO org.apache.hadoop.hdfs.server.common.Storage: DataNode version: -56 and NameNode layout version: -60
2015-05-22 14:21:36,443 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop_store/hdfs/datanode/in_use.lock acquired by nodename 4902@bredgelinux-desktop
2015-05-22 14:21:36,457 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop_store/hdfs/datanode: namenode clusterID = CID-654b4574-5929-4de9-ac12-f47de7f9fd75; datanode clusterID = CID-f70f0a9a-da72-4c70-b453-35227ceca6ce
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
    at java.lang.Thread.run(Thread.java:745)
2015-05-22 14:21:36,459 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2015-05-22 14:21:36,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2015-05-22 14:21:38,461 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-05-22 14:21:38,474 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2015-05-22 14:21:38,476 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at bredgelinux-desktop/127.0.1.1
************************************************************/
java.net.ConnectException: Call From bredgelinux-desktop/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused

This error occurs when the hostname resolves to a loopback IP address. Follow these steps to correct it:

Step 1: Remove the line containing 127.0.1.1 from /etc/hosts.

Step 2: Restart the Hadoop and HBase processes.
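Step 1 can be scripted. A minimal sketch, assuming the stock Ubuntu layout of /etc/hosts; the sample file below is fabricated for illustration, and on the real machine you would edit /etc/hosts itself with sudo after taking a backup:

```shell
# Fabricated stand-in for /etc/hosts; on the real system, back up and
# edit /etc/hosts itself (with sudo).
printf '127.0.0.1\tlocalhost\n127.0.1.1\tbredgelinux-desktop\n' > hosts.sample

# Comment the 127.0.1.1 alias out rather than deleting it, so the
# change is easy to revert.
sed -i 's/^127\.0\.1\.1/# 127.0.1.1/' hosts.sample

cat hosts.sample
```

After the edit, restart HDFS and HBase so the daemons re-resolve the hostname.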

I think (because I have seen a similar error in the DataNode log before) that you deleted the DataNode data directory and restarted it.

Try the following: stop HDFS (DataNodes and NameNode), delete the NameNode and DataNode data directories, format the NameNode, and then start the cluster again.
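The "Incompatible clusterIDs" failure in the log can also be resolved without wiping the data directories: copy the NameNode's clusterID into the DataNode's VERSION file. A minimal sketch, assuming the storage layout from the log (/usr/local/hadoop_store/hdfs/...); the local hdfs/ directory and VERSION contents below are fabricated for illustration:

```shell
# Fabricated copies of the VERSION files from the log; on the real
# system they live under /usr/local/hadoop_store/hdfs/{namenode,datanode}/current/.
mkdir -p hdfs/namenode/current hdfs/datanode/current
echo 'clusterID=CID-654b4574-5929-4de9-ac12-f47de7f9fd75' > hdfs/namenode/current/VERSION
echo 'clusterID=CID-f70f0a9a-da72-4c70-b453-35227ceca6ce' > hdfs/datanode/current/VERSION

nn_id=$(grep '^clusterID=' hdfs/namenode/current/VERSION)
dn_id=$(grep '^clusterID=' hdfs/datanode/current/VERSION)
if [ "$nn_id" != "$dn_id" ]; then
    # Adopt the NameNode's clusterID instead of deleting the DataNode's
    # blocks; stop HDFS first before doing this on a live cluster.
    sed -i "s/^clusterID=.*/$nn_id/" hdfs/datanode/current/VERSION
fi

grep '^clusterID=' hdfs/datanode/current/VERSION
```

With the IDs aligned, restart HDFS; a full `hdfs namenode -format` plus directory wipe is only needed if you are prepared to lose the existing blocks.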

HBase is now running on Hadoop. However, the "datanode" and "namenode" directories could not be accessed; perhaps that is why Hadoop could not reach them.
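If the storage directories themselves are unreadable, ownership and permissions are the usual cause. A minimal sketch, assuming the directories should belong to the account that launches Hadoop (the `store/` path below is a stand-in for /usr/local/hadoop_store, and the right owner is a guess since the question does not name the Hadoop user):

```shell
# Stand-in directories; on the real machine you would also run
#   sudo chown -R <hadoop-user> /usr/local/hadoop_store/hdfs
# so the daemons own their storage.
mkdir -p store/hdfs/namenode store/hdfs/datanode

# The owner needs rwx on the directories so the daemons can traverse
# and write them.
chmod -R 755 store/hdfs

stat -c '%a %n' store/hdfs/datanode
```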