Hadoop: Connection refused in HBase shell when connecting HBase to HDFS


I am trying to connect my HBase to HDFS. My HDFS namenode (bin/hdfs namenode) and datanode (bin/hdfs datanode) are running. I can also start HBase (sudo ./bin/start-hbase.sh) and the local region servers (sudo ./bin/local-regionservers.sh start 1 2). However, when I try to execute a command from the HBase shell, I get the following error:

cis655stu@cis655stu-VirtualBox:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.99.0-SNAPSHOT, rUnknown, Sat Aug  9 08:59:57 EDT 2014

hbase(main):001:0> list
TABLE                                                                                                    
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-01-19 13:33:07,179 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ERROR: Connection refused

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'

Here are my HBase and Hadoop configuration files:

HBase-site.xml

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>

    <!--for pseudo-distributed execution-->
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <property>
      <name>hbase.master.wait.on.regionservers.mintostart</name>
      <value>1</value>
    </property>
      <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/teaching/14f-cis655/tmp/zk-deploy</value>
      </property>

    <!--for enabling collection of traces
    -->
    <property>
      <name>hbase.trace.spanreceiver.classes</name>
      <value>org.htrace.impl.LocalFileSpanReceiver</value>
    </property>
    <property>
      <name>hbase.local-file-span-receiver.path</name>
      <value>/teaching/14f-cis655/tmp/server-htrace.out</value>
    </property>
    </configuration>

Hdfs-site.xml

<configuration>
<property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/datanode</value>
 </property>
 <property>
    <name>hadoop.trace.spanreceiver.classes</name>
    <value>org.htrace.impl.LocalFileSpanReceiver</value>
  </property>
  <property>
    <name>hadoop.local-file-span-receiver.path</name>
    <value>/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/logs/htrace.out</value>
  </property>
</configuration>

Core-site.xml

<configuration>
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>
</configuration>


Please check that your HDFS is available from the shell:

  $ hdfs dfs -ls /hbase
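If that listing also fails with "Connection refused", the NameNode is probably not reachable at the address HBase uses (hbase.rootdir points at hdfs://localhost:9000/hbase). A quick way to verify this, sketched here assuming standard Linux tools (jps ships with the JDK, netstat with most distributions):

  $ jps                                    # NameNode, DataNode, HMaster, HRegionServer should all be listed
  $ hdfs getconf -confKey fs.defaultFS     # the filesystem URI the Hadoop client actually resolves
  $ netstat -tln | grep 9000               # is anything listening on port 9000?
  $ hdfs dfs -ls hdfs://localhost:9000/    # talk to the NameNode on the exact address from hbase.rootdir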
Also make sure that all of the environment variables are set in your hdfs-env.sh file:

HADOOP_CONF_LIB_NATIVE_DIR="/hadoop/lib/native"
HADOOP_OPTS="-Djava.library.path=/hadoop/lib"
HADOOP_HOME=/hadoop
YARN_HOME=/hadoop
HBASE_HOME=/hbase
HADOOP_HDFS_HOME=/hadoop
HBASE_MANAGES_ZK=true
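These variables are normally exported so that the start scripts and their child JVMs can see them; a minimal sketch using the install paths from this question (the /hadoop and /hbase values above are just the answerer's placeholders; HBASE_MANAGES_ZK is usually set in hbase-env.sh, the Hadoop variables in hadoop-env.sh or your shell profile):

export HADOOP_HOME=/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0
export HBASE_HOME=/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT
export HADOOP_CONF_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export YARN_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HBASE_MANAGES_ZK=true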
Do you run Hadoop and HBase as the same OS user? If you use separate users, check that the HBase user is allowed to access HDFS.

Make sure you have copies of (or symlinks to) the hdfs-site.xml and core-site.xml files in the ${HBASE_HOME}/conf directory.
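For example (a sketch, assuming the default Hadoop 2.x layout where the configuration files live under $HADOOP_HOME/etc/hadoop):

  $ ln -s $HADOOP_HOME/etc/hadoop/core-site.xml $HBASE_HOME/conf/core-site.xml
  $ ln -s $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml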

Also, the fs.default.name option is deprecated for YARN (although it should still work); you should consider using fs.defaultFS instead.

Do you use ZooKeeper? You specified the hbase.zookeeper.property.dataDir option, but not hbase.zookeeper.quorum and the other important options. Please read the documentation for more information.
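To check whether a ZooKeeper quorum is actually running on this machine, one quick test (a sketch, assuming the default client port 2181 and that nc is installed; HBase starts its own HQuorumPeer process when it manages ZooKeeper itself):

  $ jps | grep HQuorumPeer           # the ZooKeeper process HBase starts when HBASE_MANAGES_ZK=true
  $ echo ruok | nc localhost 2181    # a healthy ZooKeeper answers "imok"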

Please add the following options to hdfs-site.xml to make HBase work properly (replace the $HBASE_USER variable with the system user that runs HBase):


 <property>
   <name>hadoop.proxyuser.$HBASE_USER.groups</name>
   <value>*</value>
 </property>
 <property>
   <name>hadoop.proxyuser.$HBASE_USER.hosts</name>
   <value>*</value>
 </property>
 <property>
   <name>dfs.support.append</name>
   <value>true</value>
 </property>
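These settings only take effect once HDFS re-reads its configuration; a sketch of the two usual ways to do that, assuming the stock Hadoop 2.x scripts under $HADOOP_HOME/sbin:

  $ $HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/start-dfs.sh   # full restart
  $ hdfs dfsadmin -refreshSuperUserGroupsConfiguration                # or just reload the proxyuser mappings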

Did you solve this problem? If so, please share the solution.

Try deleting the directory that ZooKeeper writes its data to, then restart HBase. In my case the directory was /teaching/14f-cis655/tmp/zk-deploy.

Yes, the problem was that the ZooKeeper quorum process was not running. It was fixed once I deleted the directory where HBase keeps its temporary data. It works fine now.
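For reference, the fix described above looks roughly like this (a sketch using the paths from this question; double-check the directory before deleting it):

  $ ./bin/stop-hbase.sh                          # stop HBase and the ZooKeeper it manages
  $ rm -rf /teaching/14f-cis655/tmp/zk-deploy    # hbase.zookeeper.property.dataDir from hbase-site.xml
  $ ./bin/start-hbase.sh                         # HBase recreates the ZooKeeper data directory on startup
  $ ./bin/hbase shell                            # 'list' should now succeed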