Unable to connect to HBase using Java (in Eclipse) in the Cloudera VM


I am trying to connect to HBase using Java (in Eclipse) in the Cloudera VM, but I get the error below. I am able to run the same program from the command line (by packaging my program into a jar).

My Java program:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;
    //import org.apache.hadoop.mapred.MapTask;
    import java.io.FileWriter;
    import java.io.IOException;

    public class HbaseConnection {

      public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.addResource("/usr/lib/hbase/conf/hbase-site.xml");
        HTable table = new HTable(config, "test_table");
        byte[] columnFamily = Bytes.toBytes("colf");
        byte[] idColumnName = Bytes.toBytes("id");
        byte[] groupIdColumnName = Bytes.toBytes("g_id");
        Put put = new Put(Bytes.toBytes("testkey"));
        put.add(columnFamily, idColumnName, Bytes.toBytes("test id"));
        put.add(columnFamily, groupIdColumnName, Bytes.toBytes("test group id"));
        table.put(put);
        table.close();
      }
    }
I keep hbase-site.xml in the source folder of my Eclipse project.

hbase-site.xml:

  <property>
    <name>hbase.rest.port</name>
    <value>8070</value>
    <description>The port for the HBase REST server.</description>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://quickstart.cloudera:8020/hbase</value>
  </property>

  <property>
    <name>hbase.regionserver.ipc.address</name>
    <value>0.0.0.0</value>
  </property>

  <property>
    <name>hbase.master.ipc.address</name>
    <value>0.0.0.0</value>
  </property>

  <property>
    <name>hbase.thrift.info.bindAddress</name>
    <value>0.0.0.0</value>
  </property>

When I run the program in Eclipse, I get the error below:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:389)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:366)
    at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:247)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:188)
    at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
    at com.aig.gds.hadoop.platform.idgen.hbase.HBaseTest.main(HBaseTest.java:34)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:387)
    ... 5 more
Caused by: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
    at java.util.ServiceLoader.fail(ServiceLoader.java:224)
    at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2400)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:197)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:69)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.retrieveClusterId(HConnectionManager.java:801)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:633)
    ... 10 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
    at org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:66)
    at org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:31)
    at org.apache.hadoop.hdfs.DistributedFileSystem.<clinit>(DistributedFileSystem.java:114)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at java.lang.Class.newInstance(Class.java:374)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
    ... 26 more

Thanks in advance.

The root cause of the problem is in the stack trace:

NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations
This means that the version of your hadoop-common-*.jar is out of sync with the version of your hadoop-hdfs-*.jar, or that you have mixed versions on your classpath.
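One quick way to confirm a mixed classpath (a diagnostic sketch, not part of the original answer) is to print which jar each of the two classes involved in the failure is loaded from; `Configuration` comes from hadoop-common and `DistributedFileSystem` from hadoop-hdfs, so the two locations should name the same Hadoop version:

```java
import java.security.CodeSource;

public class ClasspathCheck {
    // Report where a class is loaded from, or note that it is absent.
    static String locate(String name) {
        try {
            Class<?> c = Class.forName(name);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            return name + " -> "
                    + (src != null ? src.getLocation() : "(bootstrap classpath)");
        } catch (ClassNotFoundException e) {
            return name + " -> not on classpath";
        }
    }

    public static void main(String[] args) {
        // The jar paths printed here should carry matching Hadoop versions;
        // if they differ, the classpath mixes hadoop-common and hadoop-hdfs
        // from different releases.
        System.out.println(locate("org.apache.hadoop.conf.Configuration"));
        System.out.println(locate("org.apache.hadoop.hdfs.DistributedFileSystem"));
    }
}
```

Run this with the exact classpath Eclipse uses for the failing program (Run Configurations shows it), not the one your command-line jar uses.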

Note that addDeprecations exists in Hadoop 2.3.0 and later:

but is missing in 2.2.0 and earlier:
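If the project pulls Hadoop via Maven, one common fix is to pin both artifacts to a single version through a shared property (a sketch; the version number below is illustrative, use whatever your CDH distribution actually ships):

```xml
<properties>
  <!-- illustrative version; match it to the Hadoop release in your VM -->
  <hadoop.version>2.3.0</hadoop.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
</dependencies>
```

With a plain Eclipse build path instead of Maven, the equivalent fix is to remove the stale hadoop-common or hadoop-hdfs jar from the project libraries so that only one matching pair remains.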