
Java HBase client connection failure


I am trying to set up a simple connection between a remote HBase server and a simple Java web application.

The HBase master is up, and I can reach its web UI on port 60010.

I have reset hbase-site.xml to its default values.

Here is the code I use to try to connect:

// Imports used by this snippet (HBase 1.x client API; Logger is play.Logger)
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import play.Logger;

public void test() {
    // Point the client at the remote ZooKeeper quorum
    Configuration conf = HBaseConfiguration.create();
    conf.clear();
    conf.set("hbase.zookeeper.quorum", "<server_ip>");
    conf.set("hbase.zookeeper.property.clientPort", "2181");

    try {
        Connection connection = ConnectionFactory.createConnection(conf);
        HBaseAdmin hbaseAdmin = new HBaseAdmin(conf);

        // Create the table in HBase if it doesn't exist yet
        String barsTableName = "Sample";
        String family = "ColumnFam";
        if (!hbaseAdmin.tableExists(barsTableName)) {
            HTableDescriptor desc = new HTableDescriptor(barsTableName);
            desc.addFamily(new HColumnDescriptor(family));
            hbaseAdmin.createTable(desc);
            Logger.info("bars table created");
        }

        // Write one cell ...
        Table table = connection.getTable(TableName.valueOf(barsTableName));

        Put put = new Put(Bytes.toBytes(1));
        put.add(Bytes.toBytes(family), Bytes.toBytes("descrip"), Bytes.toBytes("MaValue"));
        table.put(put);

        // ... then read it back
        Get get = new Get(Bytes.toBytes(1));
        Result r = table.get(get);
        byte[] value = r.getValue(Bytes.toBytes(family), Bytes.toBytes("descrip"));
        String valueStr = Bytes.toString(value);
        System.out.println("GET: " + valueStr);

        connection.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
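
As an aside, new HBaseAdmin(conf) is deprecated in the 1.x client; the same create-if-missing step can be written against the Admin handle obtained from the Connection. A minimal sketch of that variant, assuming an HBase 1.x client on the classpath:

// Same table-creation step via connection.getAdmin() instead of new HBaseAdmin(conf).
// Extra imports needed: org.apache.hadoop.hbase.client.Admin and java.io.IOException.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "<server_ip>");
conf.set("hbase.zookeeper.property.clientPort", "2181");

try (Connection connection = ConnectionFactory.createConnection(conf);
     Admin admin = connection.getAdmin()) {
    TableName tableName = TableName.valueOf("Sample");
    if (!admin.tableExists(tableName)) {
        HTableDescriptor desc = new HTableDescriptor(tableName);
        desc.addFamily(new HColumnDescriptor("ColumnFam"));
        admin.createTable(desc);
    }
} catch (IOException e) {
    e.printStackTrace();
}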

Can you run hbase hbck in a terminal and check that it reports "0 inconsistencies detected"?


If you do not get "0 inconsistencies detected", your HBase is in an unstable/inconsistent state, and it looks like your table is screwed up.

Have you checked this? Yes, but it didn't help. Do you see anything in particular?
[error] org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=32, exceptions:
[error] Fri Oct 09 10:57:45 GMT+01:00 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68417: row 'Sample,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=vm-77446.localdomain,39333,1444384405265, seqNum=0
[error]
[error]         at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
[error]         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:199)
[error]         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
[error]         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
[error]         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
[error]         at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
[error]         at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
[error]         at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
[error]         at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
[error]         at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
[error]         at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
[error]         at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:281)
[error]         at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:291)
[error]         at controllers.Application.index(Application.java:39)
[error]         at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(Routes.scala:95)
[error]         at router.Routes$$anonfun$routes$1$$anonfun$applyOrElse$1$$anonfun$apply$1.apply(Routes.scala:95)
[error]         at play.core.routing.HandlerInvokerFactory$$anon$4.resultCall(HandlerInvoker.scala:136)
[error]         at play.core.routing.HandlerInvokerFactory$JavaActionInvokerFactory$$anon$14$$anon$3$$anon$1.invocation(HandlerInvoker.scala:127)
[error]         at play.core.j.JavaAction$$anon$1.call(JavaAction.scala:70)
[error]         at play.http.DefaultHttpRequestHandler$1.call(DefaultHttpRequestHandler.java:20)
[error]         at play.core.j.JavaAction$$anonfun$7.apply(JavaAction.scala:94)
[error]         at play.core.j.JavaAction$$anonfun$7.apply(JavaAction.scala:94)
[error]         at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
[error]         at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
[error]         at play.core.j.HttpExecutionContext$$anon$2.run(HttpExecutionContext.scala:40)
[error]         at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:70)
[error]         at play.core.j.HttpExecutionContext.execute(HttpExecutionContext.scala:32)
[error]         at scala.concurrent.impl.Future$.apply(Future.scala:31)
[error]         at scala.concurrent.Future$.apply(Future.scala:492)
[error]         at play.core.j.JavaAction.apply(JavaAction.scala:94)
[error]         at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:105)
[error]         at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4$$anonfun$apply$5.apply(Action.scala:105)
[error]         at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
[error]         at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:104)
[error]         at play.api.mvc.Action$$anonfun$apply$1$$anonfun$apply$4.apply(Action.scala:103)
[error]         at scala.Option.map(Option.scala:146)
[error]         at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:103)
[error]         at play.api.mvc.Action$$anonfun$apply$1.apply(Action.scala:96)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:524)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$mapM$1.apply(Iteratee.scala:524)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:560)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$flatMapM$1.apply(Iteratee.scala:560)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:536)
[error]         at play.api.libs.iteratee.Iteratee$$anonfun$flatMap$1$$anonfun$apply$13.apply(Iteratee.scala:536)
[error]         at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
[error]         at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
[error]         at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
[error]         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
[error]         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[error]         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[error]         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[error]         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[error] Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68417: row 'Sample,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=vm-77446.localdomain,39333,1444384405265, seqNum=0
[error]         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
[error]         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
[error]         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
[error]         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error]         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[error]         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[error]         at java.lang.Thread.run(Thread.java:745)
[error] Caused by: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=vm-77446.localdomain/<my_server_internet_ip>:39333]
[error]         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
[error]         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
[error]         at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:403)
[error]         at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:709)
[error]         at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:880)
[error]         at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:849)
[error]         at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1173)
[error]         at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
[error]         at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
[error]         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31751)
[error]         at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:332)
[error]         at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:187)
[error]         at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
[error]         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
[error]         ... 6 more
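
The innermost cause is a plain TCP connect timeout towards the RegionServer address that hbase:meta hands back (vm-77446.localdomain, port 39333 in this run; judging by the later hbck output, that port changes across restarts). A throwaway probe run from the client machine can show whether the advertised host/port is reachable at all; the host and port below are copied from the exception and are illustrative only:

import java.net.InetSocketAddress;
import java.net.Socket;

// Quick TCP reachability probe against the RegionServer host/port reported
// in the ConnectTimeoutException above (values are illustrative).
public class RegionServerProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // 10 s connect timeout, matching the "10000 millis timeout" in the exception
            socket.connect(new InetSocketAddress("vm-77446.localdomain", 39333), 10000);
            System.out.println("TCP connect succeeded");
        }
    }
}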
Here is the /etc/hosts file:

127.0.0.1       localhost
#127.0.0.1       vm-77446.localdomain vm-77446
#127.0.1.1      vm-77446.localdomain vm-77446


# The following lines are desirable for IPv6 capable hosts
::1                localhost ip6-localhost ip6-loopback
<an_ip6_ip>        ip6-allnodes
<an_other_ip6_ip>  ip6-allrouters

<local_ip>    vm-77446.localdomain vm-77446
#<internet_ip>  vm-77446.localdomain vm-77446
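
The HBase client looks up RegionServers by the hostname registered in ZooKeeper / hbase:meta, so what matters for the timeout above is what vm-77446.localdomain resolves to on the client machine, not only on the server. A small check sketch, with the hostname copied from the stack trace (purely illustrative):

import java.net.InetAddress;

// Print the address the client-side JVM resolves the advertised RegionServer
// hostname to; if this address is not reachable from the client, the RPCs
// time out exactly as in the stack trace above.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        InetAddress addr = InetAddress.getByName("vm-77446.localdomain");
        System.out.println("vm-77446.localdomain -> " + addr.getHostAddress());
    }
}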
Output of hbase hbck:

Summary:
  Sample is okay.
    Number of regions: 1
    Deployed on:  vm-77446.localdomain,40953,1444404662326
  hbase:meta is okay.
    Number of regions: 1
    Deployed on:  vm-77446.localdomain,40953,1444404662326
  hbase:namespace is okay.
    Number of regions: 1
    Deployed on:  vm-77446.localdomain,40953,1444404662326
0 inconsistencies detected.
Status: OK
2015-10-12 11:31:49,972 INFO  [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2015-10-12 11:31:49,972 INFO  [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1504d3a06f10013
2015-10-12 11:31:49,973 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-10-12 11:31:49,973 INFO  [main] zookeeper.ZooKeeper: Session: 0x1504d3a06f10013 closed
2015-10-12 11:31:49,973 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down