HBase CallTimeoutException


I'm running HBase 1.1.2 / Hadoop 2.7.1 / Phoenix 4.8.0. I'm trying to run a scan on a table, but the call fails with an RPC timeout.

I've changed the HBase RPC timeout to 15 minutes, and I can confirm the new value in the UI... I've also set hbase.client.scanner.timeout.period to 15 minutes in hbase-site.xml:

  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>900000</value> <!-- 900 000, 15 minutes -->
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>900000</value> <!-- 15 minutes -->
  </property>
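
As far as I understand, these values also have to be visible to the client JVM, i.e. the same hbase-site.xml needs to be on the client's classpath, not just on the servers. A quick check of what the client-side configuration actually resolves to might look like this (a throwaway sketch; the class name is arbitrary, and it assumes the HBase client jars and hbase-site.xml are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Throwaway diagnostic class; the name is arbitrary.
public class CheckClientTimeouts {
    public static void main(String[] args) {
        // HBaseConfiguration.create() loads hbase-default.xml plus any
        // hbase-site.xml found on the client classpath.
        Configuration conf = HBaseConfiguration.create();
        // If these print 60000 (the defaults), the client never sees the
        // 900000 values configured on the cluster.
        System.out.println("hbase.rpc.timeout = "
                + conf.get("hbase.rpc.timeout"));
        System.out.println("hbase.client.scanner.timeout.period = "
                + conf.get("hbase.client.scanner.timeout.period"));
    }
}

If those come back as 60000, that would match the operationTimeout=60000 in the stack trace below.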

But my client still times out after 60 seconds:

[09/05/17 12:06:51:051 CST] localhost-startStop-1-SendThread(192.168.16.7:2181)  INFO zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x25a6a6966293434 has expired, closing socket connection
[09/05/17 12:06:49:049 CST] phoenix-1-thread-436426  WARN client.ScannerCallable: Ignore, probably already closed
java.io.IOException: Call to hadoopslave3/192.168.16.5:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=41428707, waitTime=63776, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1284)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1252)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
    at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:355)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:195)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:140)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
    at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
    at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
    at org.apache.phoenix.iterate.ChunkedResultIterator$SingleChunkResultIterator.next(ChunkedResultIterator.java:179)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:139)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:97)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:69)
    at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:92)
    at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:113)
    at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:55)
    at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:91)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:114)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=41428707, waitTime=63776, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    ... 30 more
[09/05/17 12:06:49:049 CST] phoenix-1-thread-436435  WARN client.ScannerCallable: Ignore, probably already closed
java.io.IOException: Call to hadoopslave3/192.168.16.5:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=41428712, waitTime=64527, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1284)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1252)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
    at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:355)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:195)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:140)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
    at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
    at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
    at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
    at org.apache.phoenix.iterate.ChunkedResultIterator$SingleChunkResultIterator.next(ChunkedResultIterator.java:179)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:139)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:97)
    at org.apache.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:69)
    at org.apache.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:92)
    at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:113)
    at org.apache.phoenix.iterate.ChunkedResultIterator.<init>(ChunkedResultIterator.java:55)
    at org.apache.phoenix.iterate.ChunkedResultIterator$ChunkedResultIteratorFactory.newIterator(ChunkedResultIterator.java:91)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:114)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=41428712, waitTime=64527, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
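
The operationTimeout=60000 in the trace is the HBase client default, so it looks like the scan RPCs are still running with the default timeout rather than my 900000 ms. My understanding is that Phoenix also accepts these settings as JDBC connection properties and forwards them to the underlying HBase client configuration (phoenix.query.timeoutMs is Phoenix's own query timeout, also in milliseconds). A rough sketch of that, with placeholder class and table names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

// Illustrative only; the class name and MY_TABLE are placeholders.
public class PhoenixScanWithTimeouts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // HBase client timeouts, in milliseconds.
        props.setProperty("hbase.rpc.timeout", "900000");
        props.setProperty("hbase.client.scanner.timeout.period", "900000");
        // Phoenix query-level timeout, in milliseconds.
        props.setProperty("phoenix.query.timeoutMs", "900000");

        try (Connection conn = DriverManager.getConnection(
                     "jdbc:phoenix:192.168.16.7:2181", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM MY_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}

In the actual application, though, the connection goes through Spring (a JdbcTemplate over a DBCP BasicDataSource). This is the configuration: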
<bean id="phoenixJdbcTemplate"
          class="org.springframework.jdbc.core.JdbcTemplate">
        <constructor-arg ref="phoenixDataSource"/>
        <qualifier value="phoenixJdbcTemplate"></qualifier>
    </bean>


    <!--<bean id="alertPonenixImpl" class="com.eazy.eqm.dubbo.data.provider.service.BasePhoenix">-->
        <!--<property name="jdbcTemplate" ref="phoenixJdbcTemplate"/>-->
    <!--</bean>-->


    <bean id="phoenixDataSource" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="driverClassName" value="org.apache.phoenix.jdbc.PhoenixDriver"/>
        <property name="url"><value>jdbc:phoenix:192.168.16.7:2181</value></property>
        <!--<property name="username" value=""/>-->
        <!--<property name="password" value=""/>-->
        <!--<property name="initialSize" value="20"/>-->
        <!--<property name="maxActive" value="0"/>-->
        <property name="defaultAutoCommit" value="true"/>
    </bean>
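
If the timeouts do have to be supplied from the client side, I assume they could also be handed to the Phoenix driver through DBCP's connection properties, i.e. the connectionProperties property of BasicDataSource in the XML (a semicolon-separated name=value list) or addConnectionProperty() in code. A programmatic sketch of the same DataSource, just to illustrate the idea (the factory class name is arbitrary, and whether Phoenix forwards these properties into the HBase client configuration is my assumption):

import org.apache.commons.dbcp.BasicDataSource;

// Illustrative factory; programmatic equivalent of the XML bean above,
// with the timeouts passed to the Phoenix driver as connection properties.
public class PhoenixDataSourceFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.apache.phoenix.jdbc.PhoenixDriver");
        ds.setUrl("jdbc:phoenix:192.168.16.7:2181");
        // Assumption: Phoenix merges these into the HBase client configuration.
        ds.addConnectionProperty("hbase.rpc.timeout", "900000");
        ds.addConnectionProperty("hbase.client.scanner.timeout.period", "900000");
        ds.addConnectionProperty("phoenix.query.timeoutMs", "900000");
        ds.setDefaultAutoCommit(true);
        return ds;
    }
}

Is that the right way to get the 15-minute timeouts applied, or is something else capping the scan at 60 seconds?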