Reverse scan error in Hadoop HBase


I get an exception when performing a reverse scan on an HBase table; something goes wrong while seeking to the previous row. Any suggestions would be appreciated. The error log is shown below:

org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Mon May 02 10:59:29 CEST 2016, RpcRetryingCaller{globalStartTime=1462179569123, pause=100, retries=35}, java.io.IOException: java.io.IOException: Could not seekToPreviousRow StoreFileScanner[HFileScanner for reader reader=file:/data/hbase-1.1.2/data/hbase/data/default/table/c8cdadcd1247e04720972ab5a25597a7/outlinks/3eac358ffb9d43018221fbddf9274ffd, compression=none, cacheConf=blockCache=LruBlockCache{blockCount=149348, currentSize=9919772624, freeSize=2866589744, maxSize=12786362368, heapSize=9919772624, minSize=12147044352, minFactor=0.95, multiSize=6073522176, multiFactor=0.5, singleSize=3036761088, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, firstKey=Danmark2010-01-26T21:02:50Z/outlinks:.dk/1459765153334/Put, lastKey=Motorveje i Danmark2010-08-24T14:03:07Z/outlinks:\xC3\x98ver\xC3\xB8d/1459766037971/Put, avgKeyLen=70, avgValueLen=20, entries=49195292, length=4896832843, cur=Hj\xC3\xA6lp:Sandkassen2010-11-02T21:40:44Z/outlinks:Adriaterhav/1459771842796/Put/vlen=20/seqid=0] to key Hj\xC3\xA6lp:Sandkassen2010-11-02T21:34:14Z/outlinks:\xC4\x8Crnomelj/1459771842779/Put/vlen=20/seqid=0
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:457)
    at org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.next(ReversedKeyValueHeap.java:136)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: On-disk size without header provided is 196736, but block header contains 65582. Block offset: -1, data starts with: DATABLK*\x00\x01\x00.\x00\x01\x00\x1A\x00\x00\x00\x00\x8D\xA08\xE2\x01\x00\x00@\x00\x00\x01\x00
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:500)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:85)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:673)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
    ... 13 more
The source code is:

Table WHtable = connection.getTable(TableName.valueOf("table"));
Scan lowerClosestRowScan = new Scan();
lowerClosestRowScan.addFamily(Bytes.toBytes("outlinks"));
lowerClosestRowScan.setStartRow(Bytes.toBytes("A row"));
lowerClosestRowScan.setReversed(true); // scan backwards to fetch the closest row at or before the start row
ResultScanner lowerClosestRowScanner = WHtable.getScanner(lowerClosestRowScan);
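Conceptually, a reversed scan starting at a given row asks for the greatest row key less than or equal to that start key (and then continues backwards). A minimal sketch of that floor-lookup semantics over lexicographically sorted row keys, using a plain TreeMap rather than HBase (row keys borrowed from the log above, values made up):

```java
import java.util.TreeMap;

public class FloorLookupSketch {
    public static void main(String[] args) {
        // Hypothetical row keys, sorted lexicographically as HBase stores them.
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("Danmark2010-01-26T21:02:50Z", "v1");
        rows.put("Hj\u00e6lp:Sandkassen2010-11-02T21:34:14Z", "v2");
        rows.put("Motorveje i Danmark2010-08-24T14:03:07Z", "v3");

        // floorKey returns the greatest key <= the probe key, which is what
        // the first Result of a reversed scan with this start row would be.
        String hit = rows.floorKey("Hj\u00e6lp:Sandkassen2010-11-02T21:40:44Z");
        System.out.println(hit); // prints the 21:34:14Z Sandkassen row
    }
}
```

This is exactly the seek that seekToPreviousRow is performing inside the region server when it fails in the stack trace above.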

Comments: What is your HBase version? — Hi @Whitefret, it is hbase-1.1.2. I found a ticket about this, but it has not been resolved yet :/ It may be fixed in HBase 1.1.3, though... see the related post in the first link. — Thanks @Whitefret, I will look for a workaround.