HBase GeoMesa BBOX query not returning all results
I'm experimenting with GeoMesa (on HBase) BBOX queries over OSM node data, and I've found that for certain regions GeoMesa does not return all of the nodes inside the bounding box. For example, I ran three queries:

BBOX(-122.0,47.4,-122.01,47.5) — output has 5477 unique features
BBOX(-122.0,47.5,-122.01,47.6) — output has 9879 unique features
BBOX(-122.0,47.4,-122.01,47.6) — output has 13374 unique features

Looking at these bounding boxes, the features from query 1 plus query 2 should equal those from query 3, but in fact the counts do not add up. While running the queries I also see the following exception in the logs:
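One caveat about the expectation itself: with inclusive BBOX semantics, a node lying exactly on the shared edge at latitude 47.5 is matched by both sub-queries, so the two sub-query counts can legitimately sum to slightly more than the combined query; they should never sum to less. Here 5477 + 9879 = 15356 versus 13374, a gap far larger than plausible edge duplication, which does suggest results are genuinely missing. A minimal sketch with synthetic points (not the real OSM data) illustrating the expected relationship:

```python
import random

def in_bbox(pt, bbox):
    """Inclusive point-in-bounding-box test; bbox = (min_x, min_y, max_x, max_y)."""
    x, y = pt
    min_x, min_y, max_x, max_y = bbox
    return min_x <= x <= max_x and min_y <= y <= max_y

# Hypothetical sample points standing in for OSM nodes (not the real data set).
random.seed(42)
points = [(random.uniform(-122.01, -122.0), random.uniform(47.4, 47.6))
          for _ in range(10000)]
# Include one point exactly on the shared edge at latitude 47.5.
points.append((-122.005, 47.5))

# Coordinates normalized to (min_x, min_y, max_x, max_y) order.
lower    = (-122.01, 47.4, -122.0, 47.5)
upper    = (-122.01, 47.5, -122.0, 47.6)
combined = (-122.01, 47.4, -122.0, 47.6)

n_lower = sum(in_bbox(p, lower) for p in points)
n_upper = sum(in_bbox(p, upper) for p in points)
n_combined = sum(in_bbox(p, combined) for p in points)
n_edge = sum(1 for (x, y) in points if y == 47.5)

# Points exactly on the shared edge are counted by both sub-queries, so:
assert n_lower + n_upper == n_combined + n_edge
```

So the invariant to check against a correct store is `count(lower) + count(upper) == count(combined) + count(shared edge)`.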
19/09/27 14:57:34 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=38583 ms ago, cancelled=false, msg=java.io.FileNotFoundException: File not present on S3
at com.amazon.ws.emr.hadoop.fs.s3.S3FSInputStream.read(S3FSInputStream.java:133)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:738)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1493)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1770)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1596)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:454)
at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:651)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:601)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:302)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:201)
at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:391)
at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:224)
at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2208)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:6112)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:6086)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2841)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2821)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2803)
at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2797)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2697)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3012)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2380)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Edit: given the additional information about the S3 exception, this suggestion no longer applies.
I would try disabling "loose bounding boxes", as described in the GeoMesa documentation. If that doesn't resolve the discrepancy, please file a bug report, ideally with steps to reproduce it.
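For reference, loose bounding boxes can be disabled when connecting to the data store. A hedged sketch of the connection parameters — the `looseBoundingBox` and `hbase.catalog` parameter names are assumptions based on the GeoMesa HBase documentation, and `osm_nodes` is a placeholder catalog name:

```
# Hypothetical GeoMesa HBase data store connection parameters.
hbase.catalog = osm_nodes
# false => GeoMesa applies an exact spatial filter to each result rather
# than trusting the index ranges alone, at some cost in query time.
looseBoundingBox = false
```

With loose bounding boxes enabled (the default), results are filtered only by the space-filling-curve index ranges, which can over-select near box edges; disabling it forces exact geometry checks.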
Thanks, that looks like an S3 consistency issue. Try running:
emrfs sync -m s3://
Then re-run the query. It is quite common for S3 and the DynamoDB table that manages HBase's S3 consistency model to get out of sync. Running this sync command as a cron job can help avoid the problem, or fix it automatically when it does occur.

I'm seeing a similar exception, as listed here: dev.locationtech.org/mhonarc/lists/geomesa-dev/msg01027.html. Why does this exception occur?

Please see the exception attached in the description @emilio lahr vivaz

I couldn't provide reproducible steps. The bug reproduces only in my catalog. When I created a new catalog and loaded the data again, I did not see a similar problem for the same bounding boxes.
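The cron-based mitigation mentioned above could look like the following crontab entry. This is a sketch: `s3://<bucket>/<hbase-root>` is a placeholder for the actual HBase root directory on S3, and the 15-minute schedule is arbitrary.

```
# Hypothetical crontab entry: periodically re-sync EMRFS metadata for the
# HBase root directory so DynamoDB and S3 do not drift out of sync.
*/15 * * * * emrfs sync s3://<bucket>/<hbase-root>
```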