Java hangs indefinitely when trying to insert data into HBase


While inserting data into HBase, I found that after a large number of writes (~100,000,000), the inserts simply hang (0 writes/sec in the UI). The master and region servers keep running, and the Java HBase client process also appears to be alive.

All I can see are the log entries below. What could be causing this, and how can it be fixed?

hbase-master.log

2015-11-17 02:16:52,007 DEBUG [htable-pool279-t1] ipc.AbstractRpcClient: Use SIMPLE authentication for service ClientService, sasl=false
2015-11-17 02:16:52,007 DEBUG [htable-pool279-t1] ipc.AbstractRpcClient: Connecting to server2.example.com/<IP>:16020
2015-11-17 02:17:17,966 DEBUG [ProcedureExecutorTimeout] procedure2.ProcedureExecutor$CompletedProcedureCleaner: No completed procedures to cleanup.
2015-11-17 02:17:47,966 DEBUG [ProcedureExecutorTimeout] procedure2.ProcedureExecutor$CompletedProcedureCleaner: No completed procedures to cleanup.
2015-11-17 02:18:17,966 DEBUG [ProcedureExecutorTimeout] procedure2.ProcedureExecutor$CompletedProcedureCleaner: No completed procedures to cleanup.
2015-11-17 02:18:47,966 DEBUG [ProcedureExecutorTimeout] procedure2.ProcedureExecutor$CompletedProcedureCleaner: No completed procedures to cleanup.
region-server.log

2015-11-17 01:24:08,833 INFO  [MemStoreFlusher.1] regionserver.HStore: Added hdfs://server0.example.com:9000/hbase/data/default/MYTABLE/323da4642dc634afdbbc1fef03568a04/node_info/49b6b7c247c64cf49b75414f30cc1646, entries=302884, sequenceid=91538227, filesize=31.6 M
2015-11-17 01:24:08,833 INFO  [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~67.84 MB/71133312, currentsize=0 B/0 for region MYTABLE,,1447717440847.323da4642dc634afdbbc1fef03568a04. in 579ms, sequenceid=91538227, compaction requested=true
2015-11-17 01:24:53,525 INFO  [regionserver/server1.example.com/<IP>:16020.logRoller] wal.FSHLog: Rolled WAL /hbase/WALs/server1.example.com,16020,1447666006438/server1.example.com%2C16020%2C1447666006438.default.1447741493458 with entries=126566, filesize=26.02 MB; new WAL /hbase/WALs/server1.example.com,16020,1447666006438/server1.example.com%2C16020%2C1447666006438.default.1447745093512
2015-11-17 01:24:53,526 INFO  [regionserver/server1.example.com/<IP>:16020.logRoller] wal.FSHLog: Archiving hdfs://server0.example.com:9000/hbase/WALs/server1.example.com,16020,1447666006438/server1.example.com%2C16020%2C1447666006438.default.1447741081730 to hdfs://server0.example.com:9000/hbase/oldWALs/server1.example.com%2C16020%2C1447666006438.default.1447741081730
2015-11-17 01:24:53,527 INFO  [regionserver/server1.example.com/9.12.235.213:16020.logRoller] wal.FSHLog: Archiving hdfs://server0.example.com:9000/hbase/WALs/server1.example.com,16020,1447666006438/server1.example.com%2C16020%2C1447666006438.default.1447741493458 to hdfs://server0.example.com:9000/hbase/oldWALs/server1.example.com%2C16020%2C1447666006438.default.1447741493458
2015-11-17 01:26:47,759 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=7.78 MB, freeSize=6.21 GB, max=6.21 GB, blockCount=10, accesses=1953643, hits=30700, hitRatio=1.57%, , cachingAccesses=30779, cachingHits=30171, cachingHitsRatio=98.02%, evictions=7919, evicted=598, evictedPerRun=0.07551458477973938
2015-11-17 01:31:47,759 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=7.78 MB, freeSize=6.21 GB, max=6.21 GB, blockCount=10, accesses=1953643, hits=30700, hitRatio=1.57%, , cachingAccesses=30779, cachingHits=30171, cachingHitsRatio=98.02%, evictions=7949, evicted=598, evictedPerRun=0.07522958517074585
2015-11-17 01:36:47,759 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=7.78 MB, freeSize=6.21 GB, max=6.21 GB, blockCount=10, accesses=1953643, hits=30700, hitRatio=1.57%, , cachingAccesses=30779, cachingHits=30171, cachingHitsRatio=98.02%, evictions=7979, evicted=598, evictedPerRun=0.07494673877954483

It turned out the problem had nothing to do with good old HBase. The dreadful, unexplained pauses while inserting data occurred because my insert code was reading a CSV file larger than 5 GB with opencsv's CSVReader. (The issue is similar to one described elsewhere.) I simply changed the insert code to read the file with a BufferedReader, and the latency disappeared.
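The change can be sketched as follows. This is a minimal illustration, not my exact insert code: it streams the file line by line with a BufferedReader instead of opencsv's CSVReader, and the naive comma split assumes the CSV has no quoted fields containing commas (opencsv handles quoting; a plain split does not). The `parseLine` helper and the in-memory `StringReader` input are illustrative stand-ins for the real file and HBase `Put` batching.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class StreamingCsvRead {

    // Naive CSV field split; assumes no quoted commas (unlike opencsv).
    static String[] parseLine(String line) {
        return line.split(",", -1);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for new FileReader("huge.csv") so the sketch is self-contained.
        String csv = "a,1\nb,2\nc,3\n";

        List<String[]> rows = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new StringReader(csv))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // In the real code: build a Put from the fields and add it to a batch,
                // flushing the batch to the HBase Table periodically.
                rows.add(parseLine(line));
            }
        }
        System.out.println("rows read: " + rows.size());
    }
}
```

Reading line by line this way keeps memory usage flat regardless of file size, which is what removed the multi-gigabyte-file stalls in my case.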

2015-11-17T00:25:48.005-0600: [GC (Allocation Failure) 2015-11-17T00:25:48.005-0600: [ParNew: 307577K->1134K(309696K), 0.0038813 secs] 1265221K->962084K(1735184K), 0.0039946 secs] [Times: user=0.06 sys=0.00, real=0.00 secs]
2015-11-17T00:25:49.891-0600: [GC (Allocation Failure) 2015-11-17T00:25:49.891-0600: [ParNew: 276462K->1354K(309696K), 0.0025507 secs] 1237412K->962304K(1735184K), 0.0026612 secs] [Times: user=0.04 sys=0.00, real=0.00 secs]
2015-11-17T00:25:51.727-0600: [GC (Allocation Failure) 2015-11-17T00:25:51.727-0600: [ParNew: 276682K->34368K(309696K), 0.0050290 secs] 1237632K->997653K(1735184K), 0.0051420 secs] [Times: user=0.10 sys=0.00, real=0.01 secs]
2015-11-17T00:25:53.553-0600: [GC (Allocation Failure) 2015-11-17T00:25:53.553-0600: [ParNew: 309696K->1598K(309696K), 0.0040175 secs] 1272981K->966446K(1735184K), 0.0041283 secs] [Times: user=0.07 sys=0.00, real=0.00 secs]
2015-11-17T00:25:55.334-0600: [GC (Allocation Failure) 2015-11-17T00:25:55.334-0600: [ParNew: 276926K->34368K(309696K), 0.0048517 secs]1241774K->1001404K(1735184K), 0.0049688 secs] [Times: user=0.10 sys=0.00, real=0.00 secs]