HBase Master won't start


I am running HBase on a CDH 5.7.0 cluster. After several months of running without any problems, the HBase service went down and now it is impossible to start the HBase master (1 master and 4 region servers).

When I try to start it, at some point the machine hangs, and the last thing I can see in the master log is:

2016-10-24 12:17:15,150 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005528.log
2016-10-24 12:17:15,152 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005528.log after 2ms
2016-10-24 12:17:15,177 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005529.log
2016-10-24 12:17:15,179 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005529.log after 2ms
2016-10-24 12:17:15,394 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005530.log
2016-10-24 12:17:15,397 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005530.log after 3ms
2016-10-24 12:17:15,405 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005531.log
2016-10-24 12:17:15,409 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005531.log after 3ms
2016-10-24 12:17:15,414 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.SocketException: No buffer space available
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3499)
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:838)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:374)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:742)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:232)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
    at org.apache.hadoop.hbase.protobuf.generated.ProcedureProtos$ProcedureWALHeader.parseDelimitedFrom(ProcedureProtos.java:3870)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readHeader(ProcedureWALFormat.java:138)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.open(ProcedureWALFile.java:76)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLog(WALProcedureStore.java:1006)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs(WALProcedureStore.java:969)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:300)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:509)
    at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1175)
    at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1097)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:681)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:187)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1756)
    at java.lang.Thread.run(Thread.java:745)
2016-10-24 12:17:15,427 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /xxx.xxx.xxx.xxx:50010 for block, add to deadNodes and continue. java.net.SocketException: No buffer space available
java.net.SocketException: No buffer space available
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3499)
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:838)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:374)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:742)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:232)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
    at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
    at org.apache.hadoop.hbase.protobuf.generated.ProcedureProtos$ProcedureWALHeader.parseDelimitedFrom(ProcedureProtos.java:3870)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFormat.readHeader(ProcedureWALFormat.java:138)
    at org.apache.hadoop.hbase.procedure2.store.wal.ProcedureWALFile.open(ProcedureWALFile.java:76)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLog(WALProcedureStore.java:1006)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.initOldLogs(WALProcedureStore.java:969)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:300)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:509)
    at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1175)
    at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1097)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:681)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:187)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1756)
    at java.lang.Thread.run(Thread.java:745)
2016-10-24 12:17:15,436 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /xxx.xxx.xxx.xxx:50010 for BP-813663273-xxx.xxx.xxx.xxx-1460963038761:blk_1079056868_5316127
2016-10-24 12:17:15,442 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005532.log
2016-10-24 12:17:15,444 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005532.log after 2ms
2016-10-24 12:17:15,669 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recover lease on dfs file hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005533.log
2016-10-24 12:17:15,672 INFO org.apache.hadoop.hbase.util.FSHDFSUtils: Recovered lease, attempt=0 on file=hdfs://namenode:8020/hbase/MasterProcWALs/state-00000000000000005533.log after 2ms 
I'm afraid something in the WALProcedureStore is corrupted, but I don't know how to keep digging to find the problem. Any clues? Can I restart the master without it having to load the previous (possibly corrupted) state?

EDIT

I just saw this, which I think is the same problem that is happening to me. Would it be safe to simply delete everything under /hbase/MasterProcWALs without losing the old data stored in HBase?



Thanks. The WAL, or Write-Ahead Log, is an HBase mechanism that makes it possible to recover modifications to the data when everything crashes. Basically, every write to HBase is recorded in the WAL beforehand, so if the system crashes while the data has not yet been persisted, HBase is able to recreate those writes from the WAL.
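
As a concrete illustration of that write-ahead contract, here is a minimal sketch using the HBase 1.x Java client (the version line bundled with CDH 5.x); the table name "events" and the column names are hypothetical. Every mutation carries a Durability setting that controls how it is written to the WAL before being applied:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WalDurabilityExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("events"))) { // hypothetical table

            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));

            // SYNC_WAL (the usual default) appends the edit to the region server's WAL
            // before the put is acknowledged; SKIP_WAL trades that safety for speed and
            // would lose the edit if the region server crashed before a MemStore flush.
            put.setDurability(Durability.SYNC_WAL);

            table.put(put);
        }
    }
}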

This helped me to better understand the whole process:

The WAL is the lifeline that is needed when disaster strikes. Similar to a BIN log in MySQL, it records all changes to the data. This is important in case something happens to the primary storage. So if the server crashes, it can effectively replay that log to get everything back to where the server should have been just before the crash. It also means that if writing the record to the WAL fails, the whole operation must be considered a failure.

Let's look at a high-level view of how this is done in HBase. First, the client initiates an action that modifies data. This is currently a call to put(Put), delete(Delete), or incrementColumnValue() (sometimes abbreviated as "incr"). Each of these modifications is wrapped into a KeyValue object instance and sent over the wire using RPC calls. The calls go (ideally batched) to the HRegionServer that serves the affected regions. Once the payload, the said KeyValue, arrives, it is routed to the HRegion responsible for the affected row. The data is written to the WAL and then put into the MemStore of the actual Store that holds the record. That also pretty much describes the write path of HBase.
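
As a rough sketch of the client-side calls mentioned above (put, delete, incrementColumnValue), again assuming the HBase 1.x Java client and a hypothetical "events" table; each call ends up as WAL entries plus MemStore updates on the region server that hosts the row:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WritePathExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = connection.getTable(TableName.valueOf("events"))) { // hypothetical table

            byte[] cf = Bytes.toBytes("cf");

            // put(Put): wrapped into KeyValue/Cell instances, shipped over RPC to the
            // HRegionServer, written to its WAL and then to the region's MemStore.
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(cf, Bytes.toBytes("name"), Bytes.toBytes("value"));
            table.put(put);

            // incrementColumnValue() ("incr"): the counter update also goes through the WAL.
            long counter = table.incrementColumnValue(Bytes.toBytes("row-1"), cf, Bytes.toBytes("hits"), 1L);
            System.out.println("counter is now " + counter);

            // delete(Delete): deletes are just another kind of WAL-logged mutation.
            table.delete(new Delete(Bytes.toBytes("row-1")));
        }
    }
}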

Eventually, when the MemStore reaches a certain size, or after a specific amount of time, the data is asynchronously persisted to the file system. In between, the data is held volatile in memory. And if the HRegionServer hosting that memory crashes, the data is lost... but for the existence of what is the topic of this post, the WAL!
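
For reference, a hedged sketch of the two knobs that paragraph alludes to, assuming the HBase 1.x client API: the size threshold is governed by hbase.hregion.memstore.flush.size, and a flush can also be requested explicitly through the Admin API (table name again hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MemStoreFlushExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Region servers flush a region's MemStore to an HFile once it crosses this
        // threshold (128 MB by default in the 1.x line).
        long flushSize = conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        System.out.println("MemStore flush threshold: " + flushSize + " bytes");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // A flush can also be requested explicitly, e.g. before maintenance, so that
            // as little data as possible lives only in MemStores and WALs.
            admin.flush(TableName.valueOf("events")); // hypothetical table
        }
    }
}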

The problem here was that the WAL ended up holding thousands of logs. Every time the master tried to become active, it had that many different logs to recover the lease on and read... which ended up putting too much load on the namenode: it ran out of TCP buffer space and everything crashed.
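
If you suspect you are in the same situation, counting the procedure WAL files the master will have to lease and read on startup is a quick check. Here is a sketch using the Hadoop FileSystem API, assuming the default /hbase root directory (adjust for your hbase.rootdir):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CountMasterProcWals {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml/hdfs-site.xml from the classpath
        Path procWals = new Path("/hbase/MasterProcWALs"); // default location under hbase.rootdir

        try (FileSystem fs = FileSystem.get(procWals.toUri(), conf)) {
            FileStatus[] logs = fs.listStatus(procWals);
            // Each of these files is leased, opened and read by the master on startup;
            // thousands of them mean thousands of near-simultaneous DataNode connections.
            System.out.println("Procedure WAL files: " + logs.length);
        }
    }
}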

To be able to start the master, I had to manually delete the logs under /hbase/MasterProcWALs and /hbase/WALs. After doing this, the master was able to become active and the HBase cluster came back online.

EDIT:


As Ankit Singhai pointed out, deleting the logs in /hbase/WALs will result in data loss. Deleting only the logs in /hbase/MasterProcWALs should be fine.
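
For completeness, a hedged sketch of that cleanup using the Hadoop FileSystem API, again assuming the default /hbase root directory; stop the master first, and leave /hbase/WALs untouched:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanMasterProcWals {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path procWals = new Path("/hbase/MasterProcWALs"); // default location under hbase.rootdir

        try (FileSystem fs = FileSystem.get(procWals.toUri(), conf)) {
            // Only the procedure-store logs are removed; /hbase/WALs, which holds the
            // region servers' write-ahead logs, is left alone to avoid data loss.
            for (FileStatus log : fs.listStatus(procWals)) {
                System.out.println("Deleting " + log.getPath());
                fs.delete(log.getPath(), false); // false = non-recursive, these are plain files
            }
        }
    }
}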

You can try this solution. Removing /hbase/WALs will result in data loss, and it is not required for the above error either; removing /hbase/MasterProcWALs should fix the master startup issue.

Thanks for this useful answer, but agreed with @AnkitSinghal: you should differentiate between /hbase/MasterProcWALs, which is used by the HBase master to store DDL operations, and /hbase/WALs, which is used by the region servers to store data.