Hadoop HA: active NN keeps crashing and automatic failover doesn't work

I am running Hadoop 2.2.0 with HA. Here is my configuration.

core-site.xml

<property>
    <name>ha.zookeeper.quorum</name>
    <value>zk01.bi.lietou.inc:2181,zk02.bi.lietou.inc:2181,zk03.bi.lietou.inc:2181</value>
</property>
<property>
    <name>ipc.client.connect.timeout</name>
    <value>20000</value>
</property>
After the NN on 129 went down, the NN on 133 remained in standby instead of becoming active. Standby NN log:

2015-09-26 22:09:27,651 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "lynx-bi-30-133.liepin.inc/192.168.30.133"; destination host is: "lynx001-bi-30-129.liepin.inc":2020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy11.rollEditLog(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:139)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:268)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:310)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:995)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
Just before this error, the log started flooding with:

2015-09-26 22:03:00,941 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:00,941 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 192.168.30.131:35882 Call#7495335 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:01,135 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:01,135 INFO org.apache.hadoop.ipc.Server: IPC Server handler 45 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 192.168.30.131:35886 Call#7495346 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:06,050 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2015-09-26 22:03:06,050 INFO org.apache.hadoop.ipc.Server: IPC Server handler 19 on 2020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 192.168.30.131:35891 Call#1 Retry#0: error: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
JournalNode (JN) log:

2015-09-26 22:09:44,395 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from 192.168.30.129:30015 Call#157803 Retry#0: output error
2015-09-26 22:09:45,400 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8485 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2577)
    at org.apache.hadoop.ipc.Server.access$2200(Server.java:122)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1011)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1076)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2104)

I tried increasing the IPC timeout to 60 seconds, but it had no effect.
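
One way to narrow this down is to ask each NameNode which HA state it reports, and to promote the standby by hand as a stopgap. This is only a sketch, assuming the NameNode IDs nn1 and nn2 from the hdfs-site.xml quoted below (nn1 is the 133 machine); --forcemanual is required because dfs.ha.automatic-failover.enabled is true:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# if nn1 on 133 still reports "standby" after 129 is down, force it active:
hdfs haadmin -transitionToActive --forcemanual nn1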

I believe it is hitting dfs.qjournal.start-segment.timeout.ms, whose default is 20000 (20 seconds).

However, you may also need to tune other settings, such as dfs.qjournal.write-txns.timeout.ms.

That said, it is better to fix the underlying infrastructure issue than to change these defaults. There seem to be many properties that define how the NameNodes manage their various connections and timeouts to the JournalManager. Keep in mind that when a quorum write to the JournalNodes does not complete within these timeouts, the active NameNode treats its required shared edits directory as failed and aborts, which is why the active NN keeps crashing instead of failing over cleanly.
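
For reference, these are the qjournal client timeouts involved; as far as I know they all default to 20000 ms in this release (the exact defaults here are my assumption, so double-check them for your Hadoop version):

<property>
    <name>dfs.qjournal.start-segment.timeout.ms</name>
    <value>20000</value>
</property>
<property>
    <name>dfs.qjournal.select-input-streams.timeout.ms</name>
    <value>20000</value>
</property>
<property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>20000</value>
</property>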

In my case, I added the following custom properties to hdfs-site.xml:

<property>
    <name>dfs.nameservices</name>
    <value>lynxcluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.lynxcluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.lynxcluster.nn1</name>
    <value>192.168.30.133:2020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.lynxcluster.nn2</name>
    <value>192.168.30.129:2020</value>
</property>
<property>
    <name>dfs.namenode.http-address.lynxcluster.nn1</name>
    <value>192.168.30.133:2070</value>
</property>
<property>
    <name>dfs.namenode.http-address.lynxcluster.nn2</name>
    <value>192.168.30.129:2070</value>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://192.168.30.134:8485;192.168.30.135:8485;192.168.30.136:8485/mycluster</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.lynxcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.qjournal.start-segment.timeout.ms</name>
    <value>90000</value>
</property>
<property>
    <name>dfs.qjournal.select-input-streams.timeout.ms</name>
    <value>90000</value>
</property>
<property>
    <name>dfs.qjournal.write-txns.timeout.ms</name>
    <value>90000</value>
</property>
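
After restarting the NameNodes you can verify which value is actually in effect; for example, hdfs getconf reads the key straight from the configuration:

hdfs getconf -confKey dfs.qjournal.write-txns.timeout.ms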

I also added the following property to core-site.xml:

<property>
    <name>ipc.client.connect.timeout</name>
    <value>90000</value>
</property>

So far this seems to have alleviated the issue.

Did you find a solution?