
Hadoop/HBase - Can't configure RegionServer with HDFS High Availability (failover)

Tags: hbase, hadoop2, failover

I am trying to build a Hadoop architecture with failover. My problem is that I can't configure the RegionServer correctly with HDFS HA. The RegionServer log shows the following error:

java.io.IOException: Port 9000 specified in URI hdfs://HAcluster:9000 but host 'HAcluster' is a logical (HA) namenode and does not use port information.
at org.apache.hadoop.hdfs.NameNodeProxies.getFailoverProxyProviderClass(NameNodeProxies.java:396)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:134)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2508)
at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:2492)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:62)
at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2543)
Before posting, I did some research on Google. Nothing I found helped, so I tried a few things:
- I tried changing the HBase version. I downloaded the latest one (0.98.17-hadoop2). No effect.
- I tried starting over from scratch, meaning: formatting HDFS, deleting the ZooKeeper metadata, deleting the znodes, and so on...
- I tried replacing hdfs://HAcluster/hbase with hdfs://MASTER1:9000/hbase on every server that runs HBase. No effect.

So I'm a bit at a loss, because even without the logical cluster name I still get the error.

PS: Everything else works as expected: the DataNodes/NodeManagers connect to the active NameNode/ResourceManager (checked via the web UI). The HBase master also runs fine, and the backup master is taken into account (also checked via the web UI). That's another reason I don't understand why I get this error.


I hope I have given you all the elements you need to properly understand my problem.

Problem solved. It was in core-site.xml... In the fs.defaultFS property, the value was hdfs://HAcluster:9000 instead of hdfs://HAcluster. After changing fs.defaultFS, do I need to format the NameNode?
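For reference, a minimal core-site.xml sketch with the corrected property (only fs.defaultFS is shown; HAcluster is the nameservice defined in the hdfs-site.xml below):

<configuration>
<property>
    <name>fs.defaultFS</name>
    <!-- Logical nameservice only, no port: with HDFS HA the client resolves
         the active NameNode through the configured failover proxy provider -->
    <value>hdfs://HAcluster</value>
</property>
</configuration>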
hdfs-site.xml:

<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>the number of copies of each file in the file system</description>
</property>
<!-- High Availability Hadoop -->
<property>
    <name>dfs.nameservices</name>
    <value>HAcluster</value> <!-- HAcluster consists of SUNRAY009IV06 = MASTER 1 and SUNRAY009IV07 = MASTER 2 -->
    <final>true</final>
    <description>The name of your cluster which consists of Master 1 and Master 2</description>
</property>
<property>
    <name>dfs.ha.namenodes.HAcluster</name>
    <value>SUNRAY009IV06,SUNRAY009IV07</value> <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2 -->
    <final>true</final>
    <description>The namenodes in your cluster</description>
</property>
<property>
    <name>dfs.namenode.rpc-address.HAcluster.SUNRAY009IV06</name>
    <value>SUNRAY009IV06:9000</value> <!--SUNRAY009IV06 = MASTER 1 -->
    <description>the RPC address of your Master 1</description>
</property>
<property>
    <name>dfs.namenode.rpc-address.HAcluster.SUNRAY009IV07</name>
    <value>SUNRAY009IV07:9000</value> <!--SUNRAY009IV07 = MASTER 2 -->
    <description>the RPC address of your Master 2</description>
</property>
<property>
    <name>dfs.namenode.http-address.HAcluster.SUNRAY009IV06</name>
    <value>SUNRAY009IV06:50070</value> <!--SUNRAY009IV06 = MASTER 1 -->
    <description>the HTTP address of your Master 1</description>
</property>
<property>
    <name>dfs.namenode.http-address.HAcluster.SUNRAY009IV07</name>
    <value>SUNRAY009IV07:50070</value> <!--SUNRAY009IV07 = MASTER 2 -->
    <description>the HTTP address of your Master 2</description>
</property>
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://SUNRAY009IV06:8485;SUNRAY009IV07:8485;SUNRAY009IV08:8485/HAcluster</value>
    <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2, SUNRAY009IV08 = SLAVE 1 -->
    <description>the location of the shared storage directory</description>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.HAcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>the Java class that HDFS clients use to contact the Active NameNode</description>
</property>
<property> 
    <name>dfs.permissions</name>
    <value>false</value>
    <description>disable hdfs permissions</description>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
    <description>The backup is defined as automatic</description>
</property>
<property>
    <name>ha.zookeeper.quorum</name>
    <value>SUNRAY009IV09:2181,SUNRAY009IV11:2181,SUNRAY009IV13:2181</value>
    <description>The list of your Zookeeper servers in your Hadoop architecture</description>
    <!--SUNRAY009IV09 = ZOOKEEPER 1, SUNRAY009IV11 = ZOOKEEPER 2, SUNRAY009IV13 = ZOOKEEPER 3 -->
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
    <description> method which will be used to fence the Active NameNode during a failover. 
    sshfence = SSH to the Active NameNode and kill the process</description>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoopuser/.ssh/id_rsa</value>
    <description>List of SSH private key files</description>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>3000</value>
    <description>timeout</description>
</property>
</configuration>
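Once HA is in place, a quick way to confirm which NameNode is active is the haadmin CLI, using the NameNode IDs from dfs.ha.namenodes.HAcluster above:

hdfs haadmin -getServiceState SUNRAY009IV06   # prints "active" or "standby"
hdfs haadmin -getServiceState SUNRAY009IV07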

yarn-site.xml:

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>HAyarn</value>
    <!--HAyarn consists of SUNRAY009IV06 = MASTER 1 and SUNRAY009IV07 = MASTER 2 -->
    <description>The name of the Resource Manager</description>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>to enable YARN logs</description>
</property>
<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
    <description>Where to store logs in HDFS</description>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run</description>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    <description>mapreduce_shuffle service to implement</description>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>HAyarn:8031</value>
    <!--HAyarn consists of SUNRAY009IV06 = MASTER 1 and SUNRAY009IV07 = MASTER 2 -->
    <description>host is the hostname of the ResourceManager and the port is the port on which the NodeManagers contact the ResourceManager</description>
</property>

<!-- High Availability YARN -->
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
</property>
<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>HAyarn</value>
</property>
<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>SUNRAY009IV06</value>
    <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2-->
    <description>The hostname of MASTER 1</description>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>SUNRAY009IV07</value>
    <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2-->
    <description>The hostname of MASTER 2</description>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>SUNRAY009IV06:8088</value>
    <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2-->
    <description>The Web application address of MASTER 1</description>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>SUNRAY009IV07:8088</value>
    <!--SUNRAY009IV06 = MASTER 1, SUNRAY009IV07 = MASTER 2-->
    <description>The Web application address of MASTER 2</description>
</property>
<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>SUNRAY009IV09:2181,SUNRAY009IV11:2181,SUNRAY009IV13:2181</value>
    <description>The list of your Zookeeper servers in your Hadoop architecture</description>
    <!--SUNRAY009IV09 = ZOOKEEPER 1, SUNRAY009IV11 = ZOOKEEPER 2, SUNRAY009IV13 = ZOOKEEPER 3 -->
</property>
<property>
    <name>yarn.client.failover-proxy-provider.HAyarn</name>
    <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
    <description>the class used for the YARN failover</description>
</property>
</configuration>
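Similarly, the ResourceManager HA state can be checked with the rmadmin CLI, using the rm-ids defined above:

yarn rmadmin -getServiceState rm1   # prints "active" or "standby"
yarn rmadmin -getServiceState rm2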
hbase-site.xml:

<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://HAcluster/hbase</value> <!--HAcluster consists of SUNRAY009IV06 = MASTER 1 and SUNRAY009IV07 = MASTER 2 -->
    <description>The directory shared by RegionServers (slaves)</description>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in</description>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>SUNRAY009IV09,SUNRAY009IV11,SUNRAY009IV13</value>
    <description>The list of your Zookeeper servers in your Hadoop architecture</description>
    <!--SUNRAY009IV09 = ZOOKEEPER 1, SUNRAY009IV11 = ZOOKEEPER 2, SUNRAY009IV13 = ZOOKEEPER 3 -->
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.</description>
</property>
<property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase</value>
    <description>The root znode that will contain all the znodes created/used by HBase</description>
</property>
</configuration>
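Before starting HBase, it is worth confirming that the logical URI in hbase.rootdir resolves from an HBase node (this assumes the corrected core-site.xml and the hdfs-site.xml above are visible to the HDFS client):

hdfs dfs -ls hdfs://HAcluster/hbase   # should list the HBase root directory with no port error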
hbase-env.sh:

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=false
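One more thing worth checking for this kind of error: HBase can only resolve the HAcluster nameservice if the Hadoop client configuration is on its classpath. A common approach is to link core-site.xml and hdfs-site.xml into HBase's conf directory (the Hadoop path below is a hypothetical example, adjust it to your installation):

# Hypothetical install paths; adjust to your setup
ln -s /opt/hadoop/etc/hadoop/core-site.xml $HBASE_HOME/conf/core-site.xml
ln -s /opt/hadoop/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml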