Hadoop HBase master stops with a "Connection refused" error

Tags: hadoop, hbase, cloudera

This happens in both pseudo-distributed and distributed mode. When I try to start HBase, all three services start initially: the master, the region server, and the quorum peer. Within a minute or so, however, the master stops. This is the trace in the logs:

2013-05-06 20:10:25,525 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 0 time(s).
2013-05-06 20:10:26,528 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 1 time(s).
2013-05-06 20:10:27,530 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 2 time(s).
2013-05-06 20:10:28,533 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 3 time(s).
2013-05-06 20:10:29,535 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 4 time(s).
2013-05-06 20:10:30,538 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 5 time(s).
2013-05-06 20:10:31,540 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 6 time(s).
2013-05-06 20:10:32,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 7 time(s).
2013-05-06 20:10:33,544 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 8 time(s).
2013-05-06 20:10:34,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: <master/master_ip>:9000. Already tried 9 time(s).
2013-05-06 20:10:34,550 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to <master/master_ip>:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1179)
        at org.apache.hadoop.ipc.Client.call(Client.java:1155)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy9.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:132)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:259)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:220)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1611)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:68)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1645)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1627)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:183)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:363)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:86)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:368)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:301)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:519)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:484)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:468)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:575)
        at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:212)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1292)
        at org.apache.hadoop.ipc.Client.call(Client.java:1121)
        ... 18 more
I have taken some steps to resolve this, without any success:

- Downgraded from distributed mode to pseudo-distributed mode. Same issue.
- Tried standalone mode. No luck.
- Used the same user (hadoop) for both Hadoop and HBase, and set up passwordless SSH for that user. Same issue.
- Edited the /etc/hosts file and changed localhost/servername as well as 127.0.0.1 to the actual IP address, following SO and other sources. Still the same issue.
- Rebooted the server.
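Since the master dies on a "Connection refused" to port 9000, the first thing worth confirming is whether the NameNode is actually listening there. A minimal probe, assuming bash on the master host (the default host and port below are placeholders for your NameNode address):

```shell
#!/bin/bash
# Probe a TCP port the way the HBase master's HDFS client would.
# Usage: ./probe.sh <host> <port>   e.g. ./probe.sh master 9000
host=${1:-localhost}
port=${2:-9000}
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "open"
else
  echo "closed"
fi
```

If this prints "closed" for the NameNode host and port, the problem is on the HDFS side (NameNode not running, or bound to a different interface), not in HBase itself.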

Here are the conf files.

hbase-site.xml

<configuration>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://<master>:9000/hbase</value>
        <description>The directory shared by regionservers.</description>
</property>

<property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
</property>

<property>
        <name>hbase.zookeeper.quorum</name>
        <value><master></value>
</property>

<property>
        <name>hbase.master</name>
        <value><master>:60000</value>
        <description>The host and port that the HBase master runs at.</description>
</property>

<property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description>
</property>

</configuration>
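The URI in hbase.rootdir must match the NameNode address that Hadoop itself advertises. A minimal core-site.xml sketch for this setup (the <master> hostname is a placeholder, as in the file above; fs.default.name is the property name used by Hadoop 0.20.x / CDH3):

```xml
<configuration>
  <property>
    <!-- Must use the same hostname and port as hbase.rootdir (hdfs://<master>:9000) -->
    <name>fs.default.name</name>
    <value>hdfs://<master>:9000</value>
  </property>
</configuration>
```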

/etc/hosts file

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

What am I doing wrong?

Hadoop version - Hadoop 0.20.2-cdh3u5
HBase version - Version 0.90.6-cdh3u5

By looking at your configuration files, I assume you are using the actual hostname in them. If so, add the hostname along with the machine's IP to the /etc/hosts file. Also make sure it matches the hostname in Hadoop's core-site.xml. Correct name resolution is vital for HBase to function properly.

If you still face any issues, follow the above steps carefully. I have tried to explain the process in detail, and hopefully you will be able to get it running if you follow all the steps closely.


By looking at your configuration files, I assume you are using the actual hostname in them. If so, add the hostname along with the machine's IP to the /etc/hosts file, and make sure it matches the hostname in Hadoop's core-site.xml. For example, the /etc/hosts file could look like this:
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost 
fe80::1%lo0 localhost
172.20.x.x  my.hostname.com
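After editing /etc/hosts, it is worth confirming that the name actually resolves as intended. A quick check, assuming getent is available (replace localhost with the hostname from your config files; in distributed mode it should resolve to the machine's real IP, not 127.0.0.1):

```shell
# Print the address a hostname resolves to, consulting /etc/hosts first.
getent hosts localhost
```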
bin=`dirname "$0"`
bin=`cd "$bin" >/dev/null; pwd`

if [ $# = 0 ]; then
  echo "usage: $(basename $0) <example-name>"
  exit 1;
fi

MVN="mvn"
if [ "$MAVEN_HOME" != "" ]; then
  MVN=${MAVEN_HOME}/bin/mvn
fi

CLASSPATH="${HBASE_CONF_DIR}"

if [ -d "${bin}/../target/classes" ]; then
  CLASSPATH=${CLASSPATH}:${bin}/../target/classes
fi

cpfile="${bin}/../target/cached_classpath.txt"
if [ ! -f "${cpfile}" ]; then
  ${MVN} -f "${bin}/../pom.xml" dependency:build-classpath -Dmdep.outputFile="${cpfile}" &> /dev/null
fi
CLASSPATH=`hbase classpath`:${CLASSPATH}:`cat "${cpfile}"`

JAVA_HOME="/path/to/your/java/home"   # replace with your actual Java home
JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx512m

echo "Classpath is $CLASSPATH"
"$JAVA" $JAVA_HEAP_MAX -classpath "$CLASSPATH" "$@"