
Hadoop: Unable to Open the SecondaryNameNode Web UI Status Page


I have set up a small Hadoop cluster with 3 machines:

  • Machine (Hadoop1) runs both the NameNode and the JobTracker
  • Machine (Hadoop2) runs the SecondaryNameNode
  • Machine (Hadoop3) runs the DataNode and the TaskTracker

When I check the log files, everything looks fine. However, when I try to open localhost:50090 in a browser on the Hadoop2 machine to check the SecondaryNameNode's status, it shows:

    Unable to connect ....can't establish a connection to the server at localhost:50090.
    
Has anyone encountered a problem like this?
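
A quick first check is whether anything is listening on port 50090 at all, and on which interface. Below is a minimal probe sketch; the hostname Hadoop2 is just this cluster's SNN host, so substitute your own:

    import socket

    # Probe both the loopback name and the SNN's own hostname; if only
    # "Hadoop2" answers, the daemon is up but its web server is not
    # bound to the loopback interface.
    for host in ("localhost", "Hadoop2"):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(3)
            try:
                s.connect((host, 50090))
                print(f"{host}:50090 is reachable")
            except OSError as e:
                print(f"{host}:50090 failed: {e}")

If Hadoop2:50090 connects while localhost:50090 is refused, the SecondaryNameNode is running fine and only the binding is the issue.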

Contents of hdfs-site.xml on the SNN:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>

      <property>
        <name>dfs.http.address</name>
        <value>Hadoop1:50070</value>
      </property>

      <property>
        <name>dfs.secondary.http.address</name>
        <value>Hadoop2:50090</value>
      </property>
    </configuration>
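
One plausible cause, given this config: dfs.secondary.http.address is set to Hadoop2:50090, and (at least in Hadoop 1.x) the embedded HTTP server binds only the address that the configured hostname resolves to; the stock default for this property is 0.0.0.0:50090, which binds every interface. So if Hadoop2 resolves to the machine's LAN IP (e.g. via /etc/hosts) rather than 127.0.0.1, the UI answers on Hadoop2:50090 but refuses localhost:50090. A small sketch to see what each configured HTTP address resolves to (the conf path is an assumption; point it at your own $HADOOP_CONF_DIR):

    import socket
    import xml.etree.ElementTree as ET

    # Path is an assumption -- use your $HADOOP_CONF_DIR/hdfs-site.xml.
    CONF = "/usr/local/hadoop/conf/hdfs-site.xml"

    for prop in ET.parse(CONF).getroot().findall("property"):
        name = prop.findtext("name")
        value = prop.findtext("value")
        if name and name.endswith("http.address"):
            host = value.split(":")[0]
            try:
                ip = socket.gethostbyname(host)
            except socket.gaierror as exc:
                ip = f"unresolvable ({exc})"
            # A non-loopback IP for dfs.secondary.http.address means
            # the UI will not answer on localhost:50090.
            print(f"{name} = {value} -> {ip}")

The SNN log from the most recent checkpoint: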
    
    2013-04-23 19:47:00,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2013-04-23 19:47:00,987 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 654 bytes.
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 4 bytes.
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7
    2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
    2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds.
    2013-04-23 19:47:01,001 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2013-04-23 19:47:01,049 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
    2013-04-23 19:47:01,334 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
    2013-04-23 19:47:01,570 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL Hadoop1:50070putimage=1&port=50090&machine=Hadoop3&token=-32:145975115:0:1366717621000:1366714020860
    2013-04-23 19:47:01,771 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 654
    

Try assigning a value to dfs.secondary.http.address in hdfs-site.xml on the SNN. Also, I assume there is no firewall enabled between your machines, correct? It would help if you could show your logs. I have seen users enter an incorrect port number for the SNN that differs from what their logs show, which then causes connection errors.
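
To rule out a port mismatch, one option is to confirm from the SNN's own log which address and port its web server actually bound. The exact wording of that startup line varies across Hadoop versions, so a loose scan is safer; the log path and filename below are assumptions, so adjust them to your installation:

    import re

    # Assumed log location and name -- adjust to your installation.
    LOG = "/usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-Hadoop2.log"

    # Match Jetty/HttpServer startup lines and anything mentioning the
    # expected port; the exact message text differs across versions.
    pattern = re.compile(r"Jetty|HttpServer|50090")

    with open(LOG) as f:
        for line in f:
            if pattern.search(line):
                print(line.rstrip())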

dfs.secondary.http.address is already set in hdfs-site.xml on the SNN; I just posted the contents of that file above. I have also posted the SNN's log, which shows it is running fine and that the checkpoint completed successfully.