
Hadoop 2.9 Multi-Node


I have 3 CentOS 7 servers with the firewall and SELinux disabled: chadoop1 is the master, chadoop2 and chadoop3 are slaves.

When I start the services, jps shows that the nodes did not come up: DataNode and NodeManager are not listed.

All of the configuration is rsynced across the nodes, except the slaves file.

I tried reformatting; it reports OK, but the problem remains.

My Hadoop directory is /opt/hadoop.

Configuration:

hdfs-site.xml

<configuration>
    <property>
            <name>dfs.data.dir</name>
            <value>/opt/hadoop/dfs/name/data</value>
            <final>true</final>
    </property>
    <property>
            <name>dfs.name.dir</name>
            <value>/opt/hadoop/dfs/name</value>
            <final>true</final>
    </property>
    <property>
            <name>dfs.replication</name>
            <value>2</value>
    </property>
</configuration>
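As an aside, dfs.data.dir and dfs.name.dir are the deprecated Hadoop 1.x names for these settings; on 2.9 the current names are dfs.datanode.data.dir and dfs.namenode.name.dir. A sketch of the equivalent hdfs-site.xml follows; the separate /opt/hadoop/dfs/data directory is my assumption, since nesting the data directory inside the name directory as above is unusual:

<configuration>
    <property>
            <!-- current name for dfs.name.dir -->
            <name>dfs.namenode.name.dir</name>
            <value>/opt/hadoop/dfs/name</value>
            <final>true</final>
    </property>
    <property>
            <!-- current name for dfs.data.dir; a separate directory is assumed here -->
            <name>dfs.datanode.data.dir</name>
            <value>/opt/hadoop/dfs/data</value>
            <final>true</final>
    </property>
    <property>
            <name>dfs.replication</name>
            <value>2</value>
    </property>
</configuration>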
Starting the services:

[hadoop@chadoop1 hadoop]$ start-dfs.sh
 Starting namenodes on [localhost]
 localhost: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-chadoop1.out
 chadoop4: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-chadoop4.out
 chadoop3: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-chadoop3.out
 Starting secondary namenodes [0.0.0.0]
 0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-chadoop1.out

 [hadoop@chadoop1 hadoop]$ jps
 5603 Jps
 5492 SecondaryNameNode
 5291 NameNode
 [hadoop@chadoop1 hadoop]$ start-yarn.sh
 starting yarn daemons
 starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-chadoop1.out
 chadoop3: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-chadoop3.out
 chadoop4: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-chadoop4.out
 [hadoop@chadoop1 hadoop]$ jps
 5492 SecondaryNameNode
 5658 ResourceManager
 5914 Jps
 5291 NameNode
"All of the configuration is rsynced across the nodes, except the slaves file"

All of the configuration must be present on all nodes.

That is, the DataNodes need to know where the NameNode is on the network, so if a server is really meant to be a slave, the process cannot sit on localhost; you must enter the actual hostname.

The same is true for the YARN services.
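For example, a minimal core-site.xml and yarn-site.xml that point every node at the master could look like the sketch below; chadoop1 as the master hostname and port 9000 are assumptions based on the setup described, not values from the question:

<!-- core-site.xml: every node, including the DataNodes, finds the NameNode here -->
<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://chadoop1:9000</value>
    </property>
</configuration>

<!-- yarn-site.xml: every NodeManager finds the ResourceManager here -->
<configuration>
    <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>chadoop1</value>
    </property>
</configuration>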

"I see on jps that DataNode and NodeManager are not shown"

From the output shown, it looks like you only started the services on the master machine, not on the two slave machines where those services are supposed to run.

The start scripts only control the one machine they run on, not the whole cluster, and jps only shows the Java processes of the local machine.
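One way to verify this is to run jps on every machine rather than just the master; a sketch, assuming passwordless SSH and that jps is on each remote PATH:

# list the Java daemons on each node; DataNode should appear on the
# slaves and NameNode/ResourceManager on the master
for h in chadoop1 chadoop3 chadoop4; do
    echo "== $h =="
    ssh "$h" jps
done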


Incidentally, Apache Ambari makes installing and managing a Hadoop cluster much easier.

Comments:

"Please add the /opt/hadoop/logs/hadoop-hadoop-datanode-*.out files to the question. Note: never use localhost in the XML. Always use external hostnames or IPs."

"Thanks for the localhost info. End of the hadoop-hadoop-datanode-chadoop1.log file: 2018-10-18 09:03:35,211 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1848867256-127.0.0.1-153978980$ 2018-10-18 11:14:03,411 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM 2018-10-18 11:14:03,787 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1"

"RECEIVED SIGNAL 15: SIGTERM means it was shut down by some external process, not necessarily due to any error. You can check the logs on the other two machines for errors."
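A quick way to act on that suggestion is to pull the tail of each DataNode log back to the master; this sketch assumes the same /opt/hadoop/logs layout and hadoop user on every node:

# show the last lines of the DataNode log on each slave
for h in chadoop3 chadoop4; do
    echo "== $h =="
    ssh "$h" 'tail -n 20 /opt/hadoop/logs/hadoop-hadoop-datanode-*.log'
done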
mapred-site.xml

<configuration>

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>MapReduce framework name</description>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>localhost:10020</value>
  <description>Default port is 10020.</description>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>localhost:19888</value>
  <description>Default port is 19888.</description>
</property>

<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value>
  <description>Directory where history files are written by MapReduce jobs.</description>
</property>

<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>
  <description>Directory where history files are managed by the MR JobHistory Server.</description>
</property>
</configuration>
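Per the advice above about never using localhost in the XML, a corrected version of this file would name the host that actually runs the JobHistory Server; chadoop1 is assumed here:

<property>
  <name>mapreduce.jobhistory.address</name>
  <!-- assumption: the JobHistory Server runs on the master, chadoop1 -->
  <value>chadoop1:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>chadoop1:19888</value>
</property>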
The slaves file:

chadoop3
chadoop4
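Since start-dfs.sh and start-yarn.sh reach the hosts in this file over SSH, it is worth confirming that each entry is reachable non-interactively; a sketch, assuming the file lives at /opt/hadoop/etc/hadoop/slaves:

# fail fast if passwordless SSH to any slave is broken
for h in $(cat /opt/hadoop/etc/hadoop/slaves); do
    ssh -o BatchMode=yes "$h" hostname || echo "cannot reach $h"
done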