Hadoop namenode not starting on masternode?

15/09/10 14:45:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Does not contain a valid host:port authority: :9000

Starting namenodes on []

192.168.0.81: starting namenode, logging to /usr/local/hadoop/logs/hadoop-spark-namenode-progen-System-Product-Name.out

192.168.0.81: starting datanode, logging to /usr/local/hadoop/logs/hadoop-spark-datanode-progen-System-Product-Name.out

Starting secondary namenodes [0.0.0.0]
spark@0.0.0.0's password:

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-spark-secondarynamenode-dell.out

15/09/10 14:45:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
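
The empty brackets in "Starting namenodes on []" and the empty host in "host:port authority: :9000" both point to Hadoop failing to parse a hostname out of the configured filesystem URI. The password prompt for spark@0.0.0.0 is a second symptom: the secondary namenode falls back to its default address 0.0.0.0, for which passwordless SSH is not set up. As a quick check (assuming Hadoop 2.x, where the getconf tool is available), you can print the value the daemons actually resolve:

hdfs getconf -confKey fs.defaultFS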

core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.0.26:9000</value>
</property>

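On Hadoop 2.x, fs.default.name still works but is deprecated in favor of fs.defaultFS; a minimal equivalent with the same address would be:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.0.26:9000</value>
</property>
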
hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

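Both paths must exist on the respective nodes and be writable by the user that starts the daemons, or the namenode format and startup will fail. A minimal sketch, assuming the daemons run as the user spark (as the log file names above suggest):

sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode /usr/local/hadoop_tmp/hdfs/datanode
sudo chown -R spark:spark /usr/local/hadoop_tmp
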
mapred-site.xml

<property>
  <name>mapreduce.job.tracer</name>
  <value>192.168.0.26:9001</value>
</property>

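As an aside, mapreduce.job.tracer does not match any standard Hadoop key and is presumably a typo: the classic Hadoop 1.x property is mapred.job.tracker, while Hadoop 2.x drops the JobTracker entirely and would instead use:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

A misspelled key is silently ignored, so this would not by itself stop the namenode, but it is worth fixing.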

Which Hadoop distribution and vendor are you using?

I am using hadoop-2.6.0. I found the solution and it works for me: simply replace all the IPs with aliases.
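
For reference, a sketch of that fix, with hypothetical aliases master and slave1 standing in for 192.168.0.26 and 192.168.0.81 (the names are illustrative, not from the original post). Map the aliases on every node in /etc/hosts:

192.168.0.26  master
192.168.0.81  slave1

then use the alias instead of the raw IP wherever the address appears, e.g. in core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

After changing the namenode address you may also need to re-run hdfs namenode -format (note: this erases existing HDFS metadata) before start-dfs.sh brings the namenode up.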