Java Hadoop: failed to launch container error
I have just set up a multi-node Hadoop cluster with one namenode machine and two slave nodes. However, whenever I run a MapReduce job, I keep getting the following error: Container launch failed for container_1453020503065_0030_01_000009
:java.lang.IllegalArgumentException:java.net.UnknownHostException: HOME
Here HOME and shubhranshu-OptiPlex-9020 are the hostnames of the slave machines. I have already put their IP addresses and hostnames in the /etc/hosts file.
My /etc/hosts file looks like this:
10.0.3.107 HadoopMaster
10.0.3.108 HadoopSlave1
10.0.3.109 HadoopSlave2
127.0.0.1 localhost amrit
#127.0.1.1 amrit
10.0.3.107 amrit
10.0.3.108 HOME
10.0.3.109 shubhranshu-OptiPlex-9020
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
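One quick way to sanity-check a hosts file like the one above is to look for IP addresses that appear on more than one line: split entries (e.g. 10.0.3.108 listed once as HadoopSlave1 and again as HOME) can make forward and reverse lookups disagree. A minimal sketch using standard awk, run against a scratch copy of the question's IPv4 entries rather than /etc/hosts itself (the filename hosts.sample is illustrative):

```shell
# Reproduce the IPv4 entries from the question's /etc/hosts in a scratch file.
cat > hosts.sample <<'EOF'
10.0.3.107 HadoopMaster
10.0.3.108 HadoopSlave1
10.0.3.109 HadoopSlave2
127.0.0.1 localhost amrit
#127.0.1.1 amrit
10.0.3.107 amrit
10.0.3.108 HOME
10.0.3.109 shubhranshu-OptiPlex-9020
EOF

# Count occurrences of each IP (first field), skipping comments and blank
# lines, and print every IP that appears on more than one line.
awk '!/^#/ && NF {count[$1]++} END {for (ip in count) if (count[ip] > 1) print ip}' hosts.sample | sort
# → 10.0.3.107
# → 10.0.3.108
# → 10.0.3.109
```

All three cluster IPs are flagged here, which matches the split-entry problem the answer below fixes by merging each machine's names onto a single line.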
Please tell me if I need to add anything else. Thanks, everyone! Modify the /etc/hosts file on the master machine as follows:
127.0.0.1 localhost
10.0.3.107 HadoopMaster amrit
10.0.3.108 HadoopSlave1
10.0.3.109 HadoopSlave2
Also modify the /etc/hosts of the 10.0.3.108 machine as follows:
127.0.0.1 localhost
10.0.3.107 HadoopMaster
10.0.3.108 HadoopSlave1 HOME
10.0.3.109 HadoopSlave2
And modify the /etc/hosts of the 10.0.3.109 machine as follows:
127.0.0.1 localhost
10.0.3.107 HadoopMaster
10.0.3.108 HadoopSlave1
10.0.3.109 HadoopSlave2 shubhranshu-OptiPlex-9020
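After applying these edits, a useful follow-up check is that no hostname is bound to more than one IP address, since a name that resolves differently on different lookups will break container launches unpredictably. A sketch in the same spirit as above, run against a scratch copy (the filename hosts.fixed is illustrative; the content is the corrected 10.0.3.109 file):

```shell
# Reproduce the corrected /etc/hosts for the 10.0.3.109 machine.
cat > hosts.fixed <<'EOF'
127.0.0.1 localhost
10.0.3.107 HadoopMaster
10.0.3.108 HadoopSlave1
10.0.3.109 HadoopSlave2 shubhranshu-OptiPlex-9020
EOF

# For every name field (fields 2..NF), remember which IP it maps to;
# print any name that is later bound to a *different* IP.
awk '!/^#/ && NF {for (i = 2; i <= NF; i++) if (seen[$i] && seen[$i] != $1) print $i; else seen[$i] = $1}' hosts.fixed
# (no output: every hostname maps to exactly one IP)
```

Running the same check over the original file from the question would flag amrit, which is listed for both 127.0.0.1 and 10.0.3.107 — the kind of duplicate the comments below ask about.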
Have you synchronized the /etc/hosts file across all hosts of the cluster? Is there a good reason to use duplicate hostnames? The first 3 lines are in sync across all hosts.