Hadoop Namenode not starting on CentOS 7

I have installed Hadoop using the Hortonworks Data Platform. I have three machines running CentOS 7. One of the three runs both the Ambari server and an Ambari client instance; the other two run only the Ambari client.

The installation went smoothly until the NameNode start task, which raised an error. The NameNode runs on the same machine as the Ambari server.

Here is the error log:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 401, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 102, in start
    namenode(action="start", hdfs_binary=hdfs_binary, upgrade_type=upgrade_type, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 146, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-hadoop.out
I set larger soft and hard limits for the hdfs user, but that did not help. I also formatted the NameNode, which did not help either. I then tried reinstalling the server and the clients, but it still does not work.
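For reference, raising the limits for the hdfs user system-wide would look something like the following sketch. The values are illustrative assumptions, not necessarily the ones that were actually tried:

```shell
# Illustrative: raise the hdfs user's file-descriptor and process
# limits via /etc/security/limits.conf (values are examples only).
cat >> /etc/security/limits.conf <<'EOF'
hdfs soft nofile 128000
hdfs hard nofile 128000
hdfs soft nproc  65536
hdfs hard nproc  65536
EOF

# Verify the new limit takes effect in a fresh login shell
# (only meaningful on a box where the hdfs user actually exists):
if id hdfs >/dev/null 2>&1; then
    su - hdfs -c 'ulimit -n'
fi
```

Note that pam_limits only applies these on a new login session, so a running daemon keeps its old limits until restarted.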


Thanks for any suggestions.

After pulling out some hair, I have found a solution, though I don't yet fully understand the cause. It seems to be DNS-related: adding the hostname to the hosts file, instead of relying on the host's DNS, fixed the problem, e.g.

172.16.1.34 hostname.domain hostname

This is odd, because DNS works fine for the host. I am working behind a proxy.
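A generic way to see which answer the host is actually getting (not something from the original post, just a standard check) is to compare what name resolution returns before and after the hosts-file entry:

```shell
# getent consults /etc/nsswitch.conf (normally "hosts: files dns"),
# so once the static line is in /etc/hosts it wins over DNS.
hostname -f                      # the FQDN the NameNode will try to bind
getent hosts "$(hostname -f)"    # the address that name resolves to
getent hosts localhost           # sanity check: should print a loopback address
```

If `getent` shows an unexpected address for the FQDN (for example one injected by the proxy environment), the NameNode can fail to bind even though plain DNS lookups appear to work.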

Are any errors listed in /var/log/hadoop/hdfs/hadoop-hdfs-namenode-hadoop.log?

I'm seeing exactly the same issue - have you found any clues?