Why can't I start the NameNode in my Hadoop 1.2.1 installation?

(tags: hadoop, hdfs, bigdata)

I'm very new to Apache Hadoop, and I'm following a video course on Udemy.

The course is based on Hadoop 1.2.1. Is that too old? Would I be better off starting from another course based on the latest version, or is this one fine?

So, I installed Hadoop 1.2.1 on an Ubuntu 12.04 system and configured it in pseudo-distributed mode.

Following the tutorial, I did this with the settings below in these configuration files:

1) conf/core-site.xml:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
2) conf/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
3) conf/mapred-site.xml:

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
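A side note on what these files do not set: in Hadoop 1.x, storage locations that are not configured explicitly are derived from hadoop.tmp.dir, which defaults to /tmp/hadoop-${user.name} (this becomes relevant further down, where the tutorial quote blames /tmp). If desired, it can be pinned in conf/core-site.xml with a property like the following; the property name is from the Hadoop 1.x defaults, the value shown is purely illustrative:

```xml
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/andrea/hadoop-tmp</value>
</property>
```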
So I connected via SSH into my local system.

Then I went into the Hadoop bin directory, /home/andrea/hadoop/hadoop-1.2.1/bin/, and there I executed this command, which is supposed to perform the format of the name node (what does that mean, exactly?):

hadoop namenode –format

This is the output obtained:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./hadoop namenode –format
16/01/17 12:55:25 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG:   args = [–format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 12:55:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ 
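One detail worth noting in the transcript above: the log line `STARTUP_MSG: args = [–format]` is immediately followed by a Usage message and a shutdown, which is what the Hadoop 1.x NameNode does when it does not recognize an argument. The dash in `–format` as it appears here is an en dash (U+2013), not the ASCII hyphen Hadoop expects — a classic artifact of copying a command from a slide or web page. A quick way to see the difference, plus the command with a plain hyphen (same bin directory assumed):

```shell
# The two dashes look identical at a glance but are different bytes:
printf '%s' '–' | od -An -tx1    # en dash: UTF-8 bytes e2 80 93
printf '%s' '-' | od -An -tx1    # ASCII hyphen: byte 2d

# With the ASCII hyphen the option is recognized:
# ./hadoop namenode -format
```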
Then I tried to start all the nodes by executing this command:

./start–all.sh

Now I get:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ ./start-all.sh 
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ 
Now I tried to open the following URL in the browser:

http://localhost:50070/

but I can't open it (page not found).

While this other one opened correctly, redirecting to this JSP page:

http://localhost:50030/jobtracker.jsp
So, in the shell I ran the jps command, which lists all the Java processes running for the current user:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
6247 Jps
5720 DataNode
5872 SecondaryNameNode
6116 TaskTracker
5965 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ 
As you can see, the NameNode does not seem to have started.
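When a daemon is missing from jps, the first place to look is its .log file: the .out files referenced by start-all.sh only capture stdout, while the actual error and stack trace go to the corresponding .log file in the same directory. A sketch, assuming the log directory shown in the start-all.sh output above:

```shell
LOG_DIR=/home/andrea/hadoop/hadoop-1.2.1/logs    # from the start-all.sh output
# Print the tail of the NameNode log if one exists; the glob may not match
# on another machine, hence the guard.
for f in "$LOG_DIR"/hadoop-*-namenode-*.log; do
    if [ -e "$f" ]; then
        tail -n 50 "$f"
    else
        echo "no namenode log found in $LOG_DIR"
    fi
done
```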

The tutorial I'm following says:

"If NameNode or DataNode is not listed, then it might happen that the namenode's or datanode's root directory, which is set by the property 'dfs.name.dir', is getting messed up. By default it points to the /tmp directory, which the operating system changes from time to time. Thus HDFS, when coming up after the OS has made some changes, gets confused and the namenode doesn't start."
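For context on the default the quote refers to: in Hadoop 1.x, dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, so a reboot or a /tmp cleanup wipes the NameNode's metadata. A quick check of whether anything is (still) there:

```shell
# If dfs.name.dir was left at its default, the NameNode metadata lives under
# /tmp and disappears whenever the OS cleans that directory.
DEFAULT_NAME_DIR="/tmp/hadoop-$(id -un)/dfs/name"
ls -la "$DEFAULT_NAME_DIR" 2>/dev/null \
    || echo "nothing at $DEFAULT_NAME_DIR (cleaned, or never formatted there)"
```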

So, to solve this problem, the tutorial provides this solution (which didn't work for me):

First, stop all the nodes via the stop-all.sh script.

Then I have to explicitly set 'dfs.name.dir' and 'dfs.data.dir'.

So I created a dfs directory in the Hadoop path, and inside it two directories at the same level, data and name (the idea being to use them for the datanode daemon and the namenode daemon respectively).

So now I have something like this:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ tree
.
├── data
└── name
Then I used this configuration for hdfs-site.xml, where I explicitly set the two directories above:

<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/data/</value>
    </property>

    <property>
        <name>dfs.name.dir</name>
        <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
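With the directories set explicitly like this, the format step additionally requires that the user running the daemons can write to them; a missing or root-owned dfs/name directory is a common reason a format leaves nothing behind. A minimal check, using the dfs.name.dir value from the configuration above:

```shell
NAME_DIR=/home/andrea/hadoop/hadoop-1.2.1/dfs/name    # value of dfs.name.dir above
if [ -d "$NAME_DIR" ] && [ -w "$NAME_DIR" ]; then
    echo "ok: $NAME_DIR exists and is writable"
else
    echo "problem: $NAME_DIR is missing or not writable by $(id -un)"
fi
```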
Then, running the NameNode format command again, I got this output:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ hadoop namenode –format
16/01/17 13:14:53 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = andrea-virtual-machine/127.0.1.1
STARTUP_MSG:   args = [–format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
Usage: java NameNode [-format [-force ] [-nonInteractive]] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-recover [ -force ] ]
16/01/17 13:14:53 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at andrea-virtual-machine/127.0.1.1
************************************************************/
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/dfs$ 
So I started all the nodes again via start-all.sh, and this is the output obtained:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ start-all.sh
starting namenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-namenode-andrea-virtual-machine.out
localhost: starting datanode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-datanode-andrea-virtual-machine.out
localhost: starting secondarynamenode, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-secondarynamenode-andrea-virtual-machine.out
starting jobtracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-jobtracker-andrea-virtual-machine.out
localhost: starting tasktracker, logging to /home/andrea/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-andrea-tasktracker-andrea-virtual-machine.out
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ 
Then I ran the jps command to check whether all the nodes had started correctly, but this is what I got:

andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ jps
8041 SecondaryNameNode
8310 TaskTracker
8406 Jps
8139 JobTracker
andrea@andrea-virtual-machine:~/hadoop/hadoop-1.2.1/bin$ 
The situation got worse, because now there are two nodes that don't start: NameNode and DataNode.

What am I missing? How can I try to solve this problem and start all the nodes?


Thanks

Could you try once to turn off iptables, and reformat again after exporting the Java path?

If you have this configured in hdfs-site.xml when you format the name node:

<property>
        <name>dfs.name.dir</name>
        <value>/home/andrea/hadoop/hadoop-1.2.1/dfs/name/</value>
 </property>
then a success message is shown if the name node formats correctly. From your logs I can't see those success messages, so check for a possible permission problem. If it still doesn't start, try this other command:

hadoop-daemon.sh start namenode

Hope it works…
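Regarding the success message this answer mentions: in Hadoop 1.x a successful -format prints a line along the lines of "Storage directory ... has been successfully formatted" to the console (exact wording may vary by version), whereas the transcripts above end at the Usage message. A direct way to verify, independent of the console output, is to check that the name directory now contains the metadata the format creates:

```shell
NAME_DIR=/home/andrea/hadoop/hadoop-1.2.1/dfs/name    # dfs.name.dir from the question
# After a successful format, this directory contains current/VERSION
# plus the fsimage and edits files.
if [ -f "$NAME_DIR/current/VERSION" ]; then
    echo "formatted:" && cat "$NAME_DIR/current/VERSION"
else
    echo "not formatted: $NAME_DIR/current/VERSION is absent"
fi
```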

Is the hadoop version command working???
Add JAVA_HOME in the hadoop-env file.
Can you paste your hosts file here? (vi /etc/hosts)
Can you post the namenode logs?