Python Hadoop: formatting the HDFS filesystem via the NameNode in Ubuntu 12.04


I am following this tutorial.

Note: yes, I know I installed Hadoop into /usr/local/hadoop/hadoop/, but the tutorial does not.

When I run:

hduser@ubuntu:~$ /usr/local/hadoop/hadoop/bin/hadoop namenode -format
I get this

instead of:

hduser@ubuntu:/usr/local/hadoop$ hadoop/bin/hadoop namenode -format
10/05/08 16:59:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10/05/08 16:59:56 INFO namenode.FSNamesystem: fsOwner=hduser,hadoop
10/05/08 16:59:56 INFO namenode.FSNamesystem: supergroup=supergroup
10/05/08 16:59:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/05/08 16:59:56 INFO common.Storage: Image file of size 96 saved in 0 seconds.
10/05/08 16:59:57 INFO common.Storage: Storage directory .../hadoop-hduser/dfs/name has been successfully formatted.
10/05/08 16:59:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
In /usr/local/hadoop/hadoop/bin/hadoop, the code at line 320 is:

JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} -Xmx32m ${HADOOP_JAVA_PLATFORM_OPTS} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
and line 390 is:

    exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "$@"

Any idea how to solve this?

I have the following file: /usr/lib/hadoop-0.20/bin/hadoop-config.sh (Cloudera installation).

In it, I can see that it searches for Java in the following locations:

# attempt to find java
if [ -z "$JAVA_HOME" ]; then
  for candidate in \
    /usr/lib/jvm/java-6-sun \
    /usr/lib/jvm/java-1.6.0-sun-1.6.0.*/jre/ \
    /usr/lib/jvm/java-1.6.0-sun-1.6.0.* \
    /usr/lib/j2sdk1.6-sun \
    /usr/java/jdk1.6* \
    /usr/java/jre1.6* \
    /Library/Java/Home \
    /usr/java/default \
    /usr/lib/jvm/default-java ; do
    if [ -e $candidate/bin/java ]; then
      export JAVA_HOME=$candidate
      break
    fi
  done
Is your JAVA_HOME set correctly? Can you set it manually and then try running it again?
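
For example (a sketch only; the JVM path below is an assumption and should match wherever Java is actually installed on your machine):

# Set JAVA_HOME just for this invocation and retry the format.
# /usr/lib/jvm/java-6-sun is an example path - adjust it to your JVM.
JAVA_HOME=/usr/lib/jvm/java-6-sun /usr/local/hadoop/hadoop/bin/hadoop namenode -format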

[EDIT: based on the comments]

  • To check whether JAVA_HOME is set:
    echo $JAVA_HOME
  • Find out where your JVM lives; it is usually under:
    /usr/lib/jvm/java-6-sun/
  • Then set it. Edit .bashrc and .bash_profile:
    vi ~/.bashrc
    vi ~/.bash_profile
  • Add the following:
    export JAVA_HOME=/usr/lib/jvm/java-6-sun/
  • Note that the path should be based on where the JVM is actually found (see the sketch after this list)
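
Put together, the steps above look roughly like this (a sketch; the JVM path is only an example and must match your machine):

# Check whether JAVA_HOME is already set.
echo $JAVA_HOME

# Persist it for future shells (example path - use your actual JVM location).
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> ~/.bashrc
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> ~/.bash_profile

# Reload the environment in the current shell and verify.
source ~/.bashrc
echo $JAVA_HOME
$JAVA_HOME/bin/java -version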

  [Comments on the answer above]

    I just edited the code in /usr/local/hadoop/hadoop/bin/hadoop manually so that it looks at the correct JDK.
    @user1047260: check the environment variable - echo $JAVA_HOME. If it is not set, you will have to set it.
    @user1047260: added that to my reply.
    @user1047260: and don't edit the hadoop script directly. It's not a good idea to do that.

  • Reset JAVA_HOME to the directory that actually contains Java by adding this line to the hadoop file in hadoop/bin (shown below, after the existing "attempt to find java" block):
    # attempt to find java
    if [ -z "$JAVA_HOME" ]; then
      for candidate in \
        /usr/lib/jvm/java-6-sun \
        /usr/lib/jvm/java-1.6.0-sun-1.6.0.*/jre/ \
        /usr/lib/jvm/java-1.6.0-sun-1.6.0.* \
        /usr/lib/j2sdk1.6-sun \
        /usr/java/jdk1.6* \
        /usr/java/jre1.6* \
        /Library/Java/Home \
        /usr/java/default \
        /usr/lib/jvm/default-java ; do
        if [ -e $candidate/bin/java ]; then
          export JAVA_HOME=$candidate
          break
        fi
      done
    
    export JAVA_HOME=/home/hduser/jdk1.7.0_07
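
A less invasive alternative, in line with the comment above about not editing the hadoop script directly, is to set JAVA_HOME in Hadoop's own environment file, conf/hadoop-env.sh, which the launcher scripts read on startup. A sketch, assuming the install path from the question and an example JVM location:

# Set JAVA_HOME in Hadoop's env file instead of patching bin/hadoop itself.
# Both paths are assumptions - adjust the install prefix and the JVM location to your setup.
echo 'export JAVA_HOME=/usr/lib/jvm/java-6-sun' >> /usr/local/hadoop/hadoop/conf/hadoop-env.sh

# Then retry the format.
/usr/local/hadoop/hadoop/bin/hadoop namenode -format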