Ubuntu: jps gives empty output and none of the Hadoop daemons start with start-all.sh (Hadoop pseudo-distributed mode) on a 32-bit VM running on a 64-bit Windows OS

I am trying to set up Hadoop 2.7.1 with Java OpenJDK 7 on a 32-bit virtual machine running on top of a 64-bit OS. I have configured all the files as mentioned here.

No daemons start, even after running start-dfs.sh or start-all.sh.

Below is the output of the start and jps commands:

hduser@ubuntu:~$ start-dfs.sh
16/04/22 00:33:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-ubuntu.out
16/04/22 00:33:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@ubuntu:~$  jps
12147 Jps
hduser@ubuntu:~$ 
I cannot figure out the cause. As for the warning, as a few other answers have pointed out, it can be ignored or suppressed.
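One common way to suppress that NativeCodeLoader warning (assuming the stock log4j.properties under $HADOOP_CONF_DIR is in use, which the question does not state) is to raise that logger's threshold:

```
# $HADOOP_CONF_DIR/log4j.properties
# Silence the "Unable to load native-hadoop library" warning
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
```

Note this only hides the message; it does not change which library gets loaded.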

I also looked at the contents of the log file mentioned above, which are as follows:

hduser@ubuntu:~$ cat /usr/local/hadoop/logs/hadoop-hduser-namenode-ubuntu.out
OpenJDK Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14869
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
hduser@ubuntu:~$ 
As suggested, I made the changes, but that only suppressed the error in the logs:

hduser@ubuntu:~$ cat /usr/local/hadoop/logs/hadoop-hduser-namenode-ubuntu.out
ulimit -a for user hduser
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14869
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14869
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Check your configuration files. Make sure the contents of the .xml files (especially core-site.xml) match the reference configuration. Quite a few sites carry outdated tutorials that mention "fs.default.name" instead of "fs.defaultFS" in core-site.xml.
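For reference, a minimal core-site.xml using the current property name might look like this (localhost and port 9000 are the usual pseudo-distributed defaults, not values taken from the question):

```xml
<configuration>
  <property>
    <!-- fs.defaultFS replaces the deprecated pre-2.x name fs.default.name -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```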

This issue occurs because:
1. You are using a virtual machine.
2. It is a 32-bit VM on a 64-bit host.
3. The default native libraries in Hadoop are built for 32-bit.
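One way to check point 3 is to compare the VM's architecture with the architecture the bundled native library was built for (the library path is the one from the question's logs; `file` output wording varies by distro):

```shell
# Architecture of the running system: e.g. i686 (32-bit) or x86_64 (64-bit)
uname -m

# Architecture of the native Hadoop library; a mismatch with the output
# above explains the NativeCodeLoader warning
file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 2>/dev/null \
  || echo "library not found at that path"
```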

Here is a solution that looked promising, but it did not work for me.

However, when I configured Hadoop directly on an Ubuntu machine using these steps (), without a virtual machine, it worked fine :)

So if you run into this issue, try running it on a physical Ubuntu machine.
