Java Hadoop single-node setup: memory allocation error during MapReduce


I am trying to install a single-node Hadoop (v2.4.0) on Ubuntu 14.04 LTS (GNU/Linux 3.13.0-29-generic x86_64); all of the configuration is done with Vagrant and Puppet. The Namenode has been formatted and the Hadoop services have been started with start-dfs.sh. The files used to configure Hadoop are [link], and 4 GB of memory are reserved for the virtual machine. When I run the example MapReduce job with the command:

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar grep indput output 'dfs[a-z]+'
I get the following error:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e0d3e000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)

Here is the dump of the MapReduce output:

vagrant@vagrant:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar grep indput output 'dfs[a-z]+'
14/06/28 17:19:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/06/28 17:19:11 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/06/28 17:19:11 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/06/28 17:19:11 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/06/28 17:19:11 INFO input.FileInputFormat: Total input paths to process : 25
14/06/28 17:19:12 INFO mapreduce.JobSubmitter: number of splits:25
14/06/28 17:19:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local41566469_0001
14/06/28 17:19:12 WARN conf.Configuration: file:/usr/local/hadoop/tmp/mapred/staging/vagrant41566469/.staging/job_local41566469_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/06/28 17:19:12 WARN conf.Configuration: file:/usr/local/hadoop/tmp/mapred/staging/vagrant41566469/.staging/job_local41566469_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/06/28 17:19:13 WARN conf.Configuration: file:/usr/local/hadoop/tmp/mapred/local/localRunner/vagrant/job_local41566469_0001/job_local41566469_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/06/28 17:19:13 WARN conf.Configuration: file:/usr/local/hadoop/tmp/mapred/local/localRunner/vagrant/job_local41566469_0001/job_local41566469_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/06/28 17:19:13 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/06/28 17:19:13 INFO mapreduce.Job: Running job: job_local41566469_0001
14/06/28 17:19:13 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/06/28 17:19:13 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/06/28 17:19:13 INFO mapred.LocalJobRunner: Waiting for map tasks
14/06/28 17:19:13 INFO mapred.LocalJobRunner: Starting task: attempt_local41566469_0001_m_000000_0
14/06/28 17:19:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/06/28 17:19:13 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/vagrant/indput/log4j.properties:0+11169
14/06/28 17:19:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e0dad000, 104861696, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 104861696 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/hadoop-2.4.0/hs_err_pid2828.log
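
The allocation that fails (104,861,696 bytes, roughly 100 MiB) almost exactly matches the default map-side sort buffer (`mapreduce.task.io.sort.mb`, 100 MB by default), which the `MapOutputBuffer` mentioned in the line just before the crash allocates. If the guest really is short of memory, one possible workaround, a sketch only, not a fix for the underlying memory shortage, is to shrink that buffer in `mapred-site.xml`:

```xml
<!-- mapred-site.xml: shrink the in-memory map output buffer
     (default 100 MB) so the map task JVM commits less memory.
     The value 50 is illustrative, not a recommendation. -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>50</value>
</property>
```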
How can I solve this? Do I need to set some configuration property?

Thanks, bye

Gianluca
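
errno 12 (ENOMEM) means the kernel refused to commit more memory to the JVM, so before touching any Hadoop settings it is worth confirming how much memory the guest actually has. A quick check, assuming a Linux guest with `/proc/meminfo`:

```shell
#!/bin/sh
# The JVM failed to commit 104861696 bytes (~100 MiB).
# Compare that against what the kernel reports.
need_kb=$((104861696 / 1024))                     # 102404 kB needed
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
echo "needed : ${need_kb} kB"
echo "total  : ${total_kb} kB"
echo "free   : ${free_kb} kB"
# If total is far below the ~4 GB expected from the Vagrant
# configuration, the VM's memory setting is not being applied.
```

If `MemTotal` is nowhere near 4 GB, the problem is in the VM provisioning, not in Hadoop.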

Comments:

What is the size of the data you are using? Can you try to check the memory size the process is using?

I am using the etc/hadoop configuration files as the input sample, so a few megabytes. How do I check the memory size (sorry, I am not a Linux expert ;)?

You should run something like `ps aux | grep hadoop` in each instance to see the actual command Hadoop launches, or use a tool such as `htop`. PS: the link you provided is dead.

Hi, thanks @eliasah for the advice! The problem was a mistake in the Vagrantfile configuration, so the reserved memory for the virtual machine was never set: after fixing it, the MapReduce job completed correctly. Thanks for pointing out the broken link: it should work now.

@John, answers belong in answers, not in the question. If you like, you can post your answer as a community wiki, but please don't add "SOLVED" to the question title.
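
For reference, the Vagrantfile setting the asker identifies as the root cause looks roughly like this for the VirtualBox provider (a sketch; the box name and provider are assumptions):

```ruby
# Vagrantfile fragment: explicitly reserve 4 GB for the guest.
# If this provider block is missing or misspelled, VirtualBox
# falls back to its small default memory size, which reproduces
# the ENOMEM crash seen above.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # assumed box name
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                  # MB reserved for the VM
  end
end
```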