Error running child : java.lang.OutOfMemoryError: Java heap space (Java, Hadoop)


I have read a lot online, but have not found a solution to my problem. I am using Hadoop 2.6.0.

The main goal of the MapReduce job is to run over a SequenceFile and do some analysis on the key/value pairs. The map task fails with the following log:

2015-01-29 10:09:50,554 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2015-01-29 10:09:50,554 INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output
2015-01-29 10:09:50,554 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 23342; bufvoid = 104857600
2015-01-29 10:09:50,554 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213840(104855360); length = 557/6553600
2015-01-29 10:09:50,570 INFO [main] org.apache.hadoop.mapred.MapTask: Finished spill 0
2015-01-29 10:09:50,577 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:66)
    at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2359)
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2491)
    at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:72)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Here is the output on STDOUT:

15/01/29 10:09:35 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
15/01/29 10:09:35 INFO compress.CodecPool: Got brand-new compressor [.gz]

15/01/29 10:09:36 INFO client.RMProxy: Connecting to ResourceManager at xxxxxxxxxxxxxxxxxxxxx:8040
15/01/29 10:09:37 INFO input.FileInputFormat: Total input paths to process : 1
15/01/29 10:09:37 INFO mapreduce.JobSubmitter: number of splits:1
15/01/29 10:09:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1422374835659_0059
15/01/29 10:09:37 INFO impl.YarnClientImpl: Submitted application application_1422374835659_0059
15/01/29 10:09:37 INFO mapreduce.Job: The url to track the job: http://xxxxxxxxxxxxxxxxxxxxx:8088/proxy/application_1422374835659_0059/
15/01/29 10:09:37 INFO mapreduce.Job: Running job: job_1422374835659_0059
15/01/29 10:09:44 INFO mapreduce.Job: Job job_1422374835659_0059 running in uber mode : false
15/01/29 10:09:44 INFO mapreduce.Job:  map 0% reduce 0%
15/01/29 10:09:50 INFO mapreduce.Job: Task Id : attempt_1422374835659_0059_m_000000_0, Status : FAILED
Error: Java heap space
15/01/29 10:09:58 INFO mapreduce.Job: Task Id : attempt_1422374835659_0059_m_000000_1, Status : FAILED
Error: Java heap space
15/01/29 10:10:04 INFO mapreduce.Job: Task Id : attempt_1422374835659_0059_m_000000_2, Status : FAILED
Error: Java heap space
15/01/29 10:10:10 INFO mapreduce.Job:  map 100% reduce 100%
15/01/29 10:10:11 INFO mapreduce.Job: Job job_1422374835659_0059 failed with state FAILED due to: Task failed task_1422374835659_0059_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

15/01/29 10:10:11 INFO mapreduce.Job: Counters: 12
    Job Counters 
        Failed map tasks=4
        Launched map tasks=4
        Other local map tasks=3
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=37910
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=18955
        Total vcore-seconds taken by all map tasks=18955
        Total megabyte-seconds taken by all map tasks=38819840
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
My configuration is almost entirely the defaults; nothing related to the Java heap size has been changed.

I also tried this, with no difference:

<property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
</property>
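For reference: in Hadoop 2.x, mapred.child.java.opts is the deprecated Hadoop 1 name and only acts as a fallback; the per-task heap options are mapreduce.map.java.opts and mapreduce.reduce.java.opts. A sketch of the equivalent under the newer names, keeping the same 1024 MB value:

<property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024m</value>
</property>
<property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1024m</value>
</property>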
The configuration inside my MapReduce application:

conf.setInt("mapreduce.map.memory.mb", 2048);
conf.setInt("mapreduce.reduce.memory.mb", 1024);
Edit 1 (29.01):

With -Xmx2048m I got the same error.

With -Xmx3072m I get the following error:

Error: java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:197)
    at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
    at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2359)
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2491)
    at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:72)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
With -Xmx4096m I get a completely different error, and I do not understand why it is now using 5 GB of virtual memory:

Container [pid=61687,containerID=container_1422374835659_0064_01_000002] is running beyond virtual memory limits. Current usage: 866.8 MB of 2 GB physical memory used; 5.0 GB of 4.2 GB virtual memory used. Killing container.
Dump of the process-tree for container_1422374835659_0064_01_000002 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 61687 61685 61687 61687 (bash) 0 0 12640256 304 /bin/bash -c /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx4096m -Djava.io.tmpdir=/home/hduser/tmp/nm-local-dir/usercache/hduser/appcache/application_1422374835659_0064/container_1422374835659_0064_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop-2.6.0/logs/userlogs/application_1422374835659_0064/container_1422374835659_0064_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.97.83.13 33802 attempt_1422374835659_0064_m_000000_0 2 1>/usr/local/hadoop-2.6.0/logs/userlogs/application_1422374835659_0064/container_1422374835659_0064_01_000002/stdout 2>/usr/local/hadoop-2.6.0/logs/userlogs/application_1422374835659_0064/container_1422374835659_0064_01_000002/stderr  
    |- 61692 61687 61687 61687 (java) 629 149 5384613888 221601 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx4096m -Djava.io.tmpdir=/home/hduser/tmp/nm-local-dir/usercache/hduser/appcache/application_1422374835659_0064/container_1422374835659_0064_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop-2.6.0/logs/userlogs/application_1422374835659_0064/container_1422374835659_0064_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.97.83.13 33802 attempt_1422374835659_0064_m_000000_0 2 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
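The 4.2 GB figure in this message comes from YARN's virtual memory check: by default the NodeManager allows yarn.nodemanager.vmem-pmem-ratio (default 2.1) times the container's physical memory, and 2 GB × 2.1 = 4.2 GB. If physical usage is fine but the JVM's address space trips the check, a common workaround (a sketch for yarn-site.xml, not taken from the question) is to raise the ratio or disable the check:

<property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>3.0</value>
</property>
<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>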
Edit 2 (29.01):

The error occurs even with everything inside the map() function commented out.

The SequenceFile (132.93 KB) contains only 10 key/value pairs, so the input size by itself should be fine.
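Since the OutOfMemoryError is thrown inside SequenceFile$Reader.next(), one way to narrow things down is to read the file directly, outside MapReduce, and print each record; a corrupt record length or a key/value class mismatch would surface here too. A minimal sketch, assuming the file is at /test/test.seq as in the job source below and that the keys and values really are Text:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/test/test.seq");
        try (SequenceFile.Reader reader =
                new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
            // The classes recorded in the file header must match the Mapper's input types.
            System.out.println("key class:   " + reader.getKeyClassName());
            System.out.println("value class: " + reader.getValueClassName());
            Text key = new Text();
            Text value = new Text();
            while (reader.next(key, value)) {
                System.out.println(key + " -> " + value.getLength() + " bytes");
            }
        }
    }
}

From the command line, hadoop fs -text /test/test.seq performs roughly the same check.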

Edit 3 (30.01):

Here is the minimal source that produces the same error:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Dummy implements Tool {

    private Configuration conf;

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int res = ToolRunner.run(conf, new Dummy(), args);
        System.exit(res);
    }

    @Override
    public void setConf(Configuration conf) {
        // Set some Job options
        conf.set("dfs.blocksize", "16m");

        // set heap size
        // conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx1024m");
        // conf.set("mapred.child.java.opts", "-Xmx200m");

        // request more container memory from the ResourceManager
        conf.setInt("mapreduce.map.memory.mb", 2048);
        conf.setInt("mapreduce.reduce.memory.mb", 1024);

        // IO space
        // conf.setInt("mapreduce.task.io.sort.mb", 256);

        // Since we have lots of small tasks we should reduce overhead
        // conf.setInt("mapreduce.job.jvm.numtasks", -1);

        this.conf = conf;
    }

    /**
     * configuration getter
     */
    @Override
    public Configuration getConf() {
        return conf;
    }

    @Override
    public int run(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        // Configure the job
        Job job = Job.getInstance(conf, "Dummy");

        job.setJarByClass(Dummy.class);

        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setMapperClass(Map.class);

        // Set number of Reducers to number of actions + 1 for error log
        // job.setNumReduceTasks(action_count+2);
        job.setReducerClass(Reduce.class); // Global Aggregation

        // Set output
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        // Enable record skipping for failed Maps
        // SkipBadRecords.setMapperMaxSkipRecords(conf, Long.MAX_VALUE);

        // only create an output file if there is content
        // LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);

        // set input and output for job
        // FileInputFormat.addInputPath(job, repo.getRepository());
        FileInputFormat.setInputPaths(job, new Path("/test/test.seq"));
        FileOutputFormat.setOutputPath(job, new Path("/test/out"));

        // Execute Job
        int res = 0;
        // job.submit();
        res = job.waitForCompletion(true) ? 0 : 1;

        return res;

    }

    public static class Map extends Mapper<Text, Text, Text, Text> {
        @Override
        protected void map(Text key, Text value, Mapper<Text, Text, Text, Text>.Context context) {
            // TODO Auto-generated method stub
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {

        @Override
        protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context) {
            // TODO Auto-generated method stub

        }
    }
}
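Because the class runs through ToolRunner, any property that setConf() does not hard-code can also be overridden from the command line without recompiling, which makes it quick to try different heap sizes. A sketch, with dummy.jar as a hypothetical jar name:

hadoop jar dummy.jar Dummy -Dmapreduce.map.java.opts=-Xmx2048m -Dmapreduce.reduce.java.opts=-Xmx1024m

(mapreduce.map.memory.mb would be overwritten again by the conf.setInt call in setConf(), so the java.opts properties are the ones worth passing this way.)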

I ran into the same problem recently. I am using an Oracle VM to study Hadoop; the base memory allocated to the VM was 512 MB, and I got the same error:

java.lang.Exception: java.lang.OutOfMemoryError: Java heap space


I then increased it to 1024 MB, after which I was able to run the MR program successfully.

I would monitor the process with jvisualvm to confirm that the heap is actually the size you think it should be, and if you still get the error, give it more memory. The maximum heap size should be the amount you would rather die than exceed. The way I read it, you are telling Hadoop that it may freely use 2048 MB for the map operation, while telling the JVM that it must not use more than 1024 MB in total. The second-to-last snippet bumps that to 2048 MB, but still leaves no room for anything outside Hadoop's own usage. Can you increase -Xmx to at least 3072 MB?

I tried -Xmx2048m, -Xmx3072m and -Xmx4096m; see the main post. I profiled the MapReduce process with VisualVM and found that the heap maxes out at 380 MB, of which only 150 MB is actually used.

To understand why the job runs out of memory, we would probably need to see your code. Maybe you are creating memory-intensive objects in every map() call that could instead be created once in setup() and reused across calls? This should probably be a comment rather than an answer, but it might answer the question. I have edited it; I will let others decide. :P
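The setup()-reuse pattern suggested above, as a minimal sketch that would drop into the Dummy class (ExpensiveParser is a hypothetical stand-in for whatever heavy object the real mapper might build, not something from the question's code):

public static class Map extends Mapper<Text, Text, Text, Text> {
    private ExpensiveParser parser;   // hypothetical heavyweight helper

    @Override
    protected void setup(Context context) {
        parser = new ExpensiveParser();   // built once per task, not once per record
    }

    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // reuse the helper for every record instead of reallocating it each call
        context.write(key, new Text(parser.parse(value.toString())));
    }
}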