Hadoop 'Filesystem closed' exception when using a thread pool


I'm new to Hadoop and I'm running multiple MapReduce jobs on a 5-node cluster. I started getting "Filesystem closed" exceptions when running more than one thread; the jobs work fine when run one at a time. The error appears after the map phase, right before the reduce. It looks like this:

java.lang.Exception: java.io.IOException: Filesystem closed
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:399)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:552)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:706)
at java.io.DataInputStream.read(Unknown Source)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:209)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:173)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:167)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:526)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
This doesn't happen all the time, and if I re-execute the failed job it runs fine. Unfortunately, this takes up too much time. I'm assuming it has to do with multiple tasks accessing the same input file, and that when one task finishes it closes the input file for all of them. If that is the problem, what I'd like to know is how to override it. I tried overriding cleanup in the mapper to reopen the path, but that seems silly and doesn't work either.

@Override
public void cleanup(Context context) {
    Job tempJob;
    try {
        // Attempt to "reopen" the split's path by adding it as an input
        // path to a throwaway job. This only touches the job's config;
        // it does not reopen the underlying FileSystem handle.
        tempJob = new Job();
        Path fs = ((FileSplit) context.getInputSplit()).getPath();
        FileInputFormat.addInputPath(tempJob, fs);
        System.out.println("Finished map task for " + context.getJobName());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I'm also wondering whether this is a fundamental problem with using a thread pool to execute Hadoop MapReduce jobs. Thanks for any advice.


Edit: I may have been a bit unclear when I referred to jobs and tasks. I'm actually running multiple jobs, each with its own mapper and reducer. Each job generates a column for a particular table I'm creating, say a sum or a count. Each job runs in its own thread, and they all access the same input file. The problem I'm having is that when some of the jobs finish, they throw the "Filesystem closed" exception. I'm also using Thread, if that could make a difference.
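For illustration, the setup described above (several independent jobs submitted from a thread pool, all reading the same input) might look roughly like the sketch below. The paths, column names, and pool size are hypothetical; only the overall shape matters here.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ParallelJobs {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        String[] columns = {"sum", "count"}; // hypothetical: one job per output column
        for (String column : columns) {
            pool.submit(() -> {
                try {
                    Job job = Job.getInstance(new Configuration(), column);
                    // ... set the mapper/reducer for this column here ...
                    // Every job reads the same input file:
                    FileInputFormat.addInputPath(job, new Path("/data/input"));
                    FileOutputFormat.setOutputPath(job, new Path("/data/out-" + column));
                    job.waitForCompletion(true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}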

As a general rule, unless you have a very CPU-intensive job, I wouldn't recommend using multiple threads within the same task: it increases the likelihood of problems in the JVM, and re-running a failed task is much more expensive. You should consider increasing the number of map tasks instead; each task will of course run in a separate JVM, but it is much cleaner.

If you really want to go the multi-threaded way, then I suspect you're using the wrong type of mapper. For a multi-threaded application you should use a MultithreadedMapper, which has a different implementation of the run method and should be thread-safe. You can use it like this:

job.setMapperClass(MultithreadedMapper.class);
And you can specify the number of threads like this:

int numThreads = 42;
MultithreadedMapper.setNumberOfThreads(job, numThreads);

I may have confused the difference between jobs and tasks. I'm actually running multiple jobs, each with its own mapper and reducer. Each job generates a column for a particular table I'm creating, say a sum or a count. Each job has its own thread, and they all access the same input file. The problem I'm having is that when some of the jobs finish, they throw the "Filesystem closed" exception. I don't believe I'm running multiple threads in the same task in this case. I'll try to clarify my earlier post.

Yes, that exception has happened to me too: two jobs accessing the same file at the same time. Consider two different jobs opening the same file in HDFS. One of them finishes its processing and closes the file; if the other then tries to access it, a Filesystem closed exception occurs.
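If that is indeed the cause, it comes down to FileSystem.get() returning a JVM-wide cached instance, so a close() in one job invalidates it for every other job in the same JVM. The thread itself isn't quoting a fix, but two commonly used workarounds are to disable the cache for hdfs:// URIs via the fs.hdfs.impl.disable.cache property, or to request a private instance with FileSystem.newInstance(). A minimal sketch, assuming HDFS and the Hadoop 2.x API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrivateFileSystemDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Workaround 1: disable the JVM-wide cache for hdfs:// URIs so
        // that FileSystem.get() hands out fresh instances that no other
        // job or thread can close out from under this one.
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);
        FileSystem fs = FileSystem.get(conf);
        fs.close();

        // Workaround 2: leave the cache alone, but take a private,
        // uncached instance for this job and close only that.
        FileSystem privateFs = FileSystem.newInstance(new Configuration());
        try {
            System.out.println(privateFs.exists(new Path("/tmp")));
        } finally {
            privateFs.close(); // affects only this private instance
        }
    }
}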