
Java LeaseExpiredException: lease mismatch in Hadoop MapReduce | How to resolve?


I have seen several questions related to this on Stack Overflow, but they did not solve my problem.

When I run the job with a 90 MB input file, I get a LeaseExpiredException:

13/11/12 15:46:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 15:46:42 INFO input.FileInputFormat: Total input paths to process : 1
13/11/12 15:46:43 INFO mapred.JobClient: Running job: job_201310301645_25033
13/11/12 15:46:44 INFO mapred.JobClient:  map 0% reduce 0%
13/11/12 15:46:56 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1561990512_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
    at org.
attempt_201310301645_25033_m_000000_0: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_0: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
13/11/12 15:47:02 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000000_1, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): Lease mismatch on /user/hdfs/in/map owned by DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by DFSClient_NONMAPREDUCE_-1662926329_1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java
attempt_201310301645_25033_m_000000_1: SLF4J: Class path contains multiple SLF4J bindings.
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: Found binding in [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
attempt_201310301645_25033_m_000000_1: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
attempt_201310301645_25033_m_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201310301645_25033_m_000000_1: log4j:WARN Please initialize the log4j system properly.
attempt_201310301645_25033_m_000000_1: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
13/11/12 15:47:10 INFO mapred.JobClient: Task Id : attempt_201310301645_25033_m_000001_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hdfs/in/map: File is not open for writing. Holder DFSClient_NONMAPREDUCE_-1622335545_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
Why does this happen? The first part of my mapper code is:

public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Write the incoming value to a fixed HDFS path (every map() call overwrites it)
    Path inputfile = new Path("in/map");
    BufferedWriter getdatabuffer = new BufferedWriter(new OutputStreamWriter(fs.create(inputfile)));
    getdatabuffer.write(value.toString());
    getdatabuffer.close();

    Path Attribute = new Path("in/Attribute");

    // Read the file back to count rows and columns
    int row = 0;
    BufferedReader read = new BufferedReader(new InputStreamReader(fs.open(inputfile)));
    String str = null;
    while ((str = read.readLine()) != null) {
        row++;                                   // total row count
        StringTokenizer st = new StringTokenizer(str, " ");
        col = st.countTokens();                  // column count (col is a field of the class)
    }
    read.close();
...........
Further computation is based on the "map" file written above.

Why is this happening? I think it fails because in/map is written to multiple times.

I am not deleting any files. How do I get rid of this?

Any suggestions?

Edit: November 15

When I checked, in/map was not created in my cluster. Why was it not created? I think that is why it gets a LeaseExpiredException.

Scenario:

I have a 1 GB input file. Its contents look like this:

file1.txt
0 0 6
3 4 8
5 9 3
12 4 6
8 7 8
9 8 1
6 12 0
10 8 0
8 5 1
14 8 1
I need to compute Atranspose*A, where A[][] is the input data from the file.

So my logic is:

For whatever data reaches a mapper, I compute Atranspose*A, and in the reducer I sum all the Atranspose*A results computed by each mapper. That way I get the Atranspose*A of file1.txt.


To do this, I thought of writing each mapper's data to a file, reading it back into an A[][] array, and computing Atranspose*A from that.
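
The per-mapper partial results add up correctly because Atranspose*A is the sum, over all rows r of A, of the outer product of r with itself; each mapper can therefore contribute the partial sum for its rows. A minimal standalone Java illustration of that identity (AtaIdentityDemo is a hypothetical demo class, not part of the original job):

// Standalone demo: (A^T*A)[j][k] = sum over rows i of A[i][j]*A[i][k],
// so A^T*A can be accumulated row by row.
public class AtaIdentityDemo {
    public static void main(String[] args) {
        double[][] rows = { {0, 0, 6}, {3, 4, 8}, {5, 9, 3} }; // first rows of file1.txt
        int n = rows[0].length;
        double[][] ata = new double[n][n];
        for (double[] r : rows) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    ata[j][k] += r[j] * r[k];    // outer product of the row with itself
                }
            }
        }
        for (double[] row : ata) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}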

It looks like more than one mapper may be trying to write to the same file in HDFS:

Path inputfile = new Path("in/map");
BufferedWriter getdatabuffer = new BufferedWriter(new OutputStreamWriter(fs.create(inputfile)));
getdatabuffer.write(value.toString());
getdatabuffer.close();
If more than one map task executes this code, you will run into exactly the problem you are seeing.
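
If a per-mapper side file is really needed, one workaround is to derive the path from the task attempt ID so that no two tasks ever hold an HDFS lease on the same file. A minimal sketch, assuming fs and value are the same variables as in the original mapper; the in/map-<attemptId> naming scheme is only illustrative:

// Inside map() or setup(); context is the Mapper.Context already available there.
// Each task attempt writes its own file, so leases never collide.
// Requires: import org.apache.hadoop.fs.FSDataOutputStream;
Path taskFile = new Path("in/map-" + context.getTaskAttemptID().toString());
FSDataOutputStream out = fs.create(taskFile, true);  // overwrite a stale attempt's file if present
try {
    out.writeBytes(value.toString());
} finally {
    out.close();
}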


You should not write directly to HDFS from inside your mapper code, and the logic you have overwrites the file for every input value in every mapper. Can you explain what you are trying to achieve?
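
For the stated goal (summing per-mapper Atranspose*A results in the reducer) no side file is needed at all: each mapper can keep its partial product in memory and emit it in cleanup(). A hedged sketch, not the author's code, assuming a known column count of 3 and hypothetical class names AtaMapper/AtaReducer:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Accumulates the partial A^T*A of the rows seen by this mapper entirely in
// memory and emits one (cellIndex, value) pair per matrix cell in cleanup().
public class AtaMapper extends Mapper<LongWritable, Text, LongWritable, DoubleWritable> {
    private static final int COLS = 3;                 // assumed column count of the input
    private final double[][] partial = new double[COLS][COLS];

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Parse one row of A from the input line
        StringTokenizer st = new StringTokenizer(value.toString(), " ");
        double[] row = new double[COLS];
        for (int j = 0; j < COLS && st.hasMoreTokens(); j++) {
            row[j] = Double.parseDouble(st.nextToken());
        }
        // A^T*A is the sum of outer products row * row^T
        for (int j = 0; j < COLS; j++) {
            for (int k = 0; k < COLS; k++) {
                partial[j][k] += row[j] * row[k];
            }
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Emit the partial matrix once per task, keyed by flattened cell index
        for (int j = 0; j < COLS; j++) {
            for (int k = 0; k < COLS; k++) {
                context.write(new LongWritable(j * COLS + k), new DoubleWritable(partial[j][k]));
            }
        }
    }
}

// The reducer simply sums the partial values for each cell index.
class AtaReducer extends Reducer<LongWritable, DoubleWritable, LongWritable, DoubleWritable> {
    @Override
    protected void reduce(LongWritable key, Iterable<DoubleWritable> values, Context context)
            throws IOException, InterruptedException {
        double sum = 0.0;
        for (DoubleWritable v : values) {
            sum += v.get();
        }
        context.write(key, new DoubleWritable(sum));
    }
}

With this layout each mapper touches HDFS only through the framework's own output path, so the lease problem cannot occur.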

Thx for ur reply Chris White. For every input value I am doing the same thing. I will explain:

I am facing the same issue. Were you able to find a solution?

According to Chris, if multiple mappers are involved we cannot write directly to HDFS. Is there any workaround? I have many zip-compressed files, and I am trying to decompress them and write them to HDFS. How can we implement this? Mine is the only job running.

@Chris I am also facing a similar issue, but I cannot understand it from the explanation in the logs. I have also switched the logs to debug mode. Here is the question link - please suggest some workaround; I have spent days on this problem but failed to debug it.