
Java MapReduce job fails with IOException


I am running a single-node Hadoop environment. I have a MapReduce job that computes averages of certain monitoring data over specific time periods, e.g. hourly averages. The job writes its output to a path inside HDFS, and that path is cleared each time before the job runs. It had been working fine for a month. Yesterday, while running the job, I got an exception from the JobClient saying:

File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1

The full stack trace is as follows:



  ..........

14/01/17 12:00:09 INFO mapred.JobClient:  map 100% reduce 32%
14/01/17 12:00:12 INFO mapred.JobClient:  map 100% reduce 74%
14/01/17 12:00:17 INFO mapred.JobClient: Task Id : attempt_201401141113_0007_r_000000_0, Status : FAILED
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/out1/_temporary/_attempt_201401141113_0007_r_000000_0/hi/130-r-00000 could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy2.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy2.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)



Initial Google results for this error talk about storage space problems, but I don't think that's the case here: my entire input data should be less than 600 MB, and there is about 1.5 GB of free space on the node. I ran the hadoop dfsadmin -report command, and it returned the following:



 $hadoop dfsadmin -report
Configured Capacity: 11353194496 (10.57 GB)
Present Capacity: 2354425856 (2.19 GB)
DFS Remaining: 1633726464 (1.52 GB)
DFS Used: 720699392 (687.31 MB)
DFS Used%: 30.61%
Under replicated blocks: 49
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 192.168.1.149:50010
Decommission Status : Normal
Configured Capacity: 11353194496 (10.57 GB)
DFS Used: 720699392 (687.31 MB)
Non DFS Used: 8998768640 (8.38 GB)
DFS Remaining: 1633726464(1.52 GB)
DFS Used%: 6.35%
DFS Remaining%: 14.39%
Last contact: Fri Jan 17 04:36:55 GMT+05:30 2014



Please suggest a solution. It is probably a configuration problem; I don't know much about Hadoop configuration. Please help.

I think your problem may actually be a space issue after all. You have replication configured, so if your input is 600 MB, it needs 1.2 GB on the cluster.

That would leave you only about 300 MB of free space, which may not be enough for moving data between nodes.
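
A back-of-envelope check of that math against your report, plus a look at what else occupies the disk (the data-directory path below is an assumption; point it at your dfs.data.dir):

# assuming a replication factor of 2:
#   600 MB input x 2 replicas     = 1.2 GB needed
#   1.5 GB DFS Remaining - 1.2 GB = ~300 MB headroom
# the report also shows 8.38 GB Non DFS Used, so check what fills the disk:
df -h
du -sh /tmp/hadoop-root/dfs/data   # hypothetical default dfs.data.dir for user root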

My suggestion is to check whether this is the problem by running with a smaller dataset, around 300 MB or less (see the sampling sketch after the snippet below). If that doesn't solve it, try setting the replication to 1 in conf/hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
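
For the smaller-dataset test, a hypothetical way to carve out a sample (the file names and paths are assumptions; adjust to your input):

head -c 300M full_input.log > sample_input.log        # hypothetical local source file
hadoop fs -put sample_input.log /user/root/in_small   # stage the sample in HDFS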


This may not solve your problem, but it does look like you are using too many replicas: if you have only one node, each file should have only one replica. The line "Under replicated blocks: 49" in your report means 49 blocks are under-replicated, and that will stay a problem because there are no additional nodes to replicate them to.
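
Also note that dfs.replication only applies to files written after the change; blocks already in HDFS keep their old replication factor. A sketch for lowering it on existing files, assuming your data lives under /user/root (-R recurses, -w waits for the change to finish):

hadoop fs -setrep -R -w 1 /user/root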