
Java Hadoop MapReduce gives Child Error


I am using Hadoop 1.2.1 on Ubuntu 13.10 and running the Sort example on a 25 GB input file, but the job fails with the following error:

 14/09/29 12:42:47 INFO mapred.JobClient:  map 51% reduce 17%
14/09/29 12:44:08 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000208_0, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000208_0: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f4cfbad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000208_0: #
attempt_201409291048_0003_m_000208_0: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000208_0: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000208_0: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000208_0: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000208_0/work/hs_err_pid11760.log
14/09/29 12:44:10 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000209_0, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000209_0: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f76efad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000209_0: #
attempt_201409291048_0003_m_000209_0: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000209_0: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000209_0: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000209_0: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000209_0/work/hs_err_pid11761.log
14/09/29 12:44:14 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000208_1, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000208_1: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f0977ad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000208_1: #
attempt_201409291048_0003_m_000208_1: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000208_1: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000208_1: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000208_1: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000208_1/work/hs_err_pid11841.log
14/09/29 12:44:14 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000209_1, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000209_1: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f76ebad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000209_1: #
attempt_201409291048_0003_m_000209_1: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000209_1: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000209_1: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000209_1: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000209_1/work/hs_err_pid11857.log
14/09/29 12:44:20 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000208_2, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000208_2: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fdfdfad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000208_2: #
attempt_201409291048_0003_m_000208_2: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000208_2: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000208_2: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000208_2: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000208_2/work/hs_err_pid11922.log
14/09/29 12:44:22 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000209_2, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000209_2: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f67ffad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000209_2: #
attempt_201409291048_0003_m_000209_2: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000209_2: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000209_2: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000209_2: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000209_2/work/hs_err_pid11938.log
14/09/29 12:44:30 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000402_0, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000402_0: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f310fad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000402_0: #
attempt_201409291048_0003_m_000402_0: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000402_0: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000402_0: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000402_0: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000402_0/work/hs_err_pid12083.log
14/09/29 12:44:34 INFO mapred.JobClient: Task Id : attempt_201409291048_0003_m_000402_1, Status : FAILED
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201409291048_0003_m_000402_1: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f356bad0000, 1683161088, 0) failed; error='Cannot allocate memory' (errno=12)
attempt_201409291048_0003_m_000402_1: #
attempt_201409291048_0003_m_000402_1: # There is insufficient memory for the Java Runtime Environment to continue.
attempt_201409291048_0003_m_000402_1: # Native memory allocation (malloc) failed to allocate 1683161088 bytes for committing reserved memory.
attempt_201409291048_0003_m_000402_1: # An error report file with more information is saved as:
attempt_201409291048_0003_m_000402_1: # /tmp/hadoop-hduser/mapred/local/taskTracker/hduser/jobcache/job_201409291048_0003/attempt_201409291048_0003_m_000402_1/work/hs_err_pid12102.log
14/09/29 12:44:40 INFO mapred.JobClient: Job complete: job_201409291048_0003
14/09/29 12:44:43 INFO mapred.JobClient: Counters: 24
14/09/29 12:44:43 INFO mapred.JobClient:   Job Counters 
14/09/29 12:44:43 INFO mapred.JobClient:     Launched reduce tasks=1
14/09/29 12:44:43 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=4441493
14/09/29 12:44:43 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/29 12:44:43 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/29 12:44:43 INFO mapred.JobClient:     Launched map tasks=216
14/09/29 12:44:43 INFO mapred.JobClient:     Data-local map tasks=216
14/09/29 12:44:43 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=2193941
14/09/29 12:44:43 INFO mapred.JobClient:     Failed map tasks=1
14/09/29 12:44:43 INFO mapred.JobClient:   File Input Format Counters 
14/09/29 12:44:43 INFO mapred.JobClient:     Bytes Read=13960068994
14/09/29 12:44:43 INFO mapred.JobClient:   FileSystemCounters
14/09/29 12:44:43 INFO mapred.JobClient:     HDFS_BYTES_READ=13962408717
14/09/29 12:44:43 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=13942458439
14/09/29 12:44:43 INFO mapred.JobClient:   Map-Reduce Framework
14/09/29 12:44:43 INFO mapred.JobClient:     Map output materialized bytes=13930872325
14/09/29 12:44:43 INFO mapred.JobClient:     Map input records=1323773
14/09/29 12:44:43 INFO mapred.JobClient:     Spilled Records=1323773
14/09/29 12:44:43 INFO mapred.JobClient:     Map output bytes=13923429356
14/09/29 12:44:43 INFO mapred.JobClient:     Total committed heap usage (bytes)=47269806080
14/09/29 12:44:43 INFO mapred.JobClient:     CPU time spent (ms)=866620
14/09/29 12:44:43 INFO mapred.JobClient:     Map input bytes=13958643740
14/09/29 12:44:43 INFO mapred.JobClient:     SPLIT_RAW_BYTES=22464
14/09/29 12:44:43 INFO mapred.JobClient:     Combine input records=0
14/09/29 12:44:43 INFO mapred.JobClient:     Combine output records=0
14/09/29 12:44:43 INFO mapred.JobClient:     Physical memory (bytes) snapshot=40872820736
14/09/29 12:44:43 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=11696959963136
14/09/29 12:44:43 INFO mapred.JobClient:     Map output records=1323773
14/09/29 12:44:43 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201409291048_0003_m_000208
java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
    at org.apache.hadoop.examples.Sort.run(Sort.java:176)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.examples.Sort.main(Sort.java:187)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
The Sort example runs fine on a 10 GB input file. I have already tried increasing the JVM parameters and MaxPermSize, but the problem persists.
Any suggestions would be appreciated.
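
Reading the failing number in the log above: the error is a native allocation failure (os::commit_memory, errno=12), so the operating system itself has no memory left to hand the child JVM. A rough back-of-envelope check, assuming purely for illustration that each TaskTracker runs 8 map slots concurrently (the real slot count is not given in the question):

    per-child commit attempt : 1,683,161,088 bytes ≈ 1.57 GiB
    8 assumed map slots      : 8 × 1.57 GiB ≈ 12.5 GiB for map children alone
    plus TaskTracker, DataNode and OS overhead → os::commit_memory fails
    with errno=12 once the node's free RAM is exhausted

Under that reading, a larger -Xmx makes each reservation bigger and therefore tends to make the failure more likely, not less.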

This problem can have several causes; things worth checking include whether you are hitting task or resource limits:

1) Logs cannot be created, because of insufficient space in the log directory or because of a permission problem.
2) A ulimit threshold is preventing enough memory from being allocated.
3) At runtime, the configured memory cannot be allocated to spawn the child JVM.
4) There is a mistake in the child-JVM parameter configuration in mapred-site.xml (see the sketch after this list).
5) Temporary output cannot be written (due to space or permission issues).

Hope this helps.
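
As a minimal sketch for points 3) and 4), assuming a Hadoop 1.x cluster configured through mapred-site.xml: the property names below are the standard Hadoop 1.x names, but the concrete values (a 512 MB child heap, 2 map slots and 1 reduce slot) are illustrative assumptions and should be sized so that slots × heap fits comfortably in each node's physical RAM.

    <!-- mapred-site.xml on each TaskTracker node (TaskTracker restart required).
         Illustrative values: size them to the node's free RAM. -->
    <configuration>
      <property>
        <name>mapred.child.java.opts</name>                   <!-- heap per child JVM -->
        <value>-Xmx512m</value>
      </property>
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>     <!-- concurrent map slots per node -->
        <value>2</value>
      </property>
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>  <!-- concurrent reduce slots per node -->
        <value>1</value>
      </property>
    </configuration>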


Do not put any custom settings in mapred-site.xml. At job submission time, rely on the default MapReduce settings rather than values you set yourself.
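
For reference, a job-level property such as the child heap can also be overridden per submission with the generic -D option instead of editing mapred-site.xml (the Sort example is run through ToolRunner, so it accepts generic options). The jar name assumes the standard Hadoop 1.2.1 examples jar, and the heap value and input/output paths are placeholders, not values from the question:

    hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar sort \
        -D mapred.child.java.opts=-Xmx512m \
        <input-dir> <output-dir>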