
Tuning memory consumption and the number of containers in Hadoop (Java)


I am trying to configure a Hadoop cluster. The default configuration works fine, but resource consumption is low; for example, total CPU usage on the slaves is only slightly above 40%. I want to use the cluster more efficiently, following this page:

docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html

I have a small cluster (1 master and 5 slaves), and each node has the following hardware:

  • 4 cores
  • 8 GB RAM
  • 1 × 500 GB HDD
The OS is Ubuntu 14.04, and the Hadoop version is 2.5.0.

I tried to compute the values and got:

Total available RAM = 8192 − 2024 = 6168 (2 GB reserved for the system)
MIN_CONTAINER_SIZE = 512 (taken from the table in the link above)
Number of containers = 8 → MIN(2 × CORES, Total available RAM / MIN_CONTAINER_SIZE)
RAM per container = 771 → MAX(MIN_CONTAINER_SIZE, Total available RAM / Number of containers)
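Translating those numbers into configuration, the standard Hadoop 2.x properties would look roughly like the sketch below (a hedged example: the property names are the standard ones from the Hortonworks guide, and the values are simply the figures computed above; each fragment goes inside the `<configuration>` element of the respective file):

```xml
<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6168</value> <!-- total RAM available to YARN containers per node -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>771</value> <!-- smallest container YARN will allocate -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>6168</value>
</property>

<!-- mapred-site.xml -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>771</value> <!-- per-map-task container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1542</value> <!-- reducers are typically given 2 × map memory -->
</property>
```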


This configuration, and several variants of it, does not work, and I get errors like this:

14/12/09 17:17:34 INFO mapreduce.Job: Task Id : attempt_1418155570046_0004_m_000060_1, Status : FAILED
Container [pid=5808,containerID=container_1418155570046_0004_01_000081] is running beyond physical memory limits. Current usage: 827.2 MB of 771 MB physical memory used; 1.6 GB of 1.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1418155570046_0004_01_000081 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 5815 5808 5808 5808 (java) 580 27 1706360832 211455 /usr/lib/jvm/default-java/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx1024M -Djava.io.tmpdir=/usr/local/hadoop/hadoop_store/tmp/nm-local-dir/usercache/hduser/appcache/application_1418155570046_0004/container_1418155570046_0004_01_000081/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1418155570046_0004/container_1418155570046_0004_01_000081 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.4.1.103 57356 attempt_1418155570046_0004_m_000060_1 81 
    |- 5808 4341 5808 5808 (bash) 0 0 14008320 303 /bin/bash -c /usr/lib/jvm/default-java/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx1024M -Djava.io.tmpdir=/usr/local/hadoop/hadoop_store/tmp/nm-local-dir/usercache/hduser/appcache/application_1418155570046_0004/container_1418155570046_0004_01_000081/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/usr/local/hadoop/logs/userlogs/application_1418155570046_0004/container_1418155570046_0004_01_000081 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.4.1.103 57356 attempt_1418155570046_0004_m_000060_1 81 1>/usr/local/hadoop/logs/userlogs/application_1418155570046_0004/container_1418155570046_0004_01_000081/stdout 2>/usr/local/hadoop/logs/userlogs/application_1418155570046_0004/container_1418155570046_0004_01_000081/stderr  

Container killed on request. Exit code is 143
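Note that YARN enforces two limits here: a physical memory limit (the 771 MB container size) and a virtual memory limit derived from `yarn.nodemanager.vmem-pmem-ratio` (default 2.1). One commonly suggested workaround for spurious kills, which does not address heap sizing itself, is to relax these checks (a sketch; whether this is appropriate depends on the workload):

```xml
<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- disable the virtual-memory check -->
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value> <!-- or raise the allowed vmem:pmem ratio instead -->
</property>
```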
Sometimes I get Java heap space errors instead.

The same application does not fail with the default configuration.

What is going wrong?


Thanks for your help.

I don't understand: if 771 MB is not enough memory for your containers, why not increase it? If memory is your bottleneck, optimize your memory usage or buy more RAM. @ThomasJungblut, thanks for your comment. If I try to change the
mapreduce.map.memory.mb
value, for example to 1024 or 2048, I get other errors (Java heap errors). My question is what the more appropriate values for the configuration parameters would be, given the available hardware. Sorry for my unclear explanation.
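The error log gives a hint about why raising `mapreduce.map.memory.mb` alone still fails: the JVM in the dump was launched with `-Xmx1024M`, which already exceeds the 771 MB container limit. The JVM heap (`mapreduce.map.java.opts`) must be kept below the container size, conventionally around 80% of it, so the two settings have to be changed together. A sketch under that assumption, using 1024 MB map containers:

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value> <!-- container limit enforced by YARN -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value> <!-- JVM heap, ~80% of the container size -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
```

The gap between the container size and the heap leaves room for the JVM's own off-heap overhead (thread stacks, metaspace, native buffers), which is what was pushing the process past the 771 MB physical limit in the log above.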