Hadoop YARN: maximum parallel map task count
Hadoop: The Definitive Guide mentions the following:
"What qualifies as a small job? By default one that has less than 10 mappers, only one reducer, and the input size is less than the size of one HDFS block. "
But how is the number of mappers in a job calculated before it executes on YARN?
In MR1, the number of mappers depends on the number of input splits. Is the same true on YARN?
On YARN, containers are flexible. So is there a way to calculate the maximum number of map tasks that can run in parallel on a given cluster (some kind of hard upper bound, since it would give me a rough idea of how much data I could process in parallel)?
"But how is the number of mappers in a job calculated before it executes on YARN? In MR1, the number of mappers depends on the number of input splits. Is the same true on YARN?"
Yes. If you are using a MapReduce-based framework, then on YARN too the number of mappers depends on the input splits.
"On YARN, containers are flexible. So is there a way to calculate the maximum number of map tasks that can run in parallel on a given cluster (some kind of hard upper bound, since it would give me a rough idea of how much data I could process in parallel)?"
The number of map tasks that can run in parallel on a YARN cluster depends on the number of containers that can be launched and run in parallel on the cluster. This ultimately depends on how you configure MapReduce on the cluster, which is explained clearly in this guide.
For most workloads, the workload factor can be set to 2.0. Consider a higher setting for CPU-bound workloads.
yarn.nodemanager.resource.memory-mb (memory available on a node for containers) = total system memory – reserved memory (e.g. 10–20% of memory for Linux and its daemon services) – (resources for task buffers, such as the HDFS Sort I/O buffer) – (memory allocated to the DataNode (default 1024 MB), NodeManager, RegionServer, etc.)
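As a rough sketch of that formula, here is the calculation for a hypothetical 64 GB worker node. All of the figures below (node size, reservation percentage, daemon heaps, buffer sizes) are illustrative assumptions, not values from the guide:

```python
# Rough estimate of yarn.nodemanager.resource.memory-mb for one worker node.
# Every figure here is an illustrative assumption, not a recommended value.

total_system_mb = 64 * 1024                    # 64 GB of physical RAM (assumed)
reserved_os_mb = int(total_system_mb * 0.15)   # ~15% for Linux and its daemon services
task_buffers_mb = 2048                         # task buffers, e.g. HDFS Sort I/O buffer (assumed)
datanode_mb = 1024                             # HDFS DataNode heap (default 1024 MB)
nodemanager_mb = 1024                          # YARN NodeManager heap (assumed 1024 MB)

yarn_nm_resource_memory_mb = (
    total_system_mb - reserved_os_mb - task_buffers_mb - datanode_mb - nodemanager_mb
)
print(yarn_nm_resource_memory_mb)
```

With these assumptions roughly 50 GB per node is left for containers; the exact number depends entirely on your hardware and which daemons share the node.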
Hadoop is, by design, a disk-I/O-centric platform. The number of independent physical drives ("disk spindles") dedicated to the DataNode limits how much concurrent processing a node can support. As a result, the number of vCores allocated to the NodeManager should be the lesser of:
[(total vcores) – (number of vcores reserved for non-YARN use)] or [ 2 x (number of physical disks used for DataNode storage)]
So, note ==> mapreduce.map.memory.mb is the combination of mapreduce.map.java.opts.max.heap plus some headroom (a safety margin).
The mapreduce.[map | reduce].java.opts.max.heap
settings specify the default memory allocated for the mapper and reducer heap size, respectively.
The mapreduce.[map | reduce].memory.mb
settings specify the memory allocated to their containers, and the value should allow for overhead beyond the task heap size. Cloudera recommends applying a factor of 1.2 to the mapreduce.[map | reduce].java.opts.max.heap
setting; the optimal value depends on the actual tasks. Cloudera also recommends setting mapreduce.map.memory.mb to 1–2 GB and setting mapreduce.reduce.memory.mb to twice the mapper value. The ApplicationMaster heap size is 1 GB by default and can be increased if a job contains many concurrent tasks.
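As a sketch of that 1.2 overhead factor, assuming a 1536 MB mapper container (an arbitrary value inside the recommended 1–2 GB range):

```python
# Container memory should cover the task heap plus overhead:
# mapreduce.map.memory.mb ≈ 1.2 x mapreduce.map.java.opts.max.heap,
# so heap ≈ container / 1.2. The 1536 MB container size is an assumption.

mapreduce_map_memory_mb = 1536                    # mapper container (assumed, within 1-2 GB)
map_heap_mb = int(mapreduce_map_memory_mb / 1.2)  # mapreduce.map.java.opts.max.heap
reduce_memory_mb = 2 * mapreduce_map_memory_mb    # reducer container = twice the mapper value

print(map_heap_mb, reduce_memory_mb)
```

This yields a 1280 MB mapper heap inside a 1536 MB container and a 3072 MB reducer container; per the guide, the optimal factor depends on the actual tasks.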
Reference –
The heap size is specified by the mapreduce.[map | reduce].java.opts.max.heap
property. So why don't we add mapreduce.map.java.opts.max.heap to mapreduce.map.memory.mb in the map-task calculation formula above? Because mapreduce.map.memory.mb
is already the combination of mapreduce.map.java.opts.max.heap
plus some headroom (a safety margin).
yarn.nodemanager.resource.cpu-vcores = min{ ((total vcores) – (number of vcores reserved for non-YARN use)), (2 x (number of physical disks used for DataNode storage))}
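A minimal numeric sketch of that min{…} rule; the vcore and disk counts below are assumptions:

```python
# yarn.nodemanager.resource.cpu-vcores =
#   min(non-reserved vcores, 2 x disks used for DataNode storage)
# Illustrative assumptions: a 16-vcore node, 2 vcores reserved, 6 data disks.

total_vcores = 16
reserved_vcores = 2     # reserved for non-YARN use
datanode_disks = 6      # physical disks used for DataNode storage

cpu_vcores = min(total_vcores - reserved_vcores, 2 * datanode_disks)
print(cpu_vcores)
```

Here the disk-spindle term (12) is the smaller one, so I/O, not CPU, caps the node's concurrency.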
Available vcores on a node for containers = total no. of vcores – vcores for the operating system (as an initial guide when estimating vcore demand, consider the number of concurrent processes or tasks each service runs; for the OS we take 2) – YARN NodeManager (default 1) – HDFS DataNode (default 1).
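Putting the memory and vcore limits together gives the hard upper bound the question asks about: per-node containers are capped by whichever of the two resources runs out first, then multiplied by the node count. All inputs below are illustrative assumptions:

```python
# Maximum parallel map tasks ≈ (containers per node) x (number of nodes),
# where containers per node are capped by both memory and vcores.
# All inputs are illustrative assumptions.

nodes = 10
node_container_memory_mb = 51200   # yarn.nodemanager.resource.memory-mb (assumed)
node_container_vcores = 12         # yarn.nodemanager.resource.cpu-vcores (assumed)
map_memory_mb = 1536               # mapreduce.map.memory.mb (assumed)
map_vcores = 1                     # mapreduce.map.cpu.vcores (default is 1)

per_node = min(node_container_memory_mb // map_memory_mb,   # memory-limited count
               node_container_vcores // map_vcores)         # vcore-limited count
max_parallel_maps = per_node * nodes
print(max_parallel_maps)
```

With these numbers the vcore limit (12 containers) binds before the memory limit (33 containers), giving roughly 120 concurrent map tasks cluster-wide. The actual count of mappers launched is still driven by the number of input splits; this bound only says how many can run at once.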