Hadoop YARN job stuck at map 0% and reduce 0%


I am trying to run a very simple job to test my Hadoop setup. I tried the WordCount example, which got stuck at 0%, so I tried a few other simple jobs, and every one of them got stuck the same way:

52191_0003/
14/07/14 23:55:51 INFO mapreduce.Job: Running job: job_1405376352191_0003
14/07/14 23:55:57 INFO mapreduce.Job: Job job_1405376352191_0003 running in uber mode : false
14/07/14 23:55:57 INFO mapreduce.Job:  map 0% reduce 0%
The Hadoop version I am using is hadoop 2.3.0-cdh5.0.2.

A quick search on Google suggested increasing the following two properties:

yarn.scheduler.minimum-allocation-mb
yarn.nodemanager.resource.memory-mb
I have a single-node cluster running on my MacBook, which has a dual-core CPU and 8 GB of RAM.

My yarn-site.xml file:

<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager.company.com</value>
  </property>   
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
        $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
    </value>
  </property>

  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///data/1/yarn/local,file:///data/2/yarn/local,file:///data/3/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///data/1/yarn/logs,file:///data/2/yarn/logs,file:///data/3/yarn/logs</value>
  </property>
  <property>
    <name>yarn.log.aggregation.enable</name>
    <value>true</value>
  </property>
  <property>
    <description>Where to aggregate logs</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>hdfs://var/log/hadoop-yarn/apps</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run </description>
  </property>
   <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>8092</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.command-opts</name>
        <value>-Xmx768m</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Execution framework.</description>
    </property>
    <property>
        <name>mapreduce.map.cpu.vcores</name>
        <value>4</value>
        <description>The number of virtual cores required for each map task.</description>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>8092</value>
        <description>Larger resource limit for maps.</description>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx768m</value>
        <description>Heap-size for child jvms of maps.</description>
    </property>
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>jobtracker.alexjf.net:8021</value>
    </property>

 <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8092</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>2</value>
    <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>10</value>
    <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
    <description>Physical memory, in MB, to be made available to running containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
    <description>Number of CPU cores that can be allocated for containers.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run </description>
  </property>
   <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

</configuration>
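
One thing stands out in the configuration above: mapreduce.map.memory.mb and yarn.app.mapreduce.am.resource.mb each request 8092 MB per container, while yarn.nodemanager.resource.memory-mb caps the whole node at 2048 MB, so on a single-node cluster the scheduler can never place a map container (the job log below shows mapResourceReqt:8092 and resourcelimit=<memory:0, vCores:0>). As a rough sketch, mutually consistent values for an 8 GB dual-core machine might look like the following; the exact numbers are illustrative assumptions, not values from the original setup:

  <!-- Sketch only: illustrative values for an 8 GB, dual-core machine.
       Every container request (maps, reduces, the AM) must fit within
       yarn.nodemanager.resource.memory-mb, or it can never be scheduled. -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value> <!-- total memory YARN may allocate on this node -->
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value> <!-- no larger than the node's own capacity -->
  </property>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value> <!-- the ApplicationMaster container must fit too -->
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value> <!-- per-map request, well under the 4096 MB node cap -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value> <!-- JVM heap kept below the container size -->
  </property>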

My mapred-site.xml:

  <property>    
    <name>mapreduce.framework.name</name>    
    <value>yarn</value>  
  </property>

It has only this one property. I have tried several permutations and combinations of the settings, but I could not get rid of the error.
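
For what it's worth, the mapreduce.* tuning properties shown in yarn-site.xml above are conventionally kept in mapred-site.xml. A hypothetical version of this file with explicit, modest memory settings (the values are assumptions for illustration, sized for a small single node) could look like:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Illustrative assumptions: per-task containers sized to fit a small node -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
  </property>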

Job logs:

2014-07-14 23:55:55,694 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2014-07-14 23:55:55,697 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
2014-07-14 23:55:55,699 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
2014-07-14 23:55:55,769 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 8092
2014-07-14 23:55:55,769 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.abhishekchoudhary
2014-07-14 23:55:55,775 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2014-07-14 23:55:55,777 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-nodemanagers-proxies : 500
2014-07-14 23:55:55,787 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1405376352191_0003Job Transitioned from INITED to SETUP
2014-07-14 23:55:55,789 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2014-07-14 23:55:55,800 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1405376352191_0003Job Transitioned from SETUP to RUNNING
2014-07-14 23:55:55,823 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000000 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,824 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000001 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,824 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000002 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,825 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1405376352191_0003_m_000003 Task Transitioned from NEW to SCHEDULED
2014-07-14 23:55:55,826 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,827 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1405376352191_0003_m_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2014-07-14 23:55:55,828 INFO [Thread-49] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceReqt:8092
2014-07-14 23:55:55,858 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1405376352191_0003, File: hdfs://localhost/tmp/hadoop-yarn/staging/abhishekchoudhary/.staging/job_1405376352191_0003/job_1405376352191_0003_1.jhist
2014-07-14 23:55:56,773 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2014-07-14 23:55:56,799 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1405376352191_0003: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:0, vCores:0> knownNMs=1
In the job log above, the ApplicationMaster connects to the ResourceManager at /0.0.0.0:8030 and then receives resourcelimit=<memory:0, vCores:0>, which suggests the ResourceManager addresses are not being picked up from the configuration. One suggested fix is to set them explicitly in yarn-site.xml (replace MASTER ADDRESS with the actual ResourceManager host):
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>MASTER ADDRESS</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8025</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8040</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>${yarn.resourcemanager.hostname}:8033</value>
</property>
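
If these settings are picked up (and assuming the NodeManager can reach the ResourceManager), the ApplicationMaster log should then report a connection to the real ResourceManager host on port 8030 rather than /0.0.0.0:8030.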