
Hadoop 2.3.0 wordcount runs forever


I am trying to test my Hadoop installation by running the wordcount job. My problem is that the job gets stuck in the ACCEPTED state and seems to run forever. I am using Hadoop 2.3.0 and tried to follow the answer to this question to fix it, but it did not work for me.

This is what I have:

C:\hadoop-2.3.0>yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar wordcount /data/test.txt /data/output
15/03/15 15:36:07 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/03/15 15:36:09 INFO input.FileInputFormat: Total input paths to process : 1
15/03/15 15:36:10 INFO mapreduce.JobSubmitter: number of splits:1
15/03/15 15:36:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426430101974_0001
15/03/15 15:36:11 INFO impl.YarnClientImpl: Submitted application application_1426430101974_0001
15/03/15 15:36:11 INFO mapreduce.Job: The url to track the job: http://Agata-PC:8088/proxy/application_1426430101974_0001/
15/03/15 15:36:11 INFO mapreduce.Job: Running job: job_1426430101974_0001
This is my mapred-site.xml:

<configuration>
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
 <property>
    <name>mapred.job.tracker</name>
    <value>127.0.0.1:9001</value>
</property>
   <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
</property>
<property>
    <name>mapreduce.history.server.http.address</name>
    <value>127.0.0.1:51111</value>
    <description>Http address of the history server</description>
    <final>false</final>
</property>
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
</property>
<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx768m</value>
</property>
<property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each map task.</description>
</property>
<property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each reduce task.</description>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of maps.</description>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for reduces.</description>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of reduces.</description>
</property>
</configuration>
And this is my yarn-site.xml:

<configuration>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
        <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
        <description>Physical memory, in MB, to be made available to running containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
        <description>Number of CPU cores that can be allocated for containers.</description>
    </property>
</configuration>

Any help is greatly appreciated.

Have you tried restarting the Hadoop processes or the cluster? There may still be some jobs in progress.
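If a leftover application really is holding the cluster, you can kill it by id. Here is a minimal sketch using the YarnClient API, assuming the Hadoop 2.x YARN client libraries and your yarn-site.xml are on the classpath; the KillStuckApp class name and the argument handling are just for illustration:

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.ConverterUtils;

// Kills a leftover YARN application by id so it stops holding cluster
// resources. Pass the id printed at submission time,
// e.g. application_1426430101974_0001.
public class KillStuckApp {
    public static void main(String[] args) throws Exception {
        YarnClient client = YarnClient.createYarnClient();
        client.init(new YarnConfiguration()); // reads yarn-site.xml from the classpath
        client.start();
        try {
            ApplicationId appId = ConverterUtils.toApplicationId(args[0]);
            client.killApplication(appId);
            System.out.println("Kill request sent for " + appId);
        } finally {
            client.stop();
        }
    }
}

From the shell, yarn application -kill application_1426430101974_0001 does the same thing.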

Looking at the logs, either through the job's tracking URL or through the Hadoop web UI, may also help.
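If the tracking URL is unreachable, the same state can be read straight from the ResourceManager. A minimal sketch with the YarnClient API (the ListYarnApps class name is just for illustration, and it assumes yarn-site.xml is on the classpath):

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Lists every application the ResourceManager knows about, with its
// state and tracking URL; a job stuck in ACCEPTED is easy to spot here.
public class ListYarnApps {
    public static void main(String[] args) throws Exception {
        YarnClient client = YarnClient.createYarnClient();
        client.init(new YarnConfiguration());
        client.start();
        try {
            for (ApplicationReport app : client.getApplications()) {
                System.out.printf("%s  %s  state=%s  final=%s  tracking=%s%n",
                        app.getApplicationId(), app.getName(),
                        app.getYarnApplicationState(),
                        app.getFinalApplicationStatus(),
                        app.getTrackingUrl());
            }
        } finally {
            client.stop();
        }
    }
}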


Cheers.

I ran into a similar problem before; you may have an infinite loop in your mapper or reducer. Check that your reducer handles the iterable correctly.
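For comparison, here is a minimal sketch of a WordCount-style reducer that consumes the Iterable safely; the key point is that the for-each loop advances the underlying iterator exactly once per value, so it cannot spin forever:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts for each word. The for-each loop is the safe way to
// walk the values Iterable; calling next() without checking hasNext(),
// or re-iterating inside a while(true), are the usual sources of
// "infinite" reducers.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}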

Yes, I tried running the job several times, but it stays stuck there.

Could you show me the logs of the job you are running, and include the output of jps just in case?

This is what I see in the overview: User: Agata  Name: word count  Application Type: MAPREDUCE  State: ACCEPTED  FinalStatus: UNDEFINED  Started: 15-Mar-2015 20:23:31  Elapsed: 14mins, 33sec  Tracking URL: UNASSIGNED  Diagnostics:

On the left side of that page, check out Tools, then Local logs, /userlogs, and find the directory that matches your job's name. If possible, copy the contents from each container in the job logs. Also, run the jps command in a terminal and copy the result.

In the local logs there is nothing except an authentication link that leads to a blank page :(

Possible duplicate of

I am facing the same problem, somebody please help!!!