
Hadoop Java exception: java.net.ConnectException


I have installed Hadoop-2.6 (distributed mode) on four machines. All the daemons are running fine. But when I run the standard teragen example -

hadoop jar hadoop-mapreduce-examples-2.6.0.jar teragen  10  /input
it gives me the following error -

hadoop jar /root/exp_testing/hadoop_new/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teragen  10  /input
15/04/28 05:45:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/28 05:45:51 INFO client.RMProxy: Connecting to ResourceManager at enode1/192.168.1.231:8050
15/04/28 05:45:53 INFO terasort.TeraSort: Generating 10 using 2
15/04/28 05:45:53 INFO mapreduce.JobSubmitter: number of splits:2
15/04/28 05:45:54 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1430180067597_0001
15/04/28 05:45:54 INFO impl.YarnClientImpl: Submitted application application_1430180067597_0001
15/04/28 05:45:54 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1430180067597_0001/
15/04/28 05:45:54 INFO mapreduce.Job: Running job: job_1430180067597_0001
15/04/28 05:46:15 INFO mapreduce.Job: Job job_1430180067597_0001 running in uber mode : false
15/04/28 05:46:15 INFO mapreduce.Job:  map 0% reduce 0%
15/04/28 05:46:15 INFO mapreduce.Job: Job job_1430180067597_0001 failed with state FAILED due to: Application application_1430180067597_0001 failed 2 times due to Error launching appattempt_1430180067597_0001_000002. Got exception: java.net.ConnectException: Call From ubuntu/127.0.1.1 to ubuntu:60839 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy79.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 9 more
. Failing the application.
15/04/28 05:46:15 INFO mapreduce.Job: Counters: 0
I have two sets of machines (each set contains 4 nodes). The same setup works on the other set of machines, but I don't know why it causes a problem on this one.

/etc/hosts

127.0.0.1       localhost
#127.0.1.1      ubuntu
127.0.0.1       ubuntu
#192.168.1.231  ubuntu



192.168.1.231    enode1
192.168.1.232    enode2
192.168.1.233    enode3
192.168.1.234    enode4


# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
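As a quick sanity check (a minimal sketch; it assumes getent is available, as on most Linux systems), you can verify what each name actually resolves to on a node:

getent hosts ubuntu     # with the file above: 127.0.0.1       ubuntu
getent hosts enode1     # 192.168.1.231   enode1
hostname                # the hostname Hadoop itself will pick up
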
core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://enode1:9000/</value>
</property>

</configuration>
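
To confirm that every node has loaded the same NameNode URI, you can query the effective configuration; hdfs getconf ships with Hadoop 2.x:

hdfs getconf -confKey fs.defaultFS    # expected output: hdfs://enode1:9000/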
The solution from @sandeep007734 is working for my new cluster set, and I believe in his solution, but for the old cluster set I just commented out the following line in /etc/hosts, and it works fine:

#127.0.1.1 ubuntu

I don't know why this works.

Got exception: java.net.ConnectException: Call From ubuntu/127.0.1.1 to ubuntu:60839 failed on connection exception: java.net.ConnectException: Connection refused


This error mostly occurs due to a loopback issue. To correct it, change 127.0.1.1 to 127.0.0.1 in the /etc/hosts file. Now restart the Hadoop processes and try to run the example again. It should work.
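
A minimal sketch of that fix (it assumes the offending entry starts with 127.0.1.1, as in the /etc/hosts shown above, and that the Hadoop start/stop scripts are on the PATH):

sudo cp /etc/hosts /etc/hosts.bak                    # back up first
sudo sed -i 's/^127\.0\.1\.1/127.0.0.1/' /etc/hosts  # loopback fix
stop-yarn.sh && stop-dfs.sh                          # restart Hadoop
start-dfs.sh && start-yarn.sh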

Try removing these lines from /etc/hosts, and disable IPv6 if you are not using it:

127.0.0.1       localhost
127.0.0.1       ubuntu
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
One problem with IPv6 is that using 0.0.0.0 for the various network-related Hadoop configuration options will result in Hadoop binding to IPv6 addresses.

So if you are not using IPv6, it is better to disable it, since it can cause problems when running Hadoop.

To disable IPv6, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
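
These settings take effect on the next reboot; to apply them immediately, reload sysctl and verify (a value of 1 means IPv6 is disabled):

sudo sysctl -p                                 # reload /etc/sysctl.conf
cat /proc/sys/net/ipv6/conf/all/disable_ipv6   # should print 1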

Hope this helps.

The problem is with the hostname configuration. If you are using custom hostnames (defined only in the /etc/hosts file, not in DNS), Hadoop can sometimes behave in strange ways.

You are using the names enode1, enode2, etc. for your nodes.

But in the error you posted, it shows:

15/04/28 05:45:54 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1430180067597_0001/
This says that the URL to track the job uses the hostname ubuntu, which means Hadoop is picking up the system's hostname for its operations.

Now one obvious solution is to go into the /etc/hosts file and add an entry (on every node, including the master). For example, on enode1:

192.168.1.231 ubuntu 
This will work fine when you format the namenode and start the cluster.

However, if you try to run a job, you will run into trouble, because the slaves will try to contact the ResourceManager at the address

ubuntu/192.168.1.231
This notation means: if the hostname ubuntu cannot be resolved, use the IP. But every slave can resolve ubuntu, and it maps to the slave's own IP.

For example, when a slave running on the enode2 machine tries to connect to the ResourceManager, it uses ubuntu/192.168.1.231; the hostname ubuntu resolves to 192.168.1.232, because that is what you just defined in its /etc/hosts file.
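
You can observe this divergence directly by resolving the name on two different nodes (a sketch; the outputs follow from the per-node /etc/hosts entries described above):

getent hosts ubuntu   # on enode1: 192.168.1.231   ubuntu
getent hosts ubuntu   # on enode2: 192.168.1.232   ubuntu  (its own IP, not the ResourceManager's)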

During job execution you should be able to see this error in the logs:

org.apache.hadoop.ipc.Client: Retrying connect to server
It keeps trying to connect to the ResourceManager for a long time, which is why the teragen job takes so long to execute: the map tasks scheduled on the slaves keep retrying the connection and eventually fail. Only the map tasks scheduled on the master node succeed (you are using the master as a slave too, and only on the master does ubuntu resolve to the correct ResourceManager IP).

The solution to this problem is:

  • Stop the Hadoop cluster

  • Edit the file /etc/hostname on every machine. For example, on machine enode1, change it

    From: ubuntu

    To: enode1

    (and to enode2, enode3 on the corresponding machines)

  • Remove the ubuntu entry from the /etc/hosts file
  • Reboot
  • Make sure the hostname actually changed, using the command:

    hostname

  • Format the namenode

  • Start the cluster and run teragen again. It should run fine. (A consolidated sketch of these steps follows.)
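
Put together, the steps above look roughly like this (a sketch for enode1; substitute enode2, enode3, ... on the other machines):

stop-yarn.sh && stop-dfs.sh            # 1. stop the cluster
echo enode1 | sudo tee /etc/hostname   # 2. set the permanent hostname
sudo sed -i '/ubuntu/d' /etc/hosts     # 3. remove the ubuntu entry
sudo reboot                            # 4. reboot
hostname                               # 5. after reboot: should print enode1
hadoop namenode -format                # 6. format the namenode
start-dfs.sh && start-yarn.sh          # 7. start the cluster and rerun teragen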

I met the same problem, and finally, luckily, I solved it. The problem was the hostname. Just run:

    su root
    hostname master      (on the master node)
    hostname slaveXX     (on the slave nodes)

then restart the cluster, and it will be OK.

Here is what it looks like on my machine:

    (1) My problem was:

    miaofu@miaofu-Virtual-Machine:~/hadoop-2.6.4/etc/hadoop$ hadoop jar ../../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out2
    16/09/17 15:41:14 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.202.104:8032
    16/09/17 15:41:17 INFO input.FileInputFormat: Total input paths to process : 9
    16/09/17 15:41:17 INFO mapreduce.JobSubmitter: number of splits:9
    16/09/17 15:41:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474096034614_0002
    16/09/17 15:41:18 INFO impl.YarnClientImpl: Submitted application application_1474096034614_0002
    16/09/17 15:41:18 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474096034614_0002/
    16/09/17 15:41:18 INFO mapreduce.Job: Running job: job_1474096034614_0002
    16/09/17 15:41:26 INFO mapreduce.Job: Job job_1474096034614_0002 running in uber mode : false
    16/09/17 15:41:26 INFO mapreduce.Job:  map 0% reduce 0%
    16/09/17 15:41:39 INFO mapreduce.Job:  map 11% reduce 0%
    16/09/17 15:41:40 INFO mapreduce.Job:  map 22% reduce 0%
    16/09/17 15:41:41 INFO mapreduce.Job:  map 67% reduce 0%
    16/09/17 15:41:54 INFO mapreduce.Job:  map 67% reduce 22%
    16/09/17 15:44:29 INFO mapreduce.Job: Task Id : attempt_1474096034614_0002_m_000006_0, Status : FAILED
    Container launch failed for container_1474096034614_0002_01_000008 : java.net.ConnectException: Call From miaofu-Virtual-Machine/127.0.0.1 to localhost:57019 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
        at org.apache.hadoop.ipc.Client.call(Client.java:1473)
        at org.apache.hadoop.ipc.Client.call(Client.java:1400)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy36.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy37.startContainers(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:151)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    
    (2) Here is my configuration. /etc/hosts:

    127.0.0.1       localhost
    127.0.0.1 miaofu-Virtual-Machine
    192.168.202.104 master
    192.168.202.31 slave01
    192.168.202.105 slave02
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    
    (3) Set the hostname. On the master node:

    root@miaofu-Virtual-Machine:/home/miaofu# vi /etc/hostname 
    root@miaofu-Virtual-Machine:/home/miaofu# hostname
    miaofu-Virtual-Machine
    root@miaofu-Virtual-Machine:/home/miaofu# hostname master
    root@miaofu-Virtual-Machine:/home/miaofu# hostname
    master
    
    On the slave node:

    miaofu@miaofu-Virtual-Machine:~$ su root
    Password: 
    ^Z
    [3]+  Stopped               su root
    miaofu@miaofu-Virtual-Machine:~$ sudo passwd root
    [sudo] password for miaofu: 
    Enter new UNIX password: 
    Retype new UNIX password: 
    passwd: password updated successfully
    miaofu@miaofu-Virtual-Machine:~$ hostname slave02
    hostname: you must be root to change the host name
    miaofu@miaofu-Virtual-Machine:~$ su root
    Password: 
    root@miaofu-Virtual-Machine:/home/miaofu# 
    root@miaofu-Virtual-Machine:/home/miaofu# 
    root@miaofu-Virtual-Machine:/home/miaofu# 
    root@miaofu-Virtual-Machine:/home/miaofu# hostname slave02
    root@miaofu-Virtual-Machine:/home/miaofu# hostname 
    slave02
    
    (4) Restart the cluster:

    stop-yarn.sh
    stop-dfs.sh
    cd
    rm -r hadoop-2.6.4/tmp/*
    
    hadoop namenode -format
    start-dfs.sh
    start-yarn.sh 
    
    (5) Just run wordcount:

    miaofu@miaofu-Virtual-Machine:~$ hadoop fs -mkdir /in
    miaofu@miaofu-Virtual-Machine:~$ vi retry.sh 
    miaofu@miaofu-Virtual-Machine:~$ hadoop fs -put etc/hadoop/*.xml /in
    put: `etc/hadoop/*.xml': No such file or directory
    miaofu@miaofu-Virtual-Machine:~$ hadoop fs -put hadoop-2.6.4/etc/hadoop/*.xml /in
    jpmiaofu@miaofu-Virtual-Machine:~$ jps
    61591 Jps
    60601 ResourceManager
    60297 SecondaryNameNode
    60732 NodeManager
    60092 DataNode
    59927 NameNode
    miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/
    bin/         etc/         include/     lib/         LICENSE.txt  NOTICE.txt   sbin/        tmp/         
    conf.sh      home/        input/       libexec/     logs/        README.txt   share/       
    miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/
    doc/    hadoop/ 
    miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out
    ^Z
    [1]+  Stopped               hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out
    miaofu@miaofu-Virtual-Machine:~$ hadoop jar hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /in /out3
    16/09/17 16:46:24 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.202.104:8032
    16/09/17 16:46:25 INFO input.FileInputFormat: Total input paths to process : 9
    16/09/17 16:46:25 INFO mapreduce.JobSubmitter: number of splits:9
    16/09/17 16:46:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1474101888060_0001
    16/09/17 16:46:26 INFO impl.YarnClientImpl: Submitted application application_1474101888060_0001
    16/09/17 16:46:26 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1474101888060_0001/
    16/09/17 16:46:26 INFO mapreduce.Job: Running job: job_1474101888060_0001
    16/09/17 16:46:35 INFO mapreduce.Job: Job job_1474101888060_0001 running in uber mode : false
    16/09/17 16:46:35 INFO mapreduce.Job:  map 0% reduce 0%
    16/09/17 16:46:44 INFO mapreduce.Job:  map 22% reduce 0%
    16/09/17 16:46:45 INFO mapreduce.Job:  map 33% reduce 0%
    16/09/17 16:46:48 INFO mapreduce.Job:  map 67% reduce 0%
    16/09/17 16:46:49 INFO mapreduce.Job:  map 100% reduce 0%
    16/09/17 16:46:51 INFO mapreduce.Job:  map 100% reduce 100%
    16/09/17 16:46:52 INFO mapreduce.Job: Job job_1474101888060_0001 completed successfully
    16/09/17 16:46:52 INFO mapreduce.Job: Counters: 50
        File System Counters
            FILE: Number of bytes read=21875
            FILE: Number of bytes written=1110853
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=28532
            HDFS: Number of bytes written=10579
            HDFS: Number of read operations=30
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=2
        Job Counters 
            Killed map tasks=1
            Launched map tasks=9
            Launched reduce tasks=1
            Data-local map tasks=9
            Total time spent by all maps in occupied slots (ms)=84614
            Total time spent by all reduces in occupied slots (ms)=4042
            Total time spent by all map tasks (ms)=84614
            Total time spent by all reduce tasks (ms)=4042
            Total vcore-milliseconds taken by all map tasks=84614
            Total vcore-milliseconds taken by all reduce tasks=4042
            Total megabyte-milliseconds taken by all map tasks=86644736
            Total megabyte-milliseconds taken by all reduce tasks=4139008
        Map-Reduce Framework
            Map input records=796
            Map output records=2887
            Map output bytes=36776
            Map output materialized bytes=21923
            Input split bytes=915
            Combine input records=2887
            Combine output records=1265
            Reduce input groups=606
            Reduce shuffle bytes=21923
            Reduce input records=1265
            Reduce output records=606
            Spilled Records=2530
            Shuffled Maps =9
            Failed Shuffles=0
            Merged Map outputs=9
            GC time elapsed (ms)=590
            CPU time spent (ms)=6470
            Physical memory (bytes) snapshot=2690990080
            Virtual memory (bytes) snapshot=8380964864
            Total committed heap usage (bytes)=1966604288
        Shuffle Errors
            BAD_ID=0
            CONNECTION=0
            IO_ERROR=0
            WRONG_LENGTH=0
            WRONG_MAP=0
            WRONG_REDUCE=0
        File Input Format Counters 
            Bytes Read=27617
        File Output Format Counters 
            Bytes Written=10579
    

    If you have any problems, contact me at 13347217145@163.com.

    The same problem happened to me as well. I also tried my machine's actual static IP in place of 127.0.1.1. Please post your /etc/hosts, core-site.xml, hdfs-site.xml and mapred-site.xml files. One thing I forgot to tell you: when I replaced 127.0.1.1 with 127.0.0.1, it did not show any error, but it also showed no progress on the console (meaning it hung for a long time). What is the result of hadoop fs -ls /? The problem seems to be with container creation! Could you also post your yarn-site.xml.