Java Hadoop cluster hangs, stuck at Reduce > copy >


So far I have tried the solutions here for this question. However, while those solutions do result in the mapreduce job executing, it appears to run only on the name node, since the output I get is similar to the output here.

Basically, I am running a 2-node cluster with a mapreduce algorithm of my own design. The mapreduce jar executes perfectly on a single-node cluster, which makes me think something is wrong with my hadoop multi-node configuration. To set up the multi-node cluster, I followed the tutorial here.

To report what is going wrong: when I execute my program (after checking that the namenode, tasktrackers, jobtracker and datanodes are running on the respective nodes), it stalls at this line in the terminal:

INFO mapred.JobClient: map 100% reduce 0%

If I look at the logs for the task, I see

Copy failed: attempt... from the slave node

followed by a
SocketTimeoutException

Looking at the logs on my slave node (DataNode) shows that execution stops at the following line:

TaskTracker: attempt... 0.0% reduce > copy >

As the solutions in links 1 and 2 suggest, removing the various ip addresses from the
etc/hosts
file does lead to successful execution; however, I then end up with entries like those in link 4 in my slave node (DataNode) logs, for example:

INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction'
for job: job_201201301055_0381

WARN org.apache.hadoop.mapred.TaskTracker: Unknown job job_201201301055_0381
being deleted.

As a new hadoop user I am suspicious of this, though it may be perfectly normal. To me it looks as if removing the ip address from the hosts file simply stops execution on the slave node, while processing continues on the namenode (which is not beneficial at all).

To summarize:

  • Is this the expected output?
  • Is there a way to see what was executed on which node after the run?
  • Can anybody spot anything that I might have done wrong?
  • The edited hosts and configuration files for each node are added below.

    Master: etc/hosts

    127.0.0.1       localhost
    127.0.1.1       joseph-Dell-System-XPS-L702X
    
    #The following lines are for hadoop master/slave setup
    192.168.1.87    master
    192.168.1.74    slave
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    
    Slave: etc/hosts

    127.0.0.1       localhost
    127.0.1.1       joseph-Home # this line was incorrect, it was set as 7.0.1.1
    
    #the following lines are for hadoop mutli-node cluster setup
    192.168.1.87    master
    192.168.1.74    slave
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    
    Master: core-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hduser/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>

        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
            <description>The name of the default file system. A URI whose
            scheme and authority determine the FileSystem implementation. The
            uri’s scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class. The uri’s authority is used to
            determine the host, port, etc. for a filesystem.</description>
        </property>

    </configuration>
    
    Slave: core-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>

        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hduser/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>

        <property>
            <name>fs.default.name</name>
            <value>hdfs://master:54310</value>
            <description>The name of the default file system. A URI whose
            scheme and authority determine the FileSystem implementation. The
            uri’s scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class. The uri’s authority is used to
            determine the host, port, etc. for a filesystem.</description>
        </property>

    </configuration>
    
    Master: hdfs-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
            <description>Default block replication.
            The actual number of replications can be specified when the file is created.
            The default is used if replication is not specified in create time.
            </description>
        </property>
    </configuration>
    
    Slave: hdfs-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
            <description>Default block replication.
            The actual number of replications can be specified when the file is created.
            The default is used if replication is not specified in create time.
            </description>
        </property>
    </configuration>
    
    Master: mapred-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>master:54311</value>
            <description>The host and port that the MapReduce job tracker runs
            at. If “local”, then jobs are run in-process as a single map
            and reduce task.
            </description>
        </property>
    </configuration>
    
    Slave: mapred-site.xml

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>master:54311</value>
            <description>The host and port that the MapReduce job tracker runs
            at. If “local”, then jobs are run in-process as a single map
            and reduce task.
            </description>
        </property>
    </configuration>
    
    The error was in etc/hosts:

    During the faulty runs, the slave's etc/hosts file looked like this:

    127.0.0.1       localhost
    7.0.1.1       joseph-Home # THIS LINE IS INCORRECT, IT SHOULD BE 127.0.1.1
    
    #the following lines are for hadoop mutli-node cluster setup
    192.168.1.87    master
    192.168.1.74    slave
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    
    As you may have spotted, the ip address of this machine, joseph-Home, was configured incorrectly: it was set to 7.0.1.1 when it should have been 127.0.1.1. Changing line 2 of the slave's etc/hosts file to
    127.0.1.1    joseph-Home
    fixed the problem, and my logs now appear on the slave node as expected.

    The new etc/hosts file:

    127.0.0.1       localhost
    127.0.1.1       joseph-Home # THIS LINE IS NOW CORRECT
    
    #the following lines are for hadoop mutli-node cluster setup
    192.168.1.87    master
    192.168.1.74    slave
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
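To catch this kind of typo before starting the daemons, one can sanity-check the hosts file. A minimal shell sketch (the file path, hostnames, and helper name are illustrative, not part of Hadoop):

```shell
# Hypothetical check: confirm a hostname in an /etc/hosts-style file maps
# to the address we expect before bringing the cluster up.
lookup_hosts_entry() {
    # $1 = hosts file, $2 = hostname; prints the address of the first match
    awk -v h="$2" '!/^#/ && $2 == h { print $1; exit }' "$1"
}

# Reproduce the faulty file from above to show the check firing.
printf '127.0.0.1 localhost\n7.0.1.1 joseph-Home\n' > /tmp/hosts.test
addr=$(lookup_hosts_entry /tmp/hosts.test joseph-Home)
if [ "$addr" != "127.0.1.1" ]; then
    echo "joseph-Home resolves to $addr, expected 127.0.1.1"
fi
```

Running the same check against the corrected file prints nothing, which makes it easy to script into a pre-start step.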
    

    A tested solution is to add the following property to hadoop-env.sh and restart all the hadoop cluster services:

    hadoop-env.sh


    export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"
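One pitfall when copying this line from a web page is that curly quotes break the script when hadoop-env.sh is sourced; plain ASCII quotes and a space before $HADOOP_CLIENT_OPTS are needed so the heap flag is prepended to any existing options. A quick, Hadoop-free sketch of how the line composes:

```shell
# Simulate sourcing the hadoop-env.sh line and confirm the heap flag is
# prepended to options that were already set ("-Dexample=1" is made up).
HADOOP_CLIENT_OPTS="-Dexample=1"
export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"
echo "$HADOOP_CLIENT_OPTS"   # prints: -Xmx2048m -Dexample=1
```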

    I ran into this problem today as well. In my case the disk of one node in the cluster was full, so hadoop could not write the log files to the local disk. A possible solution is therefore to delete some unused files on the local disk. Hope it helps.
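A hedged sketch of such a disk check (the 90% threshold and the df-style percentage input are assumptions, not anything Hadoop prescribes):

```shell
# Decide whether a partition is too full for the tasktracker to keep
# writing logs, given a df-style usage percentage such as "95%".
disk_too_full() {
    pct=$(printf '%s' "$1" | tr -dc '0-9')   # strip the % sign
    [ "${pct:-0}" -ge 90 ]                   # 90% is an arbitrary threshold
}

if disk_too_full "95%"; then
    echo "disk nearly full: clear unused local files before re-running the job"
fi
```

In practice one would feed it the output of `df` for the partition holding hadoop.tmp.dir and the log directory on each node.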

    I hope you have disabled the firewall on all of the machines. Ah, forgot to mention that!