DataNode not starting on a Hadoop multi-node cluster

I deleted the contents of the Hadoop tmp directory and the current folder from the NameNode directory, then formatted the NameNode, but I still get an exception: org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException java.net.BindException: Port in use: localhost:0

My configuration is as follows:

core-site.xml

<configuration>
    <property>        
        <name>fs.default.name</name>         
        <value>hdfs://master:9000/</value>         
        <description>NameNode URI</description>     
    </property>
</configuration>
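
Note: fs.default.name is the deprecated Hadoop 1.x name for this key; it still works on Hadoop 2.x, but the current name is fs.defaultFS. A minimal equivalent, keeping the same master:9000 NameNode URI, would be:

    <!-- fs.defaultFS is the Hadoop 2.x replacement for the deprecated fs.default.name -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000/</value>
    </property>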

hdfs-site.xml

<configuration>
    <property>         
        <name>dfs.namenode.name.dir</name>         
        <value>file:///home/hduser/hdfs/namenode</value>        
        <description>NameNode directory for namespace and transaction logs storage.</description>     
    </property>
    <property>         
        <name>dfs.datanode.data.dir</name>         
        <value>file:///home/hduser/hdfs/datanode</value>        
        <description>DataNode directory</description>
    </property>           
    <property>         
        <name>dfs.replication</name>         
        <value>2</value>     
    </property>     
</configuration>
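
Note: the DataNode's HTTP info server (the component that fails to start in the log below) listens on dfs.datanode.http.address, which is left unset here. A sketch that pins it explicitly, using the Hadoop 2.x default of 0.0.0.0:50075, just to make the binding visible:

    <!-- 0.0.0.0:50075 is the Hadoop 2.x default; setting it explicitly only makes the binding explicit -->
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:50075</value>
    </property>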

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
            <name>mapreduce.jobhistory.address</name>
            <value>master:10020</value>
    </property>
</configuration>
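
Note: mapreduce.jobhistory.address only covers the JobHistory RPC endpoint; the web UI has a separate key. An optional sketch, assuming the default web port 19888 on the same host:

    <!-- 19888 is the Hadoop 2.x default JobHistory web UI port; this key is optional -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>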

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8050</value>
    </property>
</configuration>
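
Note: on Hadoop 2.x the three ResourceManager addresses above can usually be collapsed into a single yarn.resourcemanager.hostname entry, from which the per-service addresses are derived. The derived ports are the defaults (8030 scheduler, 8031 resource tracker, 8032 RM address), not the 8025/8050 used here, so this sketch only applies if those non-default ports are not required:

    <!-- all yarn.resourcemanager.*.address keys then default to this host with standard ports -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>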

DataNode log

2017-01-20 16:27:21,927 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-20 16:27:23,346 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-01-20 16:27:23,444 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-01-20 16:27:23,448 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2017-01-20 16:27:23,450 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2017-01-20 16:27:23,461 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-01-20 16:27:23,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-01-20 16:27:23,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-01-20 16:27:23,650 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-01-20 16:27:23,663 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-01-20 16:27:23,673 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-01-20 16:27:23,677 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-01-20 16:27:23,689 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-01-20 16:27:23,690 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-01-20 16:27:23,716 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: localhost:0
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
    ... 10 more
2017-01-20 16:27:23,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutdown complete.
2017-01-20 16:27:23,728 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.BindException: Port in use: localhost:0
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:919)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:856)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:104)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:760)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1112)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2374)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2261)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2308)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2485)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2509)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:914)
    ... 10 more
2017-01-20 16:27:23,730 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-01-20 16:27:23,735 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/10.0.1.1
************************************************************/ 
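
Note on the trace: the root cause is "Cannot assign requested address" while binding localhost:0, i.e. an ephemeral port on localhost. Since port 0 means "any free port", this is not a port conflict; it usually means the name localhost does not resolve to a local interface. One thing worth checking is /etc/hosts; a plausible layout, assuming the master IP 10.0.1.1 from the shutdown message, is:

    127.0.0.1   localhost
    10.0.1.1    master

with localhost mapped to 127.0.0.1 and not commented out or remapped to the cluster IP.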