Datanode execution error when running Hadoop for the first time on Windows 10

I am trying to run Hadoop 3.1.1 on my Windows 10 machine. I modified all the files:

  • hdfs-site.xml
  • mapred-site.xml
  • core-site.xml
  • web-site.xml
Then, I executed the following command:

C:\hadoop-3.1.1\bin> hdfs namenode -format
The format ran correctly, so I pointed to C:\hadoop-3.1.1\sbin to execute the following command:

C:\hadoop-3.1.1\sbin> start-dfs.cmd
The command prompt opens two new windows: one for the datanode and another for the namenode.

The namenode window keeps running:

2018-09-02 21:37:06,232 INFO ipc.Server: IPC Server Responder: starting
2018-09-02 21:37:06,232 INFO ipc.Server: IPC Server listener on 9000: starting
2018-09-02 21:37:06,247 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
2018-09-02 21:37:06,247 INFO namenode.FSNamesystem: Starting services required for active state
2018-09-02 21:37:06,247 INFO namenode.FSDirectory: Initializing quota with 4 thread(s)
2018-09-02 21:37:06,247 INFO namenode.FSDirectory: Quota initialization completed in 3 milliseconds
name space=1
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2018-09-02 21:37:06,279 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
while the datanode window gives the following error:

ERROR: datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:220)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2762)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2677)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2719)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2863)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2887)
2018-09-02 21:37:04,250 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2018-09-02 21:37:04,250 INFO datanode.DataNode: SHUTDOWN_MSG:

Then the datanode shuts down! I have tried several ways to get past this error, but this is my first time installing Hadoop on Windows and I don't know what to do next.

I had the same issue, and what worked for me was editing hdfs-site.xml as follows:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/Hadoop/hadoop-3.1.2/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/C:/Hadoop/hadoop-3.1.2/data/datanode</value>
  </property>
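
After changing these directories, the namenode generally needs to be re-formatted before DFS is restarted, since it will not start against an empty name directory; note that re-formatting erases any existing HDFS metadata. Re-using the commands from the question (assuming the C:\Hadoop\hadoop-3.1.2 install path from this answer):

C:\Hadoop\hadoop-3.1.2\bin> hdfs namenode -format
C:\Hadoop\hadoop-3.1.2\sbin> start-dfs.cmd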


It started working for me after I removed the file-system reference for the datanode in hdfs-site.xml. I found that this let the software create and initialise its own datanode directory, which then popped up in sbin. After that, I could use hdfs without a hitch. Here is what worked for me with Hadoop 3.1.3 on Windows:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///C:/Users/myusername/hadoop/hadoop-3.1.3/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>datanode</value>
  </property>
</configuration>
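
For what it's worth, the bare relative value appears to be the key difference: with no file:/// drive-letter URI for the disk checker to validate, the datanode simply creates the directory relative to its working directory, which seems to be why it pops up under sbin as described above.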

Cheers,
MV

Try changing the fs.default.name property in core-site.xml from localhost to 0.0.0.0.
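
For reference, a minimal sketch of that property in core-site.xml, assuming the default namenode port 9000 seen in the log above (fs.default.name is the older alias of fs.defaultFS):

  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9000</value>
  </property>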