In Hadoop 3.1.0, the namenode works but the datanode does not

In Hadoop 3.1.0 the namenode is running, but the datanode is not and shows the following message:

STARTUP_MSG:   build = https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d; compiled by 'centos' on 2018-03-30T00:00Z
STARTUP_MSG:   java = 1.8.0_231
************************************************************/
2019-11-13 20:58:38,398 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/C:/Appliacation/hadoop-3.1.0/data/datanode
2019-11-13 20:58:38,436 WARN checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/C:/Appliacation/hadoop-3.1.0/data/datanode
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:455)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:796)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:710)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:678)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:191)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:98)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:239)
        at org.apache.hadoop.hdfs.server.datanode.StorageLocation.check(StorageLocation.java:52)
        at org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$1.call(ThrottledAsyncChecker.java:142)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2019-11-13 20:58:38,436 ERROR datanode.DataNode: Exception in secureMain
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
        at org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker.check(StorageLocationChecker.java:220)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2762)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2677)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2719)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2863)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2887)
2019-11-13 20:58:38,436 INFO util.ExitUtil: Exiting with status 1: org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 0, volumes configured: 1, volumes failed: 1, volume failures tolerated: 0
2019-11-13 20:58:38,451 INFO datanode.DataNode: SHUTDOWN_MSG:

I had the same problem. I had to replace some of the binaries in the bin folder (on Windows this typically means winutils.exe and hadoop.dll built for your Hadoop version), and I also made some changes to the configuration files, as shown below (a small verification sketch follows the steps):

1. Edit file core-site.xml
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://0.0.0.0:19000</value>
        </property>
    </configuration>
2. Edit file hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///C:/hadoop-3.1.0/data/namenode</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:///C:/hadoop-3.1.0/data/datanode</value>
        </property>
    </configuration>
3. Edit file workers
    localhost
4. Edit file mapred-site.xml
    <configuration>
        <property>
            <name>mapreduce.job.user.name</name>
            <value>%USERNAME%</value>
        </property>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>yarn.apps.stagingDir</name>
            <value>/user/%USERNAME%/staging</value>
        </property>
        <property>
            <name>mapreduce.jobtracker.address</name>
            <value>local</value>
        </property>
    </configuration>

5. Edit file yarn-site.xml
    <configuration>
        <property>
            <name>yarn.server.resourcemanager.address</name>
            <value>0.0.0.0:8020</value>
        </property>
        <property>
            <name>yarn.server.resourcemanager.application.expiry.interval</name>
            <value>60000</value>
        </property>
        <property>
            <name>yarn.server.nodemanager.address</name>
            <value>0.0.0.0:45454</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.server.nodemanager.remote-app-log-dir</name>
            <value>/app-logs</value>
        </property>
        <property>
            <name>yarn.nodemanager.log-dirs</name>
            <value>/dep/logs/userlogs</value>
        </property>
        <property>
            <name>yarn.server.mapreduce-appmanager.attempt-listener.bindAddress</name>
            <value>0.0.0.0</value>
        </property>
        <property>
            <name>yarn.server.mapreduce-appmanager.client-service.bindAddress</name>
            <value>0.0.0.0</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.log-aggregation.retain-seconds</name>
            <value>-1</value>
        </property>
        <property>
            <name>yarn.application.classpath</name>
            <value>%HADOOP_CONF_DIR%,%HADOOP_COMMON_HOME%/share/hadoop/common/*,%HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*</value>
        </property>
    </configuration>
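
Since the stack trace shows NativeIO$POSIX.stat failing to link, it is worth confirming that Hadoop's Windows native library (hadoop.dll) is actually found at runtime. Below is a minimal diagnostic sketch, assuming the Hadoop jars are on the classpath; the class name NativeCheck is just an illustration:

    import org.apache.hadoop.io.nativeio.NativeIO;
    import org.apache.hadoop.util.NativeCodeLoader;

    // Prints whether Hadoop's native library (hadoop.dll on Windows) was
    // loaded. If "native code loaded" is false, the UnsatisfiedLinkError
    // from NativeIO$POSIX.stat above is the expected symptom.
    public class NativeCheck {
        public static void main(String[] args) {
            System.out.println("java.library.path = "
                    + System.getProperty("java.library.path"));
            System.out.println("native code loaded = "
                    + NativeCodeLoader.isNativeCodeLoaded());
            System.out.println("NativeIO available = "
                    + NativeIO.isAvailable());
        }
    }

If the namenode and datanode directories configured above are new, the namenode will also need to be formatted (hdfs namenode -format) before restarting the daemons with start-dfs.cmd.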

Is your disk working properly?