Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException) on Hadoop 3.1.3

I am trying to run a MapReduce job, but Hadoop 3.1.3 throws an error.

hadoop jar WordCount.jar WordcountDemo.WordCount  /mapwork/Mapwork /r_out
Error:

2020-04-04 19:59:11,379 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2020-04-04 19:59:12,499 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2020-04-04 19:59:12,569 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007
2020-04-04 19:59:12,727 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007/job.jar could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545)
        at org.apache.hadoop.ipc.Client.call(Client.java:1491)
        at org.apache.hadoop.ipc.Client.call(Client.java:1388)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1866)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-04-04 19:59:12,734 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/tejashri/.staging/job_1586009643433_0007/job.jar could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2205)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545)
        at org.apache.hadoop.ipc.Client.call(Client.java:1491)
        at org.apache.hadoop.ipc.Client.call(Client.java:1388)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1866)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
Update (from comments):

core-site.xml:

<configuration> 
<property> 
<name>fs.default.name</name> 
<value>hdfs://localhost:9000</value> 
</property> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>C:\hadoop\hdfstmp</value> 
</property> 
</configuration> 
hdfs-site.xml:

<configuration> 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
</property> 
<property> 
<name>dfs.namenode.name.dir</name> 
<value>C:\hadoop\data\namenode</value> 
</property> 
<property> 
<name>dfs.datanode.data.dir</name> 
<value>C:\hadoop\data\datanode</value> 
</property> 
<property> 
<name>dfs.datanode.failed.volumes.tolerated</name> 
<value>0</value> 
</property> 
</configuration>
Output of jps:

16832 NodeManager 
5556 ResourceManager 
18280 NameNode 
11708 Jps
Datanode error log:

2020-04-04 21:42:25,150 WARN common.Storage: Failed to add storage directory [DISK]file:/C:/hadoop/data/datanode
java.io.IOException: Incompatible clusterIDs in C:\hadoop\data\datanode: namenode clusterID = CID-199fd5c5-1f1d-4c44-9e39-80995486695e; datanode clusterID = CID-16d0af22-57e1-4531-a5c8-4bf3eefd351d
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:744)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:559)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1743)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1679)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
        at java.lang.Thread.run(Thread.java:748)
2020-04-04 21:42:25,156 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: All specified directories have failed to load.
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:560)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1743)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1679)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
        at java.lang.Thread.run(Thread.java:748)
2020-04-04 21:42:25,158 WARN datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474) service to localhost/127.0.0.1:9000
2020-04-04 21:42:25,261 INFO datanode.DataNode: Removed Block pool <registering> (Datanode Uuid 7578b7ba-c42a-476b-abc2-2088b15b3474)
2020-04-04 21:42:27,274 WARN datanode.DataNode: Exiting Datanode

The MapReduce job failed because it could not write to HDFS:
"There are 0 datanode(s) running and 0 node(s) are excluded in this operation."

As the datanode log shows, the datanode daemon could not register itself with the HDFS cluster because of an incompatible clusterID.
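You can confirm this from the client side with the standard dfsadmin report, which lists the live datanodes (on this cluster it would report zero):

hdfs dfsadmin -report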

When the namenode is formatted (during installation and setup), a clusterID is generated, and that clusterID is stored in the VERSION file of each daemon when it initializes. The clusterID acts as an identifier for the datanodes, allowing them to rejoin the cluster whenever they are stopped and started.

Incompatible clusterIDs arise when the namenode is formatted on a live cluster and the other daemons are not reinitialized.
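For reference, the clusterID is kept in a small VERSION file under each storage directory, e.g. C:\hadoop\data\datanode\current\VERSION for the datanode. A rough sketch of its contents, with the clusterID and datanodeUuid taken from the log above and the remaining fields illustrative:

#Sat Apr 04 21:42:25 2020
storageID=DS-xxxxxxxx
clusterID=CID-16d0af22-57e1-4531-a5c8-4bf3eefd351d
cTime=0
datanodeUuid=7578b7ba-c42a-476b-abc2-2088b15b3474
storageType=DATA_NODE
layoutVersion=-57

The namenode keeps its own copy under C:\hadoop\data\namenode\current\VERSION; here it held the other ID (CID-199fd5c5-...), which is exactly the mismatch the datanode log reports.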

To bring the cluster back to a healthy state:

  • Stop the cluster
  • Delete the following directories:
    C:\hadoop\hdfstmp
    C:\hadoop\data\namenode
    C:\hadoop\data\datanode
  • Format the namenode
  • Start the cluster
  • Re-copy the input data required by the MapReduce job and run the job (a command sketch follows this list)
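A minimal sketch of these steps from a Windows command prompt, assuming the directory layout from the posted config, that Hadoop's sbin and bin directories are on PATH, and a placeholder local input path:

:: 1. Stop the cluster
stop-yarn.cmd
stop-dfs.cmd

:: 2. Remove the stale storage directories (this destroys all HDFS data)
rmdir /s /q C:\hadoop\hdfstmp
rmdir /s /q C:\hadoop\data\namenode
rmdir /s /q C:\hadoop\data\datanode

:: 3. Reformat the namenode, which generates a fresh clusterID
hdfs namenode -format

:: 4. Start the cluster again; jps should now list a DataNode
start-dfs.cmd
start-yarn.cmd

:: 5. Re-copy the job input and rerun (C:\path\to\input is a placeholder)
hdfs dfs -mkdir -p /mapwork/Mapwork
hdfs dfs -put C:\path\to\input\* /mapwork/Mapwork
hadoop jar WordCount.jar WordcountDemo.WordCount /mapwork/Mapwork /r_out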

Comments:

  • [Answerer] The datanode is not running. Please update your post with your cluster setup, the configuration files and the output of the jps command.
  • [Asker] jps shows: 16832 NodeManager, 5556 ResourceManager, 18280 NameNode, 11708 Jps.
  • [Answerer] You can see that the datanode and secondary namenode daemons are not running. Could you post your hdfs-site.xml and core-site.xml?
  • [Asker] Added above, together with the full datanode log.
  • [Asker] Thanks, the datanode is working now. I also want to run the jar file, but it keeps saying the input path is invalid. What should I do?
  • [Answerer] As I said, you have to re-copy the data to that location.
  • [Asker] In hdfs-site.xml?
  • [Answerer] No. Copy the input files that the wordcount program needs to /mapwork/Mapwork. You probably did this earlier; it is the same thing. When the datanode went haywire, you lost them, so now you have to redo it.
  • [Answerer] If you have copied the input files to /work, then your run command should be: hadoop jar /Desktop/Hadoopproject/WordCount.jar WordcountDemo.WordCount /work /out
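Before rerunning, it is worth confirming that the input files actually landed in HDFS, e.g. (path taken from the comment above):

hdfs dfs -ls /work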