Java: remote submission from Eclipse @ Windows to HDP 2.2 @ Linux/CentOS @ OracleVM (Hortonworks) fails

Tags: java, hadoop, hortonworks-data-platform

I am running HDP 2.2 in pseudo-distributed mode on CentOS inside an Oracle VM (Hortonworks sandbox) on my local Windows 7 machine. I wanted to test remote submission, so I created a WordCount example in Eclipse, running outside the VM, and submitted it as shown below (the example was picked from elsewhere on the web).
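
(The exact submission code did not survive in this copy of the post; the following is a minimal sketch of the kind of WordCount driver typically used for this, assuming the sandbox defaults: hostname sandbox.hortonworks.com, namenode RPC on port 8020, and the HDP ResourceManager port 8050. The jar path is hypothetical.)

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the sandbox instead of a local cluster.
        conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "sandbox.hortonworks.com:8050");
        // Required when submitting from a Windows client to a Linux cluster.
        conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf, "word count");
        job.setJar("wordcount.jar"); // hypothetical path to the exported jar
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }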

The following exception was returned in Eclipse (and also shows up in the Namenode log at sandbox.hortonworks.com:50070/logs/hadoop-hdfs-namenode-sandbox.hortonworks.com.log).

Namenode log:

2015-12-07 16:21:14,631 INFO  blockmanagement.BlockManager (BlockManager.java:setReplication(2710)) - Increasing replication from 1 to 10 for /user/root/.staging/job_1449505005810_0001/job.split
2015-12-07 16:21:14,690 INFO  hdfs.StateChange (FSNamesystem.java:saveAllocatedBlock(3663)) - BLOCK* allocateBlock: /user/root/.staging/job_1449505005810_0001/job.split. BP-1487918654-10.0.2.15-1418756667447 blk_1073742153_1339{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b183a7df-9498-4b2c-87f5-4bfb2cf40ca3:NORMAL:10.0.2.15:50010|RBW]]}
2015-12-07 16:21:35,768 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-12-07 16:21:35,769 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2015-12-07 16:21:35,770 WARN  protocol.BlockStoragePolicy (BlockStoragePolicy.java:chooseStorageTypes(160)) - Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2015-12-07 16:21:35,770 WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(383)) - Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2015-12-07 16:21:35,771 INFO  ipc.Server (Server.java:run(2060)) - IPC Server handler 91 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.0.2.2:54842 Call#25 Retry#0
java.io.IOException: File /user/root/.staging/job_1449505005810_0001/job.split could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
Eclipse console:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/.staging/job_1449505005810_0002/job.split could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
Note the "WARN" statements in the Namenode log. Based on these, I enabled DEBUG logging on "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy" and re-ran the job, which produced the following exception in the namenode log just before the original one.
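
(For reference, a hedged sketch of how that DEBUG level can be switched on, assuming the sandbox keeps its log4j configuration in the usual /etc/hadoop/conf/log4j.properties; the namenode needs a restart afterwards:)

    # Enable DEBUG for the block placement policy on the namenode
    log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG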

Namenode log (on job resubmission):

2015-12-07 16:22:17,137 INFO  blockmanagement.BlockManager (BlockManager.java:setReplication(2710)) - Increasing replication from 1 to 10 for /user/root/.staging/job_1449505005810_0002/job.split
2015-12-07 16:22:17,175 INFO  hdfs.StateChange (FSNamesystem.java:saveAllocatedBlock(3663)) - BLOCK* allocateBlock: /user/root/.staging/job_1449505005810_0002/job.split. BP-1487918654-10.0.2.15-1418756667447 blk_1073742154_1340{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b183a7df-9498-4b2c-87f5-4bfb2cf40ca3:NORMAL:10.0.2.15:50010|RBW]]}
2015-12-07 16:22:38,254 DEBUG blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseLocalRack(530)) - Failed to choose from local rack (location = /default-rack); the second replica is not found, retry choosing ramdomly
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException: 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:606)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:512)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:472)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:339)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:214)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:111)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:126)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1545)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3203)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

I have tried all the Stack Overflow solutions for the exception "could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation", but could not resolve it:

  • Tried formatting the namenode

  • Tried placing all the configuration files from "/usr/hdp/2.2.0.0-2041/hadoop/conf" on the CentOS guest into a local Windows folder and including that folder on the Eclipse classpath

  • Tried opening all the ports needed to make the VM reachable from Eclipse (including 50010); see the client-side sketch after this list

  • Tried placing the slaves and masters files in /etc/hadoop
  • And many others
  • On inspecting the code in BlockPlacementPolicyDefault, I feel the error is due to faulty logic at line 715, which always returns 0 because the local node has already been added to the excludedNodes set:

    703  int addIfIsGoodTarget(DatanodeStorageInfo storage,
    704      Set<Node> excludedNodes,
    705      long blockSize,
    706      int maxNodesPerRack,
    707      boolean considerLoad,
    708      List<DatanodeStorageInfo> results,                           
    709      boolean avoidStaleNodes,
    710      StorageType storageType) {
    711    if (isGoodTarget(storage, blockSize, maxNodesPerRack, considerLoad,
    712        results, avoidStaleNodes, storageType)) {
    713      results.add(storage);
    714      // add node and related nodes to excludedNode
    715      return addToExcludedNodes(storage.getDatanodeDescriptor(), excludedNodes);
    716    } else { 
    717      return -1;
    718    }
    719  }
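
As referenced in the ports bullet above, here is a hedged client-side sketch (my addition, not among the original attempts): in a NAT'ed VirtualBox setup the datanode registers under the VM-internal address 10.0.2.15, which the Windows host cannot reach, so the sandbox's only datanode ends up in the excluded set. One commonly suggested workaround is to have the DFS client address datanodes by hostname instead:

    import org.apache.hadoop.conf.Configuration;

    // Fragment: sandbox.hortonworks.com must resolve (e.g. via the Windows
    // hosts file) to an address from which the datanode port 50010 is reachable.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
    conf.setBoolean("dfs.client.use.datanode.hostname", true);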
    
However, it could be that I am over-thinking the code and this really is a configuration issue with the parameters passed into the methods below (some of which must come from the HDFS configuration files):
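
For orientation, my hedged reading of where those parameters come from in a 2.6.0/HDP 2.2 setup (property names worth double-checking against the shipped *-site.xml files):

    // numOfReplicas for job.split <- "mapreduce.client.submit.file.replication"
    //   (defaults to 10, which matches "Increasing replication from 1 to 10"
    //    in the namenode log above)
    // blocksize       <- "dfs.blocksize"
    // considerLoad    <- "dfs.namenode.replication.considerLoad"
    // avoidStaleNodes <- "dfs.namenode.avoid.write.stale.datanode"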


    
    681    if (numOfReplicas>0) {
    682      String detail = enableDebugLogging;
    683      if (LOG.isDebugEnabled()) {
    684        if (badTarget && builder != null) {
    685          detail = builder.toString();
    686          builder.setLength(0); 
    687        } else {
    688          detail = ""; 
    689        }
    690      }
    691      throw new NotEnoughReplicasException(detail);
    692    }
    
    652            final int newExcludedNodes = addIfIsGoodTarget(storages[i],
    653                excludedNodes, blocksize, maxNodesPerRack, considerLoad, results,
    654                avoidStaleNodes, type);

The rack awareness information in the Apache Hadoop documentation (for the current release, and probably applicable to 2.6.0/HDP 2.2 as well) points out a possible block placement problem with the default rack: "If neither topology.script.file.name nor topology.node.switch.mapping.impl is set, the rack id '/default-rack' is returned for any passed IP address. While this behavior appears desirable, it can cause issues with HDFS block replication…". The above might mean that the Hortonworks sandbox cannot be used for remote job placement at all… awaiting anyone's comments. BTW, I can reproduce the same exception with the HDP 2.3.2 sandbox (OVM) as well.
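
Regarding the rack-awareness quote above, a hedged illustration of the setting it refers to, using the legacy key named in the quote (Hadoop 2.x still accepts it via its deprecation table; the current name is net.topology.script.file.name). The script path is made up for the example; a script that echoes a fixed non-default rack such as /rack1 for every argument would be enough to avoid the /default-rack behavior described:

    <!-- core-site.xml; the script path is hypothetical -->
    <property>
      <name>topology.script.file.name</name>
      <value>/etc/hadoop/conf/topology_script.sh</value>
    </property>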

Comments:

  • Sorry to revive this, but did you ever solve it?
  • I had the same problem and solved it with the HDP 2.6 sandbox.