Hadoop distcp error


I am trying to run Hadoop DistCp between two Kerberos-enabled Hadoop clusters (version: Hadoop 2.0.0-cdh4.3.0).

When I run the command "hadoop distcp hdfs://cluster1:8020/user/test.txt hdfs://cluster2:8020/user" on the destination cluster, it works fine. However, when I run the same command on the source cluster, I get the following error:
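For reference, a typical invocation on a kerberized cluster first obtains a Kerberos ticket and then runs the copy. This is only a sketch; the principal and realm below are placeholders, while the hosts and paths are the ones from the command above:

```shell
# Acquire a Kerberos ticket for the user running the copy
# (the principal/realm is a placeholder -- substitute your own).
kinit user@EXAMPLE.REALM

# Copy a single file from the source cluster to the destination cluster.
hadoop distcp hdfs://cluster1:8020/user/test.txt hdfs://cluster2:8020/user
```

Without a valid ticket, any RPC to a secured NameNode will fail during authentication, so it is worth confirming the ticket with `klist` before digging into distcp itself.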

Copy failed: java.io.IOException: Failed on local exception: java.io.IOException: Response is null.; Host Details : local host is: "cluster1/10.96.82.149"; destination host is: "cluster2":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
        at org.apache.hadoop.ipc.Client.call(Client.java:1229)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at $Proxy9.getDelegationToken(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at $Proxy9.getDelegationToken(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:783)
        at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:783)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:868)
        at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:509)
        at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
        at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
        at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1046)
        at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Caused by: java.io.IOException: Response is null.
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:941)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:836)
When I try "hadoop distcp hftp://cluster1:50070/user/test.txt hdfs://cluster2:8020/user" on either the source or the destination cluster, I get the following error:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): Security enabled but user not authenticated by filter
        at org.apache.hadoop.ipc.RemoteException.valueOf(RemoteException.java:97)
        at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.startElement(HftpFileSystem.java:425)
        at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501)
        at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179)
        at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:377)
        at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:626)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3104)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:922)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
        at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
        at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
        at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
        at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.fetchList(HftpFileSystem.java:464)
        at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.getFileStatus(HftpFileSystem.java:475)
        at org.apache.hadoop.hdfs.HftpFileSystem.getFileStatus(HftpFileSystem.java:504)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1378)
        at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:636)
        at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)

Please help me with this. I want to run the copy from the source cluster.

Are you using high-availability NameNodes?

I have run into problems with distcp when using high availability. To work around them, I specified only the hostname of the active NameNode instead of the cluster's logical nameservice name.
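A sketch of that workaround (the hostnames below are hypothetical; substitute your own active NameNode):

```shell
# Instead of the HA logical nameservice, e.g.:
#   hadoop distcp hdfs://nameservice1/user/test.txt hdfs://cluster2:8020/user
# point distcp directly at the active NameNode's RPC host and port:
hadoop distcp hdfs://active-nn.cluster1.example.com:8020/user/test.txt \
    hdfs://cluster2:8020/user
```

You can check which NameNode is currently active with `hdfs haadmin -getServiceState <nn-id>`, where `<nn-id>` is one of the NameNode IDs configured under `dfs.ha.namenodes.<nameservice>`. Note that pinning to the active host bypasses automatic failover, so the command must be updated if a failover occurs.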