Java: uploading a file from one server to HDFS on another server

I want to upload a file from an external Windows server to HDFS on a different server. HDFS is part of a Cloudera Docker container on that server.

I connect to HDFS from the Windows server as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://%HDFS_SERVER_IP%:8020");
FileSystem fs = FileSystem.get(conf);
When I call fs.copyFromLocalFile(localFilePath, hdfsFilePath), it throws the exception below and creates an empty file in HDFS:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/test/test.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
There also seems to be a problem on the datanode side; the following is copied from its log:

Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

I formatted the datanodes and restarted HDFS, but I still cannot upload the file. Other operations such as reading and writing files work with this configuration, and transferring files works when the local filesystem and HDFS are on the same server.

The servers are behind a proxy, and I configured the proxy environment for the HDFS Docker container. How can I transfer files between different servers using the HDFS Java API?
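For reference, the upload path being attempted boils down to the following (a minimal sketch; %HDFS_SERVER_IP% is a placeholder and both file paths are examples, not the real ones):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder address; requires a reachable NameNode on port 8020
        conf.set("fs.defaultFS", "hdfs://%HDFS_SERVER_IP%:8020");
        // FileSystem implements Closeable, so try-with-resources cleans up the connection
        try (FileSystem fs = FileSystem.get(conf)) {
            Path localFilePath = new Path("C:/tmp/test.txt");    // example local path
            Path hdfsFilePath = new Path("/user/test/test.txt"); // example HDFS path
            fs.copyFromLocalFile(localFilePath, hdfsFilePath);
        }
    }
}
```

This only works end-to-end when the client can reach both the NameNode RPC port and the DataNode data-transfer port (50010 here), which is exactly what fails in this setup.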

Update 1:

hdfs dfsadmin -report:

[root@quickstart conf]# hdfs dfsadmin -report
Configured Capacity: 211243687936 (196.74 GB)
Present Capacity: 78773199014 (73.36 GB)
DFS Remaining: 77924307110 (72.57 GB)
DFS Used: 848891904 (809.57 MB)
DFS Used%: 1.08%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (1):

Name: XXXX:50010 (quickstart.cloudera)
Hostname: quickstart.cloudera
Decommission Status : Normal
Configured Capacity: 211243687936 (196.74 GB)
DFS Used: 848891904 (809.57 MB)
Non DFS Used: 132470488922 (123.37 GB)
DFS Remaining: 77924307110 (72.57 GB)
DFS Used%: 0.40%
DFS Remaining%: 36.89%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Wed Apr 05 07:15:00 UTC 2017
yarn node -list -all:

17/04/05 07:14:02 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
Total Nodes:1
         Node-Id             Node-State Node-Http-Address       Number-of-Running-Containers
quickstart.cloudera:37449               RUNNING quickstart.cloudera:8042                                   0
core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://quickstart.cloudera:8020</value>
  </property>

  <!-- OOZIE proxy user setting -->
  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>*</value>
  </property>

  <!-- HTTPFS proxy user setting -->
  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>

  <!-- Llama proxy user setting -->
  <property>
    <name>hadoop.proxyuser.llama.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.llama.groups</name>
    <value>*</value>
  </property>

  <!-- Hue proxy user setting -->
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>

</configuration>
hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed.  -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
     <name>dfs.safemode.min.datanodes</name>
     <value>1</value>
  </property>
  <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
  </property>
  <property>
     <name>dfs.permissions</name>
     <value>false</value>
  </property>
  <property>
     <name>dfs.safemode.min.datanodes</name>
     <value>1</value>
  </property>
  <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
  </property>
  <property>
     <name>hadoop.tmp.dir</name>
     <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
     <name>dfs.namenode.checkpoint.dir</name>
     <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
     <name>dfs.datanode.data.dir</name>
     <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>

  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>0.0.0.0:8022</value>
  </property>
  <property>
    <name>dfs.https.address</name>
    <value>0.0.0.0:50470</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:50010</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:50475</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>0.0.0.0:50090</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>0.0.0.0:50495</value>
  </property>

  <!-- Impala configuration -->
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout.millis</name>
    <value>10000</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hadoop-hdfs/dn._PORT</value>
  </property>
</configuration>


There is a port conflict between the fs.defaultFS property in core-site.xml and dfs.namenode.servicerpc-address in hdfs-site.xml.

Change the latter in hdfs-site.xml and restart the services:

<property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>0.0.0.0:8020</value>
</property>
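As a quick sanity check of that conflict (plain JDK; the two values are copied from the core-site.xml and hdfs-site.xml shown above), the ports can be compared programmatically:

```java
import java.net.URI;

public class PortCheck {
    // Extract the port from a host:port string such as "0.0.0.0:8022"
    static int portOf(String hostPort) {
        return Integer.parseInt(hostPort.substring(hostPort.lastIndexOf(':') + 1));
    }

    public static void main(String[] args) {
        // fs.defaultFS from core-site.xml
        int defaultFsPort = URI.create("hdfs://quickstart.cloudera:8020").getPort();
        // dfs.namenode.servicerpc-address from hdfs-site.xml
        int serviceRpcPort = portOf("0.0.0.0:8022");
        System.out.println(defaultFsPort == serviceRpcPort
                ? "ports match"
                : "port mismatch: " + defaultFsPort + " vs " + serviceRpcPort);
        // prints: port mismatch: 8020 vs 8022
    }
}
```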


I only changed
conf.set("fs.defaultFS", "hdfs://%HDFS_SERVER_IP%:8020")
to
conf.set("fs.defaultFS", "webhdfs://%HDFS_SERVER_IP%:50070")
and then I successfully uploaded the file to HDFS on the other server. I referred to this.

Where are you running this code from? It must be on the Windows server. Also post the full stack trace. How do you initialize the FileSystem? Does the datanode have enough space? Add the output of hdfs dfsadmin -report, yarn node -list -all, and the core-site.xml and hdfs-site.xml properties.

@franklinsijo I added the output.

1) Try netstat -anp to see which ports are actually in use (you can filter the results a bit with netstat -anp | grep 80). 2) Try disabling the firewall for a few minutes and repeat the test. 3) Try using the IP instead of the hostname (or at least make sure the hostname resolves correctly, using traceroute or similar).

I modified it, and the namenode could not be initialized. I changed the property name to dfs.namenode.rpc-address but still get the same exception, and org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8020 still appears in the datanode's log.

They are the same as described in the question, and org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8020 appears in the datanode's log.

You can replace all instances of 0.0.0.0 with quickstart.cloudera.
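Following that last suggestion, the corresponding hdfs-site.xml entry would look like this (a sketch; the other 0.0.0.0 addresses would be changed the same way, followed by an HDFS restart):

```xml
<property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>quickstart.cloudera:8022</value>
</property>
```

Binding to the hostname instead of 0.0.0.0 makes the datanode register against an address that external clients can also resolve, rather than a wildcard that only means something inside the container.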