Hadoop: cannot append to an existing file on HDFS

I am running a single-node Hadoop 1.2.1 cluster on a VM.

My hdfs-site.xml looks like this:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
 </description>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
  <description>Does HDFS allow appends to files?
  </description>
</property>
</configuration>
Now, if I try to append to an existing file, I get the following error:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: Append is not supported. Please see the dfs.support.append configuration parameter
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1781)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:725)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.append(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:933)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:922)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:196)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:659)
    at com.vanilla.hadoop.AppendToHdfsFile.main(AppendToHdfsFile.java:29)
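
The call at AppendToHdfsFile.java:29 in the trace is the standard FileSystem.append(); a minimal client that produces this call path looks roughly like the following (a sketch: the file path and payload are hypothetical, not the actual source):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendToHdfsFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // This is the call that fails with "Append is not supported"
        FSDataOutputStream out = fs.append(new Path("/user/test/existing.txt"));
        out.write("appended line\n".getBytes("UTF-8"));
        out.close();
        fs.close();
    }
}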

What is wrong? Am I missing something?

You should try a 2.x.x release, or a 0.2x release, since appending to files on HDFS is supported after Hadoop 0.20.2. See the linked references for more details.

Append is not supported since 1.0.3. In any case, if you really need the old functionality, you can turn append back on by setting the flag "dfs.support.broken.append" to true.
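
If you go that route, the corresponding hdfs-site.xml entry would look like this (a sketch, assuming the branch-1 property name dfs.support.broken.append; note that it deliberately re-enables the old, known-broken append implementation):

<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
  <description>Re-enable the legacy (broken) append implementation.</description>
</property>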


Now let's start by configuring the FileSystem:

public FileSystem configureFileSystem(String coreSitePath, String hdfsSitePath) {
    FileSystem fileSystem = null;
    try {
        Configuration conf = new Configuration();
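        // Request append support programmatically, in addition to hdfs-site.xml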
        conf.setBoolean("dfs.support.append", true);
        Path coreSite = new Path(coreSitePath);
        Path hdfsSite = new Path(hdfsSitePath);
        conf.addResource(coreSite);
        conf.addResource(hdfsSite);
        fileSystem = FileSystem.get(conf);
    } catch (IOException ex) {
        System.out.println("Error occurred while configuring FileSystem");
    }
    return fileSystem;
}
Make sure the dfs.support.append property in hdfs-site.xml is set to true.

You can set it manually by editing the hdfs-site.xml file, or programmatically:

conf.setBoolean("dfs.support.append", true);

Now let's append to a file in HDFS.

public String appendToFile(FileSystem fileSystem, String content, String dest) throws IOException {
    Path destPath = new Path(dest);
    if (!fileSystem.exists(destPath)) {
        System.err.println("File doesn't exist");
        return "Failure";
    }
    Boolean isAppendable = Boolean.valueOf(fileSystem.getConf().get("dfs.support.append"));
    if(isAppendable) {
        FSDataOutputStream fs_append = fileSystem.append(destPath);
        PrintWriter writer = new PrintWriter(fs_append);
        writer.append(content);
        writer.flush();
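        // hflush() pushes the buffered bytes out to the datanodes so new readers can see them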
        fs_append.hflush();
        writer.close();
        fs_append.close();
        return "Success";
    }
    else {
        System.err.println("Please set the dfs.support.append property to true");
        return "Failure";
    }
}
To verify that the data has been written to HDFS correctly, let's write a method that reads from HDFS and returns the content as a String.

public String readFromHdfs(FileSystem fileSystem, String hdfsFilePath) {
    Path hdfsPath = new Path(hdfsFilePath);
    StringBuilder fileContent = new StringBuilder();
    // try-with-resources guarantees the reader (and the underlying HDFS stream) is closed
    try (BufferedReader bfr = new BufferedReader(new InputStreamReader(fileSystem.open(hdfsPath)))) {
        String str;
        while ((str = bfr.readLine()) != null) {
            fileContent.append(str).append("\n");
        }
    }
    catch (IOException ex) {
        System.out.println("----------Could not read from HDFS----------\n");
    }
    return fileContent.toString();
}
After that, we have successfully written to and read from the file in HDFS. It is time to close the FileSystem.

public void closeFileSystem(FileSystem fileSystem){
    try {
        fileSystem.close();
    }
    catch (IOException ex){
        System.out.println("----------Could not close the FileSystem----------");
    }
}
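
Finally, a hedged driver that ties the pieces together. The class name HdfsAppendExample, the config paths, and the HDFS file path are all assumptions chosen for illustration; the method snippets above also need the imports shown here.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    // configureFileSystem, appendToFile, readFromHdfs and closeFileSystem
    // from above go here.

    public static void main(String[] args) throws IOException {
        HdfsAppendExample example = new HdfsAppendExample();
        // Hypothetical paths; adjust them to your installation.
        FileSystem fs = example.configureFileSystem(
                "/usr/local/hadoop/conf/core-site.xml",
                "/usr/local/hadoop/conf/hdfs-site.xml");
        System.out.println(example.appendToFile(fs, "appended line\n", "/user/test/existing.txt"));
        System.out.println(example.readFromHdfs(fs, "/user/test/existing.txt"));
        example.closeFileSystem(fs);
    }
}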
Before executing the code, Hadoop should be running on your system.

Just go to HADOOP_HOME and run the following command:

./sbin/start-all.sh


For the complete reference, see the linked source.
