
Java: unable to instantiate Hadoop HDFS DistributedFileSystem


I have set up a Hadoop HDFS cluster and, since I am new to Hadoop, I have been trying to follow a simple example to read from and write to HDFS from a Java driver program written on my local machine. The example I am trying to test is the following:

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileSystemOperations {

    public static void main(String[] args) throws IOException {

        // Hard-coded arguments for local testing; these overwrite whatever
        // was passed on the command line, so the length checks below never fail.
        args = new String[3];
        args[0] = "add";
        args[1] = "./files/jaildata.csv";
        args[2] = "hdfs://<Namenode-Host>:<Port>/dir1/dir2/";
        if (args.length < 1) {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir [<local_path> <hdfs_path>]");
            System.exit(1);
        }

        FileSystemOperations client = new FileSystemOperations();
        String hdfsPath = "hdfs://<Namenode-Host>:<Port>";

        // Load the cluster settings from the local Hadoop installation.
        Configuration conf = new Configuration();
        conf.addResource(new Path("file:///user/local/hadoop/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("file:///user/local/hadoop/etc/hadoop/hdfs-site.xml"));

        if (args[0].equals("add")) {
            if (args.length < 3) {
                System.out.println("Usage: hdfsclient add <local_path> <hdfs_path>");
                System.exit(1);
            }
            client.addFile(args[1], args[2], conf);

        } else {
            System.out.println("Usage: hdfsclient add/read/delete/mkdir [<local_path> <hdfs_path>]");
            System.exit(1);
        }
        System.out.println("Done!");
    }
    public void addFile(String source, String dest, Configuration conf) throws IOException {

        // FileSystem.get() scans all FileSystem providers on the classpath
        // via the JDK ServiceLoader; this is the call where the exception
        // below is thrown.
        FileSystem fileSystem = FileSystem.get(conf);

        // Get the filename out of the file path.
        String filename = source.substring(source.lastIndexOf('/') + 1, source.length());

        // Create the destination path including the filename.
        if (dest.charAt(dest.length() - 1) != '/') {
            dest = dest + "/" + filename;
        } else {
            dest = dest + filename;
        }
        Path path = new Path(dest);
        if (fileSystem.exists(path)) {
            System.out.println("File " + dest + " already exists");
            return;
        }

        // Create a new file and write data to it.
        FSDataOutputStream out = fileSystem.create(path);
        InputStream in = new BufferedInputStream(new FileInputStream(new File(source)));

        byte[] b = new byte[1024];
        int numBytes = 0;
        while ((numBytes = in.read(b)) > 0) {
            out.write(b, 0, numBytes);
        }

        // Close all the file descriptors.
        in.close();
        out.close();
        fileSystem.close();
    }
}
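As an aside, hadoop-common ships org.apache.hadoop.io.IOUtils, which can replace the manual copy loop above. A minimal sketch, assuming the configuration files are on the classpath and args[0]/args[1] are the local source and the HDFS destination (the class name here is hypothetical):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CopyWithIOUtils {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fileSystem = FileSystem.get(conf);
        InputStream in = new BufferedInputStream(new FileInputStream(args[0]));
        FSDataOutputStream out = fileSystem.create(new Path(args[1]));
        // Copies in 4096-byte chunks; the final 'true' closes both streams.
        IOUtils.copyBytes(in, out, 4096, true);
        fileSystem.close();
    }
}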
The project is a Maven project that adds
hadoop-common-2.6.5
hadoop-hdfs-2.9.0
hadoop-hdfs-client-2.9.0
as dependencies, and it is configured to build a jar containing all dependencies.
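The pom.xml itself was not posted; a sketch of what its dependency section presumably looks like, reconstructed from the versions listed above:

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.6.5</version>   <!-- mismatched version; see the answer below -->
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs-client</artifactId>
        <version>2.9.0</version>
    </dependency>
</dependencies>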

My problem is that no matter which demo example I try, I always hit the following exception when the FileSystem is created at FileSystem fileSystem = FileSystem.get(conf); (the full stack trace is reproduced at the end of this post):

I cannot figure out how to get past this. I have already tried every one of the solutions I found online, so I would greatly appreciate any advice on this issue.


Thanks.
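A quick way to see which FileSystem providers the ServiceLoader can actually instantiate from the classpath is a sketch like the following (a hypothetical diagnostic class, not part of the original post); it reproduces the same provider scan that FileSystem.get() triggers, so the provider whose classes are missing fails with the same ServiceConfigurationError:

import java.util.ServiceLoader;

import org.apache.hadoop.fs.FileSystem;

public class ListFileSystemProviders {
    public static void main(String[] args) {
        // Iterates every provider registered under
        // META-INF/services/org.apache.hadoop.fs.FileSystem; a provider
        // with missing classes throws here, identifying the broken jar.
        ServiceLoader<FileSystem> loader = ServiceLoader.load(FileSystem.class);
        for (FileSystem fs : loader) {
            System.out.println(fs.getClass().getName());
        }
    }
}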

The org.apache.hadoop.fs.FSDataOutputStreamBuilder class is not in hadoop-common-2.6.5; it only appears in hadoop-common-2.9.0.

As I noticed, you are already using version 2.9.0 for the hdfs client. Keep the other Hadoop artifacts aligned with 2.9.0 to avoid similar problems.

Reference version 2.9.0 of hadoop-common in your build to resolve this issue.
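In pom.xml terms, that means bumping the hadoop-common entry so that all three Hadoop artifacts resolve to the same release (a sketch):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.9.0</version>
</dependency>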

Comments:

At runtime this class seems to be unavailable: org/apache/hadoop/fs/FSDataOutputStreamBuilder. Could you post your Maven or Gradle build file?

I guess you mean my pom.xml?

Yes, exactly.

Pfff, my mistake... Thanks to user987339 and @gil.fernandes for pointing us in the right direction! The problem was the hadoop-common version: using hadoop-common-2.9.0 solved it. Thanks again. Done! ;)

The exception referenced in the question:
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2565)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2576)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataOutputStreamBuilder