Hadoop 2.6.1 java.lang.NullPointerException at org.apache.hadoop.hdfs.DFSOutputStream.isLazyPersist(DFSOutputStream.java:1709)

When I try to append to a file in HDFS, I get the exception below. Please advise.

file.append(new Path(uri));
Exception:

java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSOutputStream.isLazyPersist(DFSOutputStream.java:1709)
    at org.apache.hadoop.hdfs.DFSOutputStream.getChecksum4Compute(DFSOutputStream.java:1550)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1560)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1667)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForAppend(DFSOutputStream.java:1694)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1824)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1885)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1855)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:340)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:336)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:348)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:318)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1164)
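For reference, the surrounding code looks roughly like this (a minimal sketch of a standard HDFS append; the URI, configuration values, and class name are my own placeholders, not from the original question, and the target file must already exist on a cluster with append enabled):

```java
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URI; point this at an existing file on your cluster.
        String uri = "hdfs://namenode:8020/tmp/example.log";

        Configuration conf = new Configuration();
        // Append must be enabled on the cluster for append() to succeed.
        conf.setBoolean("dfs.support.append", true);

        FileSystem file = FileSystem.get(new URI(uri), conf);
        try (OutputStream out = file.append(new Path(uri))) {
            out.write("appended line\n".getBytes("UTF-8"));
        }
    }
}
```

This sketch requires a running HDFS cluster and a matching hadoop-client jar on the classpath, so it cannot be run standalone.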

Note that I hit this issue only with Hadoop 2.6.1; it works fine with 2.7.1.

The problem was the version of the Apache Hadoop client jar. My pom.xml declared the latest version, 2.7.1, which is not compatible with a Hadoop 2.6.1 installation; the hadoop-client jar is not backward compatible. The final change to my pom.xml looks like this:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.1</version>
</dependency>
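To confirm which hadoop-client version Maven actually resolves (a transitive dependency can pull in a different one than you declared), you can inspect the dependency tree. This command is a standard Maven invocation, not part of the original answer:

```shell
# List every org.apache.hadoop artifact (and its resolved version) on the classpath
mvn dependency:tree -Dincludes=org.apache.hadoop
```

The version shown for hadoop-client should match the Hadoop version installed on the cluster.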


This is caused by a version conflict. Try changing the dependency version in your pom.xml file.

For some reason, HdfsFileStatus is null. Please check it.