Hadoop: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException

Tags: java, hadoop, filesystems, hdfs, hadoop-streaming

I am new to Hadoop and only started using it today. I want to write a file to my HDFS Hadoop server, which is running Hadoop 1.2.1. When I issue the jps command in the CLI, I can see all the nodes running:

31895 Jps
29419 SecondaryNameNode
29745 TaskTracker
29257 DataNode
Here is my sample client code that writes a file to the HDFS system:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public static void main(String[] args) {
    try {
        // 1. Get an instance of Configuration and load the cluster config files
        Configuration configuration = new Configuration();
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
        configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
        // 2. Create an InputStream to read the data from the local file
        InputStream inputStream = new BufferedInputStream(
                new FileInputStream("/home/local/PAYODA/hariprasanth.l/Desktop/ProjectionTest"));
        // 3. Get the HDFS instance
        FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
        // 4. Open an OutputStream to write the data; this is obtained from the FileSystem
        OutputStream outputStream = hdfs.create(
                new Path("hdfs://localhost:54310/user/hadoop/Hadoop_File.txt"),
                new Progressable() {
                    @Override
                    public void progress() {
                        System.out.println("....");
                    }
                });
        try {
            IOUtils.copyBytes(inputStream, outputStream, 4096, false);
        } finally {
            IOUtils.closeStream(inputStream);
            IOUtils.closeStream(outputStream);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
When I run the code, I get the following exception:

java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
at org.apache.hadoop.ipc.Client.call(Client.java:1031)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at com.sun.proxy.$Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:163)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:283)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:247)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:109)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1792)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:76)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1826)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1808)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:265)
at com.test.hadoop.writefiles.FileWriter.main(FileWriter.java:27)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)
From what I found by searching on Google, this points to a version mismatch.

The server's Hadoop version is 1.2.1, and the client jars I am using are:

hadoop-common-0.22.0.jar
hadoop-hdfs-0.22.0.jar
Please tell me what the problem is as soon as possible.

If possible, also recommend a place where I can find the Hadoop client JARs, and name the JARs... please.

Regards,
Hari

Your NameNode is not running; that is the problem. Did you format the NameNode before starting it?

hadoop namenode -format
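
A minimal connectivity probe, sketched here purely as an illustration (the class name NameNodeProbe is mine, not from the original answer), can confirm whether the NameNode at hdfs://localhost:54310 is reachable at all before any write is attempted:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NameNodeProbe {
    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            // Same NameNode address as in the question.
            FileSystem fs = FileSystem.get(new URI("hdfs://localhost:54310"), conf);
            // A trivial RPC; it only succeeds if the NameNode is up and the
            // client and server RPC versions are compatible.
            System.out.println("Connected to " + fs.getUri()
                    + ", root exists: " + fs.exists(new Path("/")));
            fs.close();
        } catch (Exception e) {
            // A ConnectException or EOFException here points at a NameNode
            // that is down, or at a client/server version mismatch.
            e.printStackTrace();
        }
    }
}

If this probe fails with the same EOFException, the problem is the NameNode or the client jars, not the write code itself.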

This happened because the same classes are present in different jars (i.e. hadoop-commons and hadoop-core contain the same classes). I was actually confused about which jars to use.


In the end, I used the Apache hadoop-core jar. It works like a charm.
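
One quick way to confirm which Hadoop version the client classpath actually provides is to print it from org.apache.hadoop.util.VersionInfo. This is my own sketch (the class name is illustrative), not part of the original answer:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
    public static void main(String[] args) {
        // Should report 1.2.1 if the client jars match the 1.2.1 server.
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
        System.out.println("Built from revision: " + VersionInfo.getRevision());
    }
}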

2988 org.eclipse.equinox.launcher_1.2.0.v20110502.jar
3719 TaskTracker
3271 NameNode
3511 SecondaryNameNode
8472 Jps
3606 JobTracker
This is my jps command output; the NameNode is fine as well... Could you tell me the steps to configure the Eclipse part?

8486 NodeManager
7823 DataNode
8092 SecondaryNameNode
7613 NameNode
10831 Jps
8265 ResourceManager
When you run jps, you should see all of the above. I think you are currently missing the DataNode.

Once you have deleted all the files under the Hadoop filesystem path (hadoop.tmp.dir, as set in /hadoop_path/etc/hadoop/core-site.xml), reformat it. That might help:

hadoop namenode -format

It looks like the Hadoop services have not started correctly; the NameNode service is not running. That may not be the issue right now, but it will surface once you fix the jar dependency problem. Please post your core-site.xml, hdfs-site.xml and mapred-site.xml files.
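
To double-check which configuration the client has actually picked up (for example before posting core-site.xml and hdfs-site.xml), a small sketch like the following can help. It is my own illustration: it assumes the Hadoop 1.x property names fs.default.name and hadoop.tmp.dir and reuses the resource paths from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class PrintClientConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The same resources the client code in the question adds explicitly.
        conf.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
        conf.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
        // Hadoop 1.x keys; later releases use fs.defaultFS instead.
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        System.out.println("hadoop.tmp.dir  = " + conf.get("hadoop.tmp.dir"));
    }
}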