Java: error when running a jar file in Hadoop
When running a jar file in Hadoop, I get a NullPointerException and I can't figure out what is going wrong. Here is my driver class:
package mapreduce;

import java.io.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class StockDriver extends Configured implements Tool
{
    public int run(String[] args) throws Exception
    {
        //creating a JobConf object and assigning a job name for identification purposes
        JobConf conf = new JobConf(getConf(), StockDriver.class);
        conf.setJobName("StockDriver");

        //setting the configuration object with the data type of the output key and value
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        //providing the mapper and reducer class names
        conf.setMapperClass(StockMapper.class);
        conf.setReducerClass(StockReducer.class);

        File in = new File(args[0]);
        int number_of_companies = in.listFiles().length;
        for (int iter = 1; iter <= number_of_companies; iter++)
        {
            Path inp = new Path(args[0] + "/i" + Integer.toString(iter) + ".txt");
            Path out = new Path(args[1] + Integer.toString(iter));
            //the HDFS input and output directory to be fetched from the command line
            FileInputFormat.addInputPath(conf, inp);
            FileOutputFormat.setOutputPath(conf, out);
            JobClient.runJob(conf);
        }
        return 0;
    }

    public static void main(String[] args) throws Exception
    {
        int res = ToolRunner.run(new Configuration(), new StockDriver(), args);
        System.exit(res);
    }
}
When I run the jar with java -jar myfile.jar args ... it works fine. But when I try to run it on the Hadoop cluster with hadoop jar myfile.jar [MainClass] args ... it fails with the following error:

Exception in thread "main" java.lang.NullPointerException
    at mapreduce.StockDriver.run(StockDriver.java:29)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at mapreduce.StockDriver.main(StockDriver.java:44)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

To clarify, line 29 is:

int number_of_companies = in.listFiles().length;

The problem is caused by using the java.io.File API to read an HDFS path. If you create a File object for a path that does not exist, its listFiles() method returns null. Since the input directory is in HDFS (I assume), it does not exist on the local filesystem, and that is where the NPE comes from:

in.listFiles().length

Use the following to get the number of files in the HDFS directory instead:

FileSystem fs = FileSystem.get(new Configuration());
int number_of_companies = fs.listStatus(new Path(args[0])).length;

Are you running a separate MR job for each file in args[0]? @blackSmith No, I am running the same MapReduce job in a loop for each file.
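To illustrate the point about listFiles(), here is a minimal standalone sketch (the class name and the path are made up for the example) showing why the length access blows up when the directory only exists in HDFS:

import java.io.File;

public class ListFilesNullDemo {
    public static void main(String[] args) {
        // java.io.File only sees the local filesystem; for a path that does not
        // exist there, listFiles() returns null instead of throwing.
        File in = new File("/this/path/does/not/exist");
        File[] contents = in.listFiles();
        System.out.println(contents); // prints "null"
        // Dereferencing it, e.g. contents.length, is what produces the
        // NullPointerException seen at StockDriver.java:29.
        if (contents == null) {
            System.out.println("directory not found on the local filesystem");
        }
    }
}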
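Putting the answer's fix back into the driver, run() might look roughly like the sketch below. This is only an illustration, not the original poster's final code: it assumes the extra import org.apache.hadoop.fs.FileSystem, keeps the one-job-per-file loop from the question, and uses setInputPaths instead of addInputPath so input paths are replaced rather than accumulated across iterations.

// additional import needed: import org.apache.hadoop.fs.FileSystem;
public int run(String[] args) throws Exception
{
    JobConf conf = new JobConf(getConf(), StockDriver.class);
    conf.setJobName("StockDriver");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(StockMapper.class);
    conf.setReducerClass(StockReducer.class);

    // count the input files through the HDFS FileSystem API instead of java.io.File
    FileSystem fs = FileSystem.get(getConf());
    int number_of_companies = fs.listStatus(new Path(args[0])).length;

    for (int iter = 1; iter <= number_of_companies; iter++)
    {
        Path inp = new Path(args[0] + "/i" + Integer.toString(iter) + ".txt");
        Path out = new Path(args[1] + Integer.toString(iter));
        // setInputPaths replaces the input list, so each job reads only its own file
        FileInputFormat.setInputPaths(conf, inp);
        FileOutputFormat.setOutputPath(conf, out);
        JobClient.runJob(conf);
    }
    return 0;
}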