Hadoop MapReduce on Eclipse: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/myname183880112/.staging/job_local183880112_0001


I am trying to run a MapReduce program for line indexing from Eclipse, and it fails with the error below. My code is included after the stack trace:

2014-04-04 16:02:31.633 java[44631:1903] Unable to load realm info from SCDynamicStore
14/04/04 16:02:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/04/04 16:02:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/04/04 16:02:32 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/04/04 16:02:32 WARN snappy.LoadSnappy: Snappy native library not loaded
14/04/04 16:02:32 INFO mapred.FileInputFormat: Total input paths to process : 1
14/04/04 16:02:32 INFO mapred.JobClient: Cleaning up the staging area file:/app/hadoop/tmp/mapred/staging/myname183880112/.staging/job_local183880112_0001
java.lang.NullPointerException
    at org.apache.hadoop.conf.Configuration.getLocalPath(Configuration.java:950)
    at org.apache.hadoop.mapred.JobConf.getLocalPath(JobConf.java:476)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:121)
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:592)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:1013)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
    at LineIndex.main(LineIndex.java:92)
I cannot understand or fix the NullPointerException here.
Could someone help me?

Could you add the mapred-site.xml file to the Configuration object and try again? You will likely also need to specify the property
mapred.local.dir
in that XML file.
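
As a sketch of that suggestion: the NullPointerException comes from `Configuration.getLocalPath()` (top of the stack trace), which is consistent with `mapred.local.dir` being unset when `LocalJobRunner` starts. The directory value below is an assumption based on the staging path in the log; any writable local directory should work:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Scratch directory for local map/reduce intermediates.
       Value is illustrative; use any writable local path. -->
  <property>
    <name>mapred.local.dir</name>
    <value>/app/hadoop/tmp/mapred/local</value>
  </property>
</configuration>
```

The file would then be added in the driver alongside the other resources, e.g. `conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/mapred-site.xml"));`.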

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class LineIndex {

  public static class LineIndexMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {

    private final static Text word = new Text();
    private final static Text location = new Text();

    public void map(LongWritable key, Text val,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {

      FileSplit fileSplit = (FileSplit)reporter.getInputSplit();
      String fileName = fileSplit.getPath().getName();
      location.set(fileName);

      String line = val.toString();
      StringTokenizer itr = new StringTokenizer(line.toLowerCase());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, location);
      }
    }
  }



  public static class LineIndexReducer extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {

    public void reduce(Text key, Iterator<Text> values,
        OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {

      boolean first = true;
      StringBuilder toReturn = new StringBuilder();
      while (values.hasNext()){
        if (!first)
          toReturn.append(", ");
        first=false;
        toReturn.append(values.next().toString());
      }

      output.collect(key, new Text(toReturn.toString()));
    }
  }


  /**
   * The actual main() method for our program; this is the
   * "driver" for the MapReduce job.
   */
  public static void main(String[] args) {
    JobClient client = new JobClient();
    JobConf conf = new JobConf(LineIndex.class);

    conf.setJobName("LineIndexer");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    FileInputFormat.addInputPath(conf, new Path("input"));
    FileOutputFormat.setOutputPath(conf, new Path("output"));


    conf.setMapperClass(LineIndexMapper.class);
    conf.setReducerClass(LineIndexReducer.class);
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

    client.setConf(conf);

    try {
      JobClient.runJob(conf);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}