
Error in addInputPath in a Hadoop MapReduce driver


I am getting an error in the addInputPath method of my MapReduce driver. The error is:

"The method addInputPath(Job, Path) in the type FileInputFormat is not applicable for the arguments (JobConf, Path)"

Here is my driver code:

package org.myorg;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool{
    public int run(String[] args) throws Exception
    {
          //creating a JobConf object and assigning a job name for identification purposes
          JobConf conf = new JobConf(getConf(), org.myorg.WordCount.class);
          conf.setJobName("WordCount");

          //Setting configuration object with the Data Type of output Key and Value
          conf.setOutputKeyClass(Text.class);
          conf.setOutputValueClass(IntWritable.class);

          //Providing the mapper and reducer class names
          conf.setMapperClass(WordCountMapper.class);
          conf.setReducerClass(WordCountReducer.class);

          //the hdfs input and output directory to be fetched from the command line
          FileInputFormat.addInputPath(conf, new Path(args[0]));
          FileOutputFormat.setOutputPath(conf, new Path(args[1]));

          JobClient.runJob(conf);
          return 0;
    }

    public static void main(String[] args) throws Exception
    {
          int res = ToolRunner.run(new Configuration(), new WordCount(),args);
          System.exit(res);
    }
}
I have imported the correct org.apache.hadoop.mapred.FileOutputFormat.

My WordCountMapper properly implements Mapper.

FileOutputFormat.setOutputPath works fine.

Why does addInputPath throw an error?

The problem is that you are mixing the old API (.mapred.) and the new API (.mapreduce.). The two APIs are not compatible.

I suggest you use objects from the new API exclusively and nothing from the old one. That is, don't use JobConf or JobClient; use Job and Configuration instead. Also make sure your Mapper, Reducer, and so on come from imports containing .mapreduce. rather than .mapred.

Thanks. Everyone gets confused about this. Most of the time I simply forget the old API exists.
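Following the answer's advice, here is a sketch of the driver rewritten entirely against the new API. This is an illustrative rewrite, not the asker's final code: it assumes WordCountMapper and WordCountReducer have also been ported to extend the new org.apache.hadoop.mapreduce.Mapper and Reducer base classes, it uses Job.getInstance (the Hadoop 2.x idiom; older releases use new Job(conf, name)), and it needs the Hadoop libraries on the classpath to compile.

```java
package org.myorg;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        // Job replaces JobConf/JobClient in the new API
        Job job = Job.getInstance(getConf(), "WordCount");
        job.setJarByClass(WordCount.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // These must be new-API classes (extending
        // org.apache.hadoop.mapreduce.Mapper / Reducer)
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // Both format classes now come from
        // org.apache.hadoop.mapreduce.lib.*, and both take a Job
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
    }
}
```

The key change is that the input and output paths are registered on a Job, so FileInputFormat.addInputPath(job, ...) now matches the (Job, Path) signature the compiler was asking for.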