NoClassDefFoundError in a Java WordCount program

I am running the Hadoop WordCount program, but it fails with a "NoClassDefFoundError".

Command used to run it:

 hadoop -jar /home/user/Pradeep/sample.jar hdp_java.WordCount /user/hduser/ana.txt /user/hduser/prout
  Exception in thread "main" java.lang.NoClassDefFoundError: WordCount
   Caused by: java.lang.ClassNotFoundException: WordCount
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    Could not find the main class: WordCount. Program will exit.
I created this program in Eclipse and then exported it as a jar file.

Eclipse code:

package hdp_java;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }

}

Can anyone tell me where I went wrong?

Add this line to your code:

job.setJarByClass(WordCount.class);

If that still doesn't work, export the job as a jar and add that jar to the project itself as an external jar, then see whether it works.
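For reference, here is a minimal sketch of where that line lands in the question's driver; everything else in the main method stays exactly as posted and is abbreviated here:

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");

    // Point Hadoop at the jar containing the job classes by naming
    // a class that lives inside that jar.
    job.setJarByClass(WordCount.class);

    // ... remaining job setup exactly as in the question ...

    job.waitForCompletion(true);
}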


You need to tell the Hadoop job which jar to use, like so:

job.setJarByClass(WordCount.class);

Also be sure to add any dependencies to both the HADOOP_CLASSPATH and -libjars when submitting a job, as in the following examples.

Use the following to add all the jar dependencies from (for example) the current and lib directories:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:`echo *.jar`:`echo lib/*.jar | sed 's/ /:/g'`

Bear in mind that when starting a job through hadoop jar you will also need to pass it the jars of any dependencies through -libjars. I like to use:

hadoop jar <jar> <class> -libjars `echo ./lib/*.jar | sed 's/ /,/g'` [args...]

NOTE: The sed commands require different delimiter characters; HADOOP_CLASSPATH entries are colon-separated, while -libjars entries must be comma-separated.
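One related point: -libjars is interpreted by Hadoop's GenericOptionsParser, which only runs when the driver is launched through ToolRunner. Below is a minimal sketch, not from the original answers, of the question's driver restructured as a Tool so the -libjars flag above actually takes effect. WordCountDriver is a hypothetical class name; Map and Reduce are the question's nested classes:

package hdp_java;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects any -D, -libjars, -files options
        // that GenericOptionsParser stripped from the command line.
        Job job = new Job(getConf(), "wordcount");
        job.setJarByClass(WordCount.class);

        job.setMapperClass(WordCount.Map.class);
        job.setReducerClass(WordCount.Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner invokes GenericOptionsParser before run() is called.
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

It would then be launched the same way, e.g. hadoop jar sample.jar hdp_java.WordCountDriver -libjars ... <input> <output>.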


Comments:

Open your jar with something like 7-Zip and check that your WordCount.class is where it is supposed to be.

Hi Tariq, after adding the line above it gives me a different error: Warning: $HADOOP_HOME is deprecated. Failed to load Main-Class manifest attribute from /home/user/hpcc/lz_data/Pradeep/sample.jar

Hi Quetzalcoatl, I added setJarByClass but it still gives me an error, a different one this time, even with all the dependent jars added in Eclipse: Failed to load Main-Class manifest attribute from /home/user/hpcc/lz_data/Pradeep/sample.jar

How did you make the jar?

I am using the Export -> JAR file option in Eclipse (Indigo).

Thanks, it worked. I used the command hadoop jar /home/user/Pradeep/sample.jar hdp_java.WordCount /user/hduser/ana.txt /user/hduser/prout (without the hyphen, "-") instead of hadoop -jar.