WordCount: Exception in thread "main" java.lang.ClassNotFoundException: WordCount

cnzyhzz 2015-05-25 09:14:36
Versions: hadoop-1.2.1, jdk-1.8.0_45, ubuntu-10.04
I wrote three files: TokenizerMapper.java, IntSumReducer.java, and WordCount.java. Steps:
1. “cd ~/wordcount_01”
2. “javac -classpath /home/brian/usr/hadoop/hadoop-1.2.1/hadoop-core-1.2.1.jar:/home/brian/usr/hadoop/hadoop-1.2.1/lib/commons-cli-1.2.jar -d ./classes/ ./src/*.java”. Here WordCount.java would not compile,
so I changed the command to:
“javac -classpath /home/brian/usr/hadoop/hadoop-1.2.1/hadoop-core-1.2.1.jar:/home/brian/usr/hadoop/hadoop-1.2.1/lib/commons-cli-1.2.jar:OtherJar -d ./classes/ ./src/*.java” (WordCount.java then compiled)
3. “jar -cvf wordcount.jar -C ./classes/ .”
4. “cd ~/usr/hadoop/hadoop-1.2.1”, then start hadoop
5. use README.txt as test input: “./bin/hadoop fs -put README.txt readme.txt”
6. “./bin/hadoop fs -rmr output”
7. “./bin/hadoop jar /home/brian/wordcount_01/wordcount.jar com.brianchen.hadoop.WordCount readme.txt output”
This step fails! It prints:
Exception in thread "main" java.lang.ClassNotFoundException: WordCount
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:249)
at org.apache.hadoop.util.RunJar.main(RunJar.java:205)
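(A check that seems worth adding at this point, as a minimal sketch assuming the paths from steps 1-3: RunJar loads the class by the exact name given on the command line, and since the code declares package com.brianchen.hadoop, the jar must contain the full package path. Listing the jar shows whether it does. Note also that the exception reports the bare name "WordCount" rather than "com.brianchen.hadoop.WordCount", and the message normally echoes the name the loader was actually asked to resolve.)
cd ~/wordcount_01
jar -tf wordcount.jar
# expected entries, assuming all three files declare package com.brianchen.hadoop:
#   com/brianchen/hadoop/WordCount.class
#   com/brianchen/hadoop/TokenizerMapper.class
#   com/brianchen/hadoop/IntSumReducer.class
# if WordCount.class sits at the jar root instead, the fully
# qualified name com.brianchen.hadoop.WordCount cannot be found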
I also wrote everything as a single WordCount.java file; the run goes exactly as with three files. Input: “./bin/hadoop jar /home/brian/wordcount_01/wordcount.jar com.brianchen.hadoop.WordCount readme.txt output”
The error shown:
Exception in thread "main" java.lang.ClassNotFoundException: WordCount
at java.net.URLClassLoader$1.run(URLClassLoader.java:381)
at java.security.AccessController.doPrivileged(Native Method)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
(roughly this; similar to the trace above)
(Code for the single-file WordCount.java version):
package com.brianchen.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1).
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as combiner): sums the counts for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
(The three-file version of the code is essentially the same.)
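(For this single-file version, a small sanity-check sketch, assuming the same paths as in step 2: javac -d creates the package directories automatically, so the compiled classes should land under classes/com/brianchen/hadoop/, with the nested classes compiled to WordCount$*.class files.)
cd ~/wordcount_01
javac -classpath /home/brian/usr/hadoop/hadoop-1.2.1/hadoop-core-1.2.1.jar:/home/brian/usr/hadoop/hadoop-1.2.1/lib/commons-cli-1.2.jar -d ./classes/ ./src/WordCount.java
ls ./classes/com/brianchen/hadoop/
# expected:
#   WordCount.class
#   WordCount$TokenizerMapper.class
#   WordCount$IntSumReducer.class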
Fixes I have tried (all failed!): 1. recompiling; 2. removing the package declaration and recompiling; 3. putting the generated wordcount.jar in the hadoop directory and running it from there.
The wordcount example that ships with hadoop runs fine.
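(For comparison, a sketch of the two invocations side by side; hadoop-examples-1.2.1.jar is the example jar the 1.2.1 release ships with, so adjust the name if yours differs. Since the bundled driver succeeds on the same input, the cluster itself is fine and the difference must lie in how my jar is packaged or invoked.)
cd ~/usr/hadoop/hadoop-1.2.1
# bundled example: works
./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount readme.txt output
# my jar: fails with ClassNotFoundException
# (clear the output dir between runs: ./bin/hadoop fs -rmr output)
./bin/hadoop jar /home/brian/wordcount_01/wordcount.jar com.brianchen.hadoop.WordCount readme.txt output
# note: with the package declaration removed (attempt 2), the class
# name on the command line must be plain "WordCount", and
# WordCount.class must sit at the root of the jar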
Any pointers would be much appreciated!!!! Thanks in advance.
1 reply:
cnzyhzz 2015-05-25
Posted this in the wrong place.
