Hadoop WordCount fails at the last step, please help

幽幽浮浮 2013-12-17 11:22:54
I set up single-node mode on Ubuntu.
When I run WordCount, it fails with a NoClassDefFoundError. How do I fix this? I'm new to this. Details below:


1. This is the WordCount program I wrote in Eclipse:



2. Then I ran the following command to generate the jar file:
xxxxxx@ubuntu:~/workspace/WordCount/bin/WordCount$ jar cvf WordCount.jar *.class

added manifest
adding: WordCount.class(in = 1700) (out= 827)(deflated 51%)
adding: WordCount$Map.class(in = 2435) (out= 946)(deflated 61%)


Checking on the system shows the jar file has indeed been generated:




3. Then I ran the program:

hadoop@ubuntu:/home/long/workspace/WordCount/bin$ /usr/local/hadoop/bin/hadoop jar WordCount.jar WordCount /input /output

The following error appeared. How do I fix it? I can't figure it out...
Exception in thread "main" java.lang.NoClassDefFoundError: WordCount (wrong name: WordCount/WordCount)

PS: The single-node Hadoop environment itself is fine, and the /input and /output folders are all in place.

This looks like a problem with how the Java program is being run, right? Please advise!
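A note for anyone hitting the same message: "wrong name: WordCount/WordCount" means the compiled class declares a package (apparently "package WordCount;", which also matches the bin/WordCount path in step 2), so its binary name is WordCount.WordCount, while the jar in step 2 was built inside the package folder and flattened the .class files into the jar root. Assuming that package name, a sketch of how the jar would be built and run instead:

cd ~/workspace/WordCount/bin
# build from the directory above the package folder, so the jar entry is
# WordCount/WordCount.class rather than WordCount.class at the root
jar cvf WordCount.jar WordCount/*.class
# run with the fully-qualified (binary) class name
/usr/local/hadoop/bin/hadoop jar WordCount.jar WordCount.WordCount /input /output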
13 replies
zhao88148201 2013-12-17
Try it with the package name included.
吸尘器 2013-12-17
Does it run without problems inside your IDE?
幽幽浮浮 2013-12-17


This forum's code formatting is really awful.
幽幽浮浮 2013-12-17
Yes, it has one. Here is the code:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
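The pasted source shows no package declaration, yet the "wrong name: WordCount/WordCount" error in the first post implies the compiled .class file does declare one. A quick way to check how the class actually got packaged is to list the jar contents (jar tf is a standard JDK command):

jar tf WordCount.jar
# A class declared as "package WordCount;" must appear as WordCount/WordCount.class
# inside the jar and be run as WordCount.WordCount; a bare WordCount.class entry at
# the jar root only works for a class with no package declaration.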
吸尘器 2013-12-17
Does your class have a main method? Without one it can't be run.
幽幽浮浮 2013-12-17
I exported the jar from Eclipse instead, and that error no longer appears. But now a different error shows up:

hadoop@ubuntu:/home/long/workspace/myHadoop/bin/hadooptest$ /usr/local/hadoop/bin/hadoop jar wc.jar hadooptest.WordCount /input /output
13/12/16 11:41:17 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/16 11:41:17 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
13/12/16 11:41:17 INFO input.FileInputFormat: Total input paths to process : 2
13/12/16 11:41:17 INFO mapreduce.JobSubmitter: number of splits:2
13/12/16 11:41:17 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/16 11:41:17 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/16 11:41:17 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/16 11:41:17 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/16 11:41:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1385465186213_0004
13/12/16 11:41:18 INFO impl.YarnClientImpl: Submitted application application_1385465186213_0004 to ResourceManager at /0.0.0.0:8032
13/12/16 11:41:18 INFO mapreduce.Job: The url to track the job: http://ubuntu:8088/proxy/application_1385465186213_0004/
13/12/16 11:41:18 INFO mapreduce.Job: Running job: job_1385465186213_0004
13/12/16 11:41:24 INFO mapreduce.Job: Job job_1385465186213_0004 running in uber mode : false
13/12/16 11:41:24 INFO mapreduce.Job: map 0% reduce 0%
13/12/16 11:41:25 INFO mapreduce.Job: Task Id : attempt_1385465186213_0004_m_000001_0, Status : FAILED
Container launch failed for container_1385465186213_0004_01_000003 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:152)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)

Sorry for pasting so much output. What is this error? Is it because I didn't set up Hadoop properly?
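For reference: on Hadoop 2.x, "InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist" usually means the NodeManager has not been configured with the MapReduce shuffle auxiliary service. A minimal sketch of the yarn-site.xml entries that are normally required (standard Hadoop 2.x property names; restart YARN after adding them):

<!-- yarn-site.xml on the NodeManager: register the MapReduce shuffle service -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>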
幽幽浮浮 2013-12-17


Same as above.
幽幽浮浮 2013-12-17
I already added it, and it still doesn't work:

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setJarByClass(WordCount.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
}

That's my code, and I still get exactly the same error. Moderator, please save me!
撸大湿 2013-12-17
Add the following code in main:

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setJarByClass(WordCount.class); // try adding this line
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);     
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
}
幽幽浮浮 2013-12-17
Quoting reply #8 by tanqingru:
Try changing it to: hadoop jar ./WordCount.jar WordCount /input /output

Same error message.
bamuta 2013-12-17
Try changing it to: hadoop jar ./WordCount.jar WordCount /input /output
幽幽浮浮 2013-12-17
Quoting reply #6 by longmarchufo:
I don't have the Eclipse plugin installed, so I only run from the command line. With the package name, would it be: hadoop jar WordCount.jar WordCount.WordCount /input /output ?

Running that gives a new error, a ClassNotFoundException:

Exception in thread "main" java.lang.ClassNotFoundException: WordCount.WordCOunt
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:205)
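One detail worth noticing in that stack trace: it reports WordCount.WordCOunt with a capital "O", which looks like a typo in the command that was actually typed rather than the name suggested above. Class names are case-sensitive, so (assuming the compiled class really is in a package named WordCount) the spelling has to match exactly:

/usr/local/hadoop/bin/hadoop jar WordCount.jar WordCount.WordCount /input /output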
幽幽浮浮 2013-12-17
I don't have the Eclipse plugin installed, so I only run from the command line. With the package name, would it be: hadoop jar WordCount.jar WordCount.WordCount /input /output ?
