eclipse Run on Hadoop java.lang.NullPointerException

念来过倒蛋笨 2014-08-20 09:23:05
Hadoop version: 2.4.0
Hadoop environment: Ubuntu 14.04
Eclipse environment: Windows 7
I have connected Eclipse to Hadoop successfully and can already upload folders and files to HDFS, but when I run WordCount (the bundled example code) via Run as -> Run on Hadoop, no server-selection prompt appears and it fails immediately with:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:68)
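A NullPointerException at ProcessBuilder.start in this trace typically means Hadoop's Shell class could not locate winutils.exe on the Windows side, so it ends up executing a null command while setting permissions on the local job staging directory. Besides setting the HADOOP_HOME environment variable (see the accepted fix in the replies below), the same hint can be supplied in code before the job is built. This is a minimal sketch; C:\hadoop is an assumed install path, not one from this thread:

// Hedged sketch: point Hadoop at a Windows-side install that contains
// bin\winutils.exe. "C:\\hadoop" is an assumed path; adjust to your setup.
public class HadoopHomeFix {
    public static void apply() {
        if (System.getProperty("hadoop.home.dir") == null
                && System.getenv("HADOOP_HOME") == null) {
            System.setProperty("hadoop.home.dir", "C:\\hadoop");
        }
    }
}

Call HadoopHomeFix.apply() as the first line of main(), before creating the Configuration.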

WordCount code:

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: emits (word, 1) for every token in the input line.
    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts per word.
    public static class IntSumReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Input/output paths are hardcoded rather than taken from args.
        String[] names = new String[] { "input", "output" };
        String[] otherArgs = new GenericOptionsParser(conf, names)
                .getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
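One detail worth flagging in the code above: because the paths are hardcoded as the relative names "input" and "output", they resolve against whatever default filesystem the Configuration picks up. If the cluster's core-site.xml is not on the Eclipse project's classpath, they silently resolve to the local filesystem rather than HDFS on the Ubuntu machine. A minimal, hedged way to pin the job to the cluster explicitly (the namenode host and port below are placeholders, not values from this post):

// Hedged sketch: force the default filesystem to the remote HDFS so the
// relative "input"/"output" paths resolve on the cluster, not locally.
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://namenode-host:9000"); // placeholder address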


Can any expert help me out...
7 replies
Sixian_Go 2016-04-20
How do you solve this problem when it appears on Linux? Mine involves a single-table join; hoping someone can help, thanks.
shijiebei2009 2014-11-08
Looking at this crowd, I'm speechless. That is a Windows-side configuration; you don't need it on Linux.
iyarcxy123 2014-08-20
Quoting reply #1 from z363115269:
I've already solved it: put winutils.exe in Hadoop's bin directory, set HADOOP_HOME in the environment variables, and copy hadoop.dll into C:\Windows\System32.
My hadoop 2.2 is on Linux, and running MapReduce classes written in Eclipse also throws this error. What do you mean by putting winutils.exe under Hadoop's bin? Do you mean the bin of the Hadoop on Linux?
念来过倒蛋笨 2014-08-20
I've already solved it: put winutils.exe in Hadoop's bin directory, set HADOOP_HOME in the environment variables, and copy hadoop.dll into C:\Windows\System32, and that's it.
zhu82722873 2014-08-20
A bunch of pigs on this forum, all posing as gurus. Nobody with real skill has time to play around here!
念来过倒蛋笨 2014-08-20
Quoting reply #2 from iyarcxy123:
[quoting reply #1 from z363115269:] I've already solved it: put winutils.exe in Hadoop's bin directory, set HADOOP_HOME in the environment variables, and copy hadoop.dll into C:\Windows\System32.
My hadoop 2.2 is on Linux, and running MapReduce classes written in Eclipse also throws this error. What do you mean by putting winutils.exe under Hadoop's bin? Do you mean the bin of the Hadoop on Linux?
It's the Windows-side one. Doesn't Eclipse ask you to configure a Hadoop installation directory? Put it inside that Windows-side Hadoop.
念来过倒蛋笨 2014-08-20
It's the Windows-side one. Doesn't Eclipse ask you to configure a Hadoop installation directory? Put it inside that Windows-side Hadoop.
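Pulling the fix in this thread together, here is a small hedged sanity check that can be run on the Windows side before retrying Run on Hadoop. The paths only mirror what the replies above describe and are otherwise assumptions:

import java.io.File;

// Hedged sanity check for the Windows-side fix described in this thread:
// HADOOP_HOME set, bin\winutils.exe present, hadoop.dll in System32.
public class WinutilsCheck {
    public static void main(String[] args) {
        String home = System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + home);
        System.out.println("winutils.exe present: "
                + (home != null && new File(home, "bin\\winutils.exe").isFile()));
        System.out.println("hadoop.dll in System32: "
                + new File("C:\\Windows\\System32\\hadoop.dll").isFile());
    }
}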
