Hadoop 2.0 map tasks keep failing, please help!
Can anyone explain what is going on here? I wrote a MapReduce job with essentially no logic in it. It doesn't report any errors when it runs, but the map task always fails, and the logs contain no error messages either. I'm at a loss. This is a freshly built fully distributed cluster, so I can't tell whether the environment is broken or my code is wrong.
[hadoop@namenode Desktop]$ hadoop jar demo.jar com.hadoop.demo.ColorBalls /db/db /result/
13/12/28 00:09:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/28 00:09:36 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/12/28 00:09:36 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/12/28 00:09:36 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/12/28 00:09:37 INFO input.FileInputFormat: Total input paths to process : 1
13/12/28 00:09:37 INFO mapreduce.JobSubmitter: number of splits:1
13/12/28 00:09:37 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/28 00:09:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1388155188191_0003
13/12/28 00:09:38 INFO client.YarnClientImpl: Submitted application application_1388155188191_0003 to ResourceManager at namenode/192.168.159.10:18040
13/12/28 00:09:38 INFO mapreduce.Job: The url to track the job: http://namenode:18088/proxy/application_1388155188191_0003/
13/12/28 00:09:38 INFO mapreduce.Job: Running job: job_1388155188191_0003
13/12/28 00:09:47 INFO mapreduce.Job: Job job_1388155188191_0003 running in uber mode : false
13/12/28 00:09:47 INFO mapreduce.Job: map 0% reduce 0%
13/12/28 00:09:48 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_0, Status : FAILED
13/12/28 00:09:52 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_1, Status : FAILED
13/12/28 00:09:55 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_2, Status : FAILED
13/12/28 00:09:58 INFO mapreduce.Job: map 100% reduce 0%
13/12/28 00:09:58 INFO mapreduce.Job: Job job_1388155188191_0003 failed with state FAILED due to: Task failed task_1388155188191_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
13/12/28 00:09:58 INFO mapreduce.Job: Counters: 9
        Job Counters
                Failed map tasks=4
                Launched map tasks=4
                Other local map tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4943
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
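Is there somewhere else I should be looking? I understand the real exception should be in the per-task stderr rather than in this console output. Would this be the right way to pull it (assuming log aggregation is enabled on my cluster, with the application id taken from the run above)?

```shell
# Assumes YARN log aggregation is enabled; the id comes from the console output above.
yarn logs -applicationId application_1388155188191_0003
```

Otherwise I guess I can follow the tracking URL (http://namenode:18088/proxy/application_1388155188191_0003/) and click through to the failed attempt's logs.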
Here is my code:
package com.hadoop.demo;

import java.io.IOException;
import java.util.*;
import java.util.Map.Entry;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ColorBalls extends Configured implements Tool {

    public static class ColorBallsMapper extends Mapper<IntWritable, Text, IntWritable, Text> {
        public void ColorBallsMap(IntWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static class ColorBallsReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
        public void reduce(IntWritable key, Iterator<Text> values, Context context)
                throws IOException, InterruptedException {
            while (values.hasNext()) {
                context.write(null, values.next());
            }
        }
    }

    public void setConf(Configuration conf) {
        // TODO Auto-generated method stub
    }

    public Configuration getConf() {
        // TODO Auto-generated method stub
        return null;
    }

    public int run(String[] args) throws Exception {
        Job job = new Job(new Configuration());
        job.setJarByClass(ColorBalls.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(ColorBalls.ColorBallsMapper.class);
        job.setReducerClass(ColorBalls.ColorBallsReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // TODO Auto-generated method stub
        int i = ToolRunner.run(new ColorBalls(), args);
        System.exit(i);
    }
}
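Update: after staring at this some more, I suspect the job setup is fine and the bugs are in my own classes. These are guesses, since I still haven't seen the task exception:

1. My mapper method is named ColorBallsMap, not map, so it never overrides Mapper.map. The framework calls the inherited default map, which just passes the key and value through.
2. KeyValueTextInputFormat produces Text keys, but the job declares IntWritable as the map output key class. With the identity map running, the framework receives a Text key where it expects IntWritable and should throw a "Type mismatch in key from map" IOException, which would explain a map task that fails on every attempt.
3. Same problem in the reducer: the real signature is reduce(KEY key, Iterable<VALUE> values, Context context), not Iterator, so my reduce is dead code too.

An @Override annotation would have caught 1 and 3 at compile time. Here is a minimal plain-Java sketch of the override pitfall; BaseMapper is a stand-in I wrote for org.apache.hadoop.mapreduce.Mapper (whose default map is an identity pass-through), not Hadoop itself:

```java
// Sketch of the pitfall: the framework base class only ever calls map();
// a method with any other name is never invoked.
public class OverridePitfall {

    // Stand-in for org.apache.hadoop.mapreduce.Mapper.
    public static class BaseMapper {
        // Default behavior: identity pass-through, like Mapper.map.
        protected String map(String key, String value) {
            return key + "\t" + value;
        }
        // The framework's run loop only knows about map().
        public final String run(String key, String value) {
            return map(key, value);
        }
    }

    // Mirrors my ColorBallsMapper: the name doesn't match map(), so this
    // class silently inherits the identity map() instead of overriding it.
    public static class MisnamedMapper extends BaseMapper {
        public String ColorBallsMap(String key, String value) {
            return "custom:" + value;
        }
    }

    // With @Override the compiler rejects a misnamed method outright.
    public static class FixedMapper extends BaseMapper {
        @Override
        protected String map(String key, String value) {
            return "custom:" + value;
        }
    }

    public static void main(String[] args) {
        System.out.println(new MisnamedMapper().run("1", "red")); // prints "1	red": the inherited identity map ran
        System.out.println(new FixedMapper().run("1", "red"));    // prints "custom:red": the override ran
    }
}
```

If that diagnosis is right, the fix in my real job would be: rename the method to map and annotate it @Override, change the mapper/reducer key generics and job.setOutputKeyClass to Text (to match KeyValueTextInputFormat), and switch the reducer parameter to Iterable<Text>.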