Hadoop 2.0 map tasks keep failing, please help~~~

soapppp 2013-12-28 12:13:00
Can anyone explain what is going on here? I wrote a MapReduce job with essentially no logic in it; it runs without reporting any error, yet the map tasks always fail, and there is no error message in the logs either. I'm losing my mind. This is a freshly built fully distributed cluster, so I can't tell whether the problem is the environment or my code...

[hadoop@namenode Desktop]$ hadoop jar demo.jar com.hadoop.demo.ColorBalls /db/db /result/
13/12/28 00:09:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/28 00:09:36 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
13/12/28 00:09:36 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
13/12/28 00:09:36 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/12/28 00:09:37 INFO input.FileInputFormat: Total input paths to process : 1
13/12/28 00:09:37 INFO mapreduce.JobSubmitter: number of splits:1
13/12/28 00:09:37 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/28 00:09:37 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/28 00:09:37 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/28 00:09:37 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/28 00:09:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1388155188191_0003
13/12/28 00:09:38 INFO client.YarnClientImpl: Submitted application application_1388155188191_0003 to ResourceManager at namenode/192.168.159.10:18040
13/12/28 00:09:38 INFO mapreduce.Job: The url to track the job: http://namenode:18088/proxy/application_1388155188191_0003/
13/12/28 00:09:38 INFO mapreduce.Job: Running job: job_1388155188191_0003
13/12/28 00:09:47 INFO mapreduce.Job: Job job_1388155188191_0003 running in uber mode : false
13/12/28 00:09:47 INFO mapreduce.Job: map 0% reduce 0%
13/12/28 00:09:48 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_0, Status : FAILED


13/12/28 00:09:52 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_1, Status : FAILED


13/12/28 00:09:55 INFO mapreduce.Job: Task Id : attempt_1388155188191_0003_m_000000_2, Status : FAILED


13/12/28 00:09:58 INFO mapreduce.Job: map 100% reduce 0%
13/12/28 00:09:58 INFO mapreduce.Job: Job job_1388155188191_0003 failed with state FAILED due to: Task failed task_1388155188191_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

13/12/28 00:09:58 INFO mapreduce.Job: Counters: 9
        Job Counters
                Failed map tasks=4
                Launched map tasks=4
                Other local map tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4943
                Total time spent by all reduces in occupied slots (ms)=0
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0


Here is my code:

package com.hadoop.demo;

import java.io.IOException;
import java.util.*;
import java.util.Map.Entry;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;




public class ColorBalls extends Configured implements Tool {

    public static class ColorBallsMapper extends Mapper<IntWritable, Text, IntWritable, Text> {

        public void ColorBallsMap(IntWritable key, Text value, Context context) throws IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static class ColorBallsReducer extends Reducer<IntWritable, Text, IntWritable, Text> {

        public void reduce(IntWritable key, Iterator<Text> values, Context context) throws IOException, InterruptedException {
            while (values.hasNext()) {
                context.write(null, values.next());
            }
        }
    }

    public void setConf(Configuration conf) {
        // TODO Auto-generated method stub
    }

    public Configuration getConf() {
        // TODO Auto-generated method stub
        return null;
    }

    public int run(String args[]) throws Exception {
        Job job = new Job(new Configuration());
        job.setJarByClass(ColorBalls.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(ColorBalls.ColorBallsMapper.class);
        job.setReducerClass(ColorBalls.ColorBallsReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(Text.class);
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String args[]) throws Exception {
        // TODO Auto-generated method stub
        int i = ToolRunner.run(new ColorBalls(), args);
        System.exit(i);
    }
}
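For reference: KeyValueTextInputFormat splits each input line at the first tab and hands the mapper a Text key and a Text value, and the framework only calls a method that actually overrides Mapper.map() or Reducer.reduce(). A pass-through pair whose types match that input format might look roughly like the sketch below (the class names are hypothetical and this is only an illustration, not the code that produced the log above; it reuses the same imports):

public static class PassThroughMapper extends Mapper<Text, Text, Text, Text> {

    @Override
    protected void map(Text key, Text value, Context context) throws IOException, InterruptedException {
        context.write(key, value);  // pass each key/value pair through unchanged
    }
}

public static class PassThroughReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        for (Text value : values) {
            context.write(key, value);  // emit every grouped value with its key
        }
    }
}

With this pair the driver would also pass Text.class to job.setOutputKeyClass and job.setOutputValueClass.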
4 replies
江南浙里 2015-02-04
job.setInputFormatClass(KeyValueTextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
These two lines actually helped me out, thanks!
soapppp 2013-12-28
On the datanode, /hadoop-2.0-cdh4.4/logs/userlogs/application_1388196388963_0002/container_1388196388963_0002_01_000001/stdout contains this error:

java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    at org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator$1.run(DefaultSpeculator.java:189)
    at java.lang.Thread.run(Thread.java:744)

and hadoop-2.0-cdh4.4/logs/userlogs/application_1388196388963_0002/container_1388196388963_0002_01_000003/stdout contains this one:

Error occurred during initialization of VM
Too small initial heap
soapppp 2013-12-28
Bumping my own thread.
少主无翼 2013-12-28
"Error occurred during initialization of VM / Too small initial heap" means the task JVM's heap is too small. Try increasing the memory by editing mapred-site.xml:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
  <final>true</final>
</property>

I haven't used the 2.0 version, so I'm not sure about the details.
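For reference, the same heap setting can also be applied per job from the driver. The sketch below is only an illustration: it uses the Hadoop 2.x (YARN) property names mapreduce.map.java.opts, mapreduce.reduce.java.opts, mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, and the 512m / 1024 MB figures are example values, not settings verified on this cluster:

public int run(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Heap for each task JVM; the YARN container size must be larger than the heap.
    conf.set("mapreduce.map.java.opts", "-Xmx512m");
    conf.set("mapreduce.reduce.java.opts", "-Xmx512m");
    conf.setInt("mapreduce.map.memory.mb", 1024);
    conf.setInt("mapreduce.reduce.memory.mb", 1024);
    Job job = new Job(conf);
    // ... remaining job setup exactly as in the posted run() method ...
    return job.waitForCompletion(true) ? 0 : 1;
}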
