[root@master ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2014-08-26 00:56 /input
[root@master ~]# hdfs dfs -ls /input
Found 1 items
-rw-r--r-- 3 root supergroup 22 2014-08-26 00:56 /input/file.txt
[root@master ~]#
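(The input was presumably staged beforehand with something like the following; the local file name here is an assumption:)

[root@master ~]# hdfs dfs -mkdir -p /input
[root@master ~]# hdfs dfs -put file.txt /input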
Running the program fails with the error below:

[root@master ~]# hadoop jar wordcount.jar WordCount hdfs://master:9000/input/file.txt hdfs://master:9000/output
14/08/26 01:14:50 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/08/26 01:14:50 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/08/26 01:14:50 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
14/08/26 01:14:51 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/08/26 01:14:51 INFO mapred.FileInputFormat: Total input paths to process : 1
14/08/26 01:14:51 INFO mapreduce.JobSubmitter: number of splits:1
14/08/26 01:14:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local545501673_0001
14/08/26 01:14:51 WARN conf.Configuration: file:/data/hadoop/tmp/mapred/staging/root545501673/.staging/job_local545501673_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/08/26 01:14:51 WARN conf.Configuration: file:/data/hadoop/tmp/mapred/staging/root545501673/.staging/job_local545501673_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/08/26 01:14:51 WARN conf.Configuration: file:/data/hadoop/tmp/mapred/local/localRunner/root/job_local545501673_0001/job_local545501673_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
14/08/26 01:14:51 WARN conf.Configuration: file:/data/hadoop/tmp/mapred/local/localRunner/root/job_local545501673_0001/job_local545501673_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
14/08/26 01:14:51 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/08/26 01:14:51 INFO mapreduce.Job: Running job: job_local545501673_0001
14/08/26 01:14:51 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/08/26 01:14:51 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
14/08/26 01:14:52 INFO mapred.LocalJobRunner: Waiting for map tasks
14/08/26 01:14:52 INFO mapred.LocalJobRunner: Starting task: attempt_local545501673_0001_m_000000_0
14/08/26 01:14:52 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
14/08/26 01:14:52 INFO mapred.MapTask: Processing split: hdfs://master:9000/input/file.txt:0+22
14/08/26 01:14:52 INFO mapred.MapTask: numReduceTasks: 1
14/08/26 01:14:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/08/26 01:14:52 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/08/26 01:14:52 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/08/26 01:14:52 INFO mapred.MapTask: soft limit at 83886080
14/08/26 01:14:52 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/08/26 01:14:52 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/08/26 01:14:52 INFO mapred.LocalJobRunner:
14/08/26 01:14:52 INFO mapred.MapTask: Starting flush of map output
14/08/26 01:14:52 INFO mapred.MapTask: Spilling map output
14/08/26 01:14:52 INFO mapred.MapTask: bufstart = 0; bufend = 38; bufvoid = 104857600
14/08/26 01:14:52 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
14/08/26 01:14:52 INFO mapred.MapTask: Finished spill 0
14/08/26 01:14:52 INFO mapred.Task: Task:attempt_local545501673_0001_m_000000_0 is done. And is in the process of committing
14/08/26 01:14:52 INFO mapred.LocalJobRunner: hdfs://master:9000/input/file.txt:0+22
14/08/26 01:14:52 INFO mapred.Task: Task 'attempt_local545501673_0001_m_000000_0' done.
14/08/26 01:14:52 INFO mapred.LocalJobRunner: Finishing task: attempt_local545501673_0001_m_000000_0
14/08/26 01:14:52 INFO mapred.LocalJobRunner: map task executor complete.
14/08/26 01:14:52 INFO mapred.LocalJobRunner: Waiting for reduce tasks
14/08/26 01:14:52 INFO mapred.LocalJobRunner: Starting task: attempt_local545501673_0001_r_000000_0
14/08/26 01:14:52 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
14/08/26 01:14:52 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@fcc06e0
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
14/08/26 01:14:52 INFO reduce.EventFetcher: attempt_local545501673_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
14/08/26 01:14:52 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local545501673_0001_m_000000_0 decomp: 48 len: 52 to MEMORY
14/08/26 01:14:52 INFO reduce.InMemoryMapOutput: Read 48 bytes from map-output for attempt_local545501673_0001_m_000000_0
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 48, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->48
14/08/26 01:14:52 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
14/08/26 01:14:52 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
14/08/26 01:14:52 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:263)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:142)
at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/08/26 01:14:52 INFO mapred.Merger: Merging 1 sorted segments
14/08/26 01:14:52 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 42 bytes
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: Merged 1 segments, 48 bytes to disk to satisfy reduce memory limit
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: Merging 1 files, 52 bytes from disk
14/08/26 01:14:52 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
14/08/26 01:14:52 INFO mapred.Merger: Merging 1 sorted segments
14/08/26 01:14:52 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 42 bytes
14/08/26 01:14:52 INFO mapred.LocalJobRunner: 1 / 1 copied.
14/08/26 01:14:52 INFO mapred.LocalJobRunner: reduce task executor complete.
14/08/26 01:14:52 WARN mapred.LocalJobRunner: job_local545501673_0001
java.lang.Exception: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.mapred.Reducer.<init>()
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.mapred.Reducer.<init>()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:409)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.mapred.Reducer.<init>()
at java.lang.Class.getConstructor0(Class.java:2849)
at java.lang.Class.getDeclaredConstructor(Class.java:2053)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
... 8 more
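Two asides on this log before the code. The WARN "Hadoop command-line option parsing not performed" near the top is unrelated to the failure; it only means the driver does not go through ToolRunner, so generic options (-D, -files, -libjars) are not parsed. A minimal sketch of the usual Tool/ToolRunner wrapper (the class name WordCountDriver is made up for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool
{
    @Override
    public int run(String[] args) throws Exception
    {
        // Build and submit the job here using getConf(), so that
        // generic options passed on the command line take effect.
        return 0;
    }

    public static void main(String[] args) throws Exception
    {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

The EBADF readahead warning from NativeIO appears to be harmless noise under the LocalJobRunner and is not the cause either; the real failure is the NoSuchMethodException on org.apache.hadoop.mapred.Reducer.<init>().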
Here is the source. First, a version written against the new org.apache.hadoop.mapreduce API:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount
{
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
        {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens())
            {
                word.set(tokenizer.nextToken());
                try
                {
                    context.write(word, one);
                }
                catch (IOException | InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
        {
            int sum = 0;
            for (IntWritable val : values)
            {
                sum += val.get();
            }
            result.set(sum);
            try
            {
                context.write(key, result);
            }
            catch (IOException | InterruptedException e)
            {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args)
    {
        Configuration conf = new Configuration();
        Job job = null;
        try
        {
            job = Job.getInstance(conf);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        job.setJarByClass(WordCount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        try
        {
            FileInputFormat.addInputPath(job, new Path("hdfs://master:9000/input"));
            FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/output"));
        }
        catch (IllegalArgumentException | IOException e)
        {
            e.printStackTrace();
        }
        try
        {
            job.submit();
        }
        catch (ClassNotFoundException | IOException | InterruptedException e)
        {
            e.printStackTrace();
        }
    }
}
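One remark on this version: job.submit() returns immediately without waiting for the job or reporting its result, and because the exception from Job.getInstance is swallowed, job could still be null on the next line. The usual driver pattern blocks on waitForCompletion; a minimal sketch of a tightened main (same class, exceptions simply declared):

public static void main(String[] args) throws Exception
{
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "wordcount");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // waitForCompletion(true) blocks and prints progress; exit nonzero if the job fails
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}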
And a second version, written against the old org.apache.hadoop.mapred API. Judging from the stack trace (JobClient.runJob, ReduceTask.runOldReducer), this is the version that produced the log above:

//package helloworld;
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
public class WordCount
{
    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
        {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens())
            {
                word.set(tokenizer.nextToken());
                try
                {
                    output.collect(word, one);
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable>
    {
        @Override
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter)
        {
            int sum = 0;
            while (values.hasNext())
            {
                sum += values.next().get();
            }
            try
            {
                output.collect(key, new IntWritable(sum));
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args)
    {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reducer.class); // note: this registers the org.apache.hadoop.mapred.Reducer interface itself, not the Reduce class above
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        try
        {
            JobClient.runJob(conf);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
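That flagged line appears to be the problem. The log's NoSuchMethodException names org.apache.hadoop.mapred.Reducer.<init>(): reflection is trying to instantiate the Reducer interface itself, which has no constructor, and that is exactly what registering Reducer.class asks for. Assuming this old-API version is the one packaged into wordcount.jar, the fix is one line:

// register the concrete implementation, not the org.apache.hadoop.mapred.Reducer interface
conf.setReducerClass(Reduce.class);

(The new-API version above already gets this right with job.setReducerClass(Reduce.class).)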
The rest of the console output:

14/08/26 01:14:52 INFO mapreduce.Job: Job job_local545501673_0001 running in uber mode : false
14/08/26 01:14:52 INFO mapreduce.Job: map 100% reduce 0%
14/08/26 01:14:52 INFO mapreduce.Job: Job job_local545501673_0001 failed with state FAILED due to: NA
14/08/26 01:14:52 INFO mapreduce.Job: Counters: 38
	File System Counters
		FILE: Number of bytes read=3661
		FILE: Number of bytes written=222411
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=22
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=5
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=1
	Map-Reduce Framework
		Map input records=1
		Map output records=4
		Map output bytes=38
		Map output materialized bytes=52
		Input split bytes=85
		Combine input records=0
		Combine output records=0
		Reduce input groups=0
		Reduce shuffle bytes=52
		Reduce input records=0
		Reduce output records=0
		Spilled Records=4
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=0
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=212336640
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=22
	File Output Format Counters
		Bytes Written=0
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at WordCount.main(WordCount.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
[root@master ~]#
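The counters are consistent with that diagnosis: Map output records=4 but Reduce input groups=0 and Reduce output records=0, i.e. the map side completed and the job died before the reducer ever ran. After correcting setReducerClass, the job should run; if the failed run left an /output directory behind, it has to be removed first, since FileOutputFormat refuses to write into an existing path. Something like (the part-00000 name assumes the old API's default single reducer):

[root@master ~]# hdfs dfs -rm -r /output
[root@master ~]# hadoop jar wordcount.jar WordCount hdfs://master:9000/input/file.txt hdfs://master:9000/output
[root@master ~]# hdfs dfs -cat /output/part-00000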