Problem with the bundled wordcount example in Hadoop 2.6.0

yezhirm7 2017-06-13 03:20:53
I finished installing Hadoop and wanted to run a small example to see it work, but the MapReduce job ran into a problem.
hadoop@master:/usr/hadoop/hadoop-2.6.0$ ./bin/hadoop fs -ls /tmp/input/
Found 4 items
-rw-r--r-- 3 hadoop supergroup 21 2017-06-13 15:07 /tmp/input/f1
-rw-r--r-- 3 hadoop supergroup 25 2017-06-13 15:07 /tmp/input/f2
-rw-r--r-- 3 hadoop supergroup 12 2017-06-13 15:07 /tmp/input/test1.txt
-rw-r--r-- 3 hadoop supergroup 20 2017-06-13 15:07 /tmp/input/test2.txt



hadoop@master:/usr/hadoop/hadoop-2.6.0$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
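(A side note for when the job is retried: the wordcount example refuses to start if the output path already exists, and a failed attempt typically leaves /output behind. Assuming the same HDFS paths as above, clearing it first would look like this:)

# Remove the leftover output directory before resubmitting; otherwise the
# example aborts with "Output directory ... already exists".
./bin/hadoop fs -rm -r /output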

Below is the run output. The problem shows up right after "Job: map 0% reduce 0%". Can anyone help me figure this out? ^^

17/06/13 15:08:55 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.1.100:8032
17/06/13 15:08:55 INFO input.FileInputFormat: Total input paths to process : 4
17/06/13 15:08:55 INFO mapreduce.JobSubmitter: number of splits:4
17/06/13 15:08:56 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1497335254625_0001
17/06/13 15:08:56 INFO impl.YarnClientImpl: Submitted application application_1497335254625_0001
17/06/13 15:08:56 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1497335254625_0001/
17/06/13 15:08:56 INFO mapreduce.Job: Running job: job_1497335254625_0001
17/06/13 15:09:01 INFO mapreduce.Job: Job job_1497335254625_0001 running in uber mode : false
17/06/13 15:09:01 INFO mapreduce.Job: map 0% reduce 0%
17/06/13 15:09:07 INFO mapreduce.Job: Task Id : attempt_1497335254625_0001_m_000001_0, Status : FAILED
Exception from container-launch.
Container id: container_1497335254625_0001_01_000003
Exit code: 1
Exception message: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData /nm-local-dir/usercache/hadoop/appcache/application_1497335254625_0001/container_1497335254625_0001_01_000003/default_container_executor_session.sh: line 3: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData: Is a directory
/bin/mv: target "/nm-local-dir/nmPrivate/application_1497335254625_0001/container_1497335254625_0001_01_000003/container_1497335254625_0001_01_000003.pid" is not a directory

Stack trace: ExitCodeException exitCode=1: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData /nm-local-dir/usercache/hadoop/appcache/application_1497335254625_0001/container_1497335254625_0001_01_000003/default_container_executor_session.sh: line 3: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData: Is a directory
/bin/mv: target "/nm-local-dir/nmPrivate/application_1497335254625_0001/container_1497335254625_0001_01_000003/container_1497335254625_0001_01_000003.pid" is not a directory

at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/06/13 15:09:13 INFO mapreduce.Job: Task Id : attempt_1497335254625_0001_m_000000_0, Status : FAILED
Exception from container-launch.
Container id: container_1497335254625_0001_01_000002
Exit code: 1

3 replies
The argument after the input directory should be the path of the files you want processed, and it should be specified explicitly. If you mean all files under that folder, it should be /tmp/input/* /output
shadon178 2017-06-16
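(For reference, the invocation this reply suggests would look like the sketch below, using the same jar and paths as the original post. Note that the original log already reports "Total input paths to process : 4", so FileInputFormat does pick up every file in the directory when the directory itself is passed; the glob form is just more explicit.)

# Invocation with an explicit glob over the input files (quoted so the local
# shell does not expand it; the glob is resolved against HDFS).
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar \
    wordcount '/tmp/input/*' /output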
It's probably a directory problem; check your directories carefully.
yezhirm7 2017-06-13
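(Reading the "Exception message" lines above literally, the path the launch script tries to use is "/usr/hadoop/hadoop-2.6.0/tmp/hadoopData /nm-local-dir/...": there is a space after "hadoopData", so the shell treats the first half as a separate argument, complains that it is a directory, and the following mv of the .pid file fails. That pattern usually means the directory value in the site configuration, for example hadoop.tmp.dir in core-site.xml or yarn.nodemanager.local-dirs in yarn-site.xml, contains a stray space or line break. A minimal way to check, assuming the default config location of a 2.6.0 install:)

# Look for whitespace (including a line break) inside the <value> of the
# directory-related properties; each value must be a single, unbroken path.
cd /usr/hadoop/hadoop-2.6.0/etc/hadoop
grep -n -A 3 'hadoop.tmp.dir' core-site.xml
grep -n -A 3 'yarn.nodemanager.local-dirs' yarn-site.xml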
…… …… …… ……
Container exited with a non-zero exit code 1

17/06/13 15:09:27 INFO mapreduce.Job: Task Id : attempt_1497335254625_0001_m_000000_2, Status : FAILED
Exception from container-launch.
Container id: container_1497335254625_0001_01_000014
Exit code: 1
Exception message: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData /nm-local-dir/usercache/hadoop/appcache/application_1497335254625_0001/container_1497335254625_0001_01_000014/default_container_executor_session.sh: line 3: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData: Is a directory
/bin/mv: target "/nm-local-dir/nmPrivate/application_1497335254625_0001/container_1497335254625_0001_01_000014/container_1497335254625_0001_01_000014.pid" is not a directory

Stack trace: ExitCodeException exitCode=1: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData /nm-local-dir/usercache/hadoop/appcache/application_1497335254625_0001/container_1497335254625_0001_01_000014/default_container_executor_session.sh: line 3: /usr/hadoop/hadoop-2.6.0/tmp/hadoopData: Is a directory
/bin/mv: target "/nm-local-dir/nmPrivate/application_1497335254625_0001/container_1497335254625_0001_01_000014/container_1497335254625_0001_01_000014.pid" is not a directory

at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1

17/06/13 15:09:29 INFO mapreduce.Job: map 100% reduce 100%
17/06/13 15:09:29 INFO mapreduce.Job: Job job_1497335254625_0001 failed with state FAILED due to: Task failed task_1497335254625_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

17/06/13 15:09:30 INFO mapreduce.Job: Counters: 13
  Job Counters
    Failed map tasks=13
    Killed map tasks=2
    Launched map tasks=15
    Other local map tasks=12
    Data-local map tasks=4
    Total time spent by all maps in occupied slots (ms)=79969
    Total time spent by all reduces in occupied slots (ms)=0
    Total time spent by all map tasks (ms)=79969
    Total vcore-seconds taken by all map tasks=79969
    Total megabyte-seconds taken by all map tasks=81888256
  Map-Reduce Framework
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
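(If a stray space or newline in the configured directories does turn out to be the cause, a typical follow-up, sketched here under the assumption of the standard sbin scripts and the paths used in this thread, is to fix the value, restart YARN so the NodeManagers pick it up, clear the old output, and resubmit:)

# Run from /usr/hadoop/hadoop-2.6.0 after correcting the <value> entries.
./sbin/stop-yarn.sh && ./sbin/start-yarn.sh   # restart ResourceManager and NodeManagers
./bin/hadoop fs -rm -r /output                # drop the output of the failed run
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output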
