Running the pi example fails with exit code 137

jieluosanqiuye 2015-06-23 04:07:30
Hi everyone. I deployed JDK 1.8 and Hadoop 2.7.0 on Ubuntu. HDFS and YARN both start up normally, but when I run the pi example it fails, and I can't figure out why. Any pointers would be much appreciated, thanks!

yanggl@yanggl-VirtualBox:~/hadoop-2.7.0$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 10
Number of Maps = 2
Samples per Map = 10
Java HotSpot(TM) Client VM warning: You have loaded library /home/yanggl/hadoop-2.7.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
15/06/23 15:45:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
15/06/23 15:45:39 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/06/23 15:45:42 INFO input.FileInputFormat: Total input paths to process : 2
15/06/23 15:45:43 INFO mapreduce.JobSubmitter: number of splits:2
15/06/23 15:45:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435043164302_0003
15/06/23 15:45:45 INFO impl.YarnClientImpl: Submitted application application_1435043164302_0003
15/06/23 15:45:46 INFO mapreduce.Job: The url to track the job: http://yanggl-VirtualBox:8088/proxy/application_1435043164302_0003/
15/06/23 15:45:46 INFO mapreduce.Job: Running job: job_1435043164302_0003
15/06/23 15:46:05 INFO mapreduce.Job: Job job_1435043164302_0003 running in uber mode : false
15/06/23 15:46:05 INFO mapreduce.Job: map 0% reduce 0%
15/06/23 15:48:11 INFO mapreduce.Job: map 33% reduce 0%
15/06/23 15:48:13 INFO mapreduce.Job: map 50% reduce 0%
15/06/23 15:48:22 INFO mapreduce.Job: map 100% reduce 0%
15/06/23 15:49:10 INFO mapreduce.Job: Task Id : attempt_1435043164302_0003_m_000001_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal

15/06/23 15:49:17 INFO mapreduce.Job: map 50% reduce 0%
15/06/23 15:52:30 INFO mapreduce.Job: map 100% reduce 0%
15/06/23 15:53:16 INFO mapreduce.Job: map 100% reduce 17%
15/06/23 15:53:20 INFO mapreduce.Job: map 100% reduce 67%
15/06/23 15:53:29 INFO mapreduce.Job: map 100% reduce 100%
15/06/23 15:53:41 INFO mapreduce.Job: Job job_1435043164302_0003 completed successfully
15/06/23 15:53:48 INFO mapreduce.Job: Counters: 51
    File System Counters
        FILE: Number of bytes read=50
        FILE: Number of bytes written=344910
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=528
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=11
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Failed map tasks=1
        Launched map tasks=3
        Launched reduce tasks=1
        Other local map tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=562906
        Total time spent by all reduces in occupied slots (ms)=276847
        Total time spent by all map tasks (ms)=562906
        Total time spent by all reduce tasks (ms)=276847
        Total vcore-seconds taken by all map tasks=562906
        Total vcore-seconds taken by all reduce tasks=276847
        Total megabyte-seconds taken by all map tasks=576415744
        Total megabyte-seconds taken by all reduce tasks=283491328
    Map-Reduce Framework
        Map input records=2
        Map output records=4
        Map output bytes=36
        Map output materialized bytes=56
        Input split bytes=292
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=56
        Reduce input records=4
        Reduce output records=0
        Spilled Records=8
        Shuffled Maps =2
        Failed Shuffles=1
        Merged Map outputs=2
        GC time elapsed (ms)=30889
        CPU time spent (ms)=7160
        Physical memory (bytes) snapshot=323993600
        Virtual memory (bytes) snapshot=955912192
        Total committed heap usage (bytes)=245702656
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=1
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=236
    File Output Format Counters
        Bytes Written=97
15/06/23 15:53:49 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
15/06/23 15:53:51 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
... ...
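For reference, exit code 137 is 128 + 9: the container's JVM was killed with SIGKILL, which matches the "Killed by external signal" line above. On a small VirtualBox guest that almost always means the Linux OOM killer or YARN's own memory enforcement reclaimed the container, not a bug in the pi example itself. Below is a minimal sketch of the settings that usually help on a low-memory single-node setup, assuming the stock etc/hadoop layout; both values are illustrative guesses, not figures from this thread:

<!-- yarn-site.xml: hedged sketch for a low-RAM single-node VM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>   <!-- total RAM YARN may hand out to containers; size to the guest -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>  <!-- stop SIGKILLs for virtual-memory overuse -->
</property>

Two side notes: the stack-guard warning near the top is unrelated (Hadoop falls back to the built-in Java classes, and the execstack command quoted in the warning silences it), and the trailing retries against 0.0.0.0:10020 only mean the JobHistory server is not running; sbin/mr-jobhistory-daemon.sh start historyserver brings it up.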
2 replies
qq_33174666 2017-09-24
I'm getting this error too. It happens when reading large amounts of data from MongoDB, but small amounts run fine. No idea why.
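Small inputs succeeding while large ones get SIGKILLed is consistent with a map task outgrowing its container rather than anything MongoDB-specific. A hedged per-job sketch, assuming the default 1024 MB container size; 2048 and -Xmx1638m are illustrative numbers, with the heap kept roughly 20% below the container cap:

<!-- mapred-site.xml, or pass the same pairs as -D options on the command line -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>       <!-- container ceiling for each map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>  <!-- JVM heap, kept below the container ceiling -->
</property>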
柱子89 2016-05-12
I hit the same error. Changing the arguments to 1 and 1 lets it run normally, and it prints pi = 4.
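For context, the example estimates pi as 4 × (points falling inside the circle inscribed in the unit square) / (total points). With arguments 1 and 1 there is exactly one sample, so the only possible outputs are 4 × 1/1 = 4.0 and 4 × 0/1 = 0.0; pi = 4 is the expected, if useless, result rather than a further bug. That one map task survives where two were killed also fits the memory explanation above: the VM can apparently host a single map container, but not two side by side plus the ApplicationMaster.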
