A strange phenomenon when using a combiner
I've been using Hadoop recently and wrote a MapReduce program with a combiner. When I ran it on the cluster, I hit a very strange phenomenon:
12/04/28 16:44:44 INFO mapred.JobClient: Map-Reduce Framework
12/04/28 16:44:44 INFO mapred.JobClient: Map input records=8326122
12/04/28 16:44:44 INFO mapred.JobClient: Reduce shuffle bytes=565
12/04/28 16:44:44 INFO mapred.JobClient: Spilled Records=1172
12/04/28 16:44:44 INFO mapred.JobClient: Map output bytes=63635984
12/04/28 16:44:44 INFO mapred.JobClient: CPU time spent (ms)=322680
12/04/28 16:44:44 INFO mapred.JobClient: Total committed heap usage (bytes)=10796335104
12/04/28 16:44:44 INFO mapred.JobClient: Map input bytes=695976718
12/04/28 16:44:44 INFO mapred.JobClient: Combine input records=7955458
12/04/28 16:44:44 INFO mapred.JobClient: SPLIT_RAW_BYTES=1243
12/04/28 16:44:44 INFO mapred.JobClient: Reduce input records=74
12/04/28 16:44:44 INFO mapred.JobClient: Reduce input groups=1
12/04/28 16:44:44 INFO mapred.JobClient: Combine output records=1034
12/04/28 16:44:44 INFO mapred.JobClient: Physical memory (bytes) snapshot=10758852608
12/04/28 16:44:44 INFO mapred.JobClient: Reduce output records=1
12/04/28 16:44:44 INFO mapred.JobClient: Virtual memory (bytes) snapshot=15771127808
12/04/28 16:44:44 INFO mapred.JobClient: Map output records=7954498
Two of the counters above don't match up (they were highlighted in red in the original post), which leads to a wrong result; without the combiner the job produces the correct output.
Could someone analyze what the problem might be?
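One property worth checking: the MapReduce framework makes no guarantee about how many times the combiner runs — it may be applied zero, one, or several times (e.g. once per spill and again when spills are merged), so "Combine output records" need not equal "Reduce input records". The combiner must therefore be associative and commutative, and its output must be valid combiner input. The sketch below is a hypothetical Python simulation (not Hadoop code; `simulate`, `sum_combine`, and `mean_combine` are illustrative names) showing how a combiner that violates this contract produces wrong results:

```python
def sum_combine(values):
    # Associative and commutative: safe to apply any number of times.
    return [sum(values)]

def mean_combine(values):
    # NOT associative: the intermediate counts are lost, so
    # re-combining the partial results gives the wrong answer.
    return [sum(values) / len(values)]

def simulate(combiner, spills):
    # Mimics the framework: combine each spill, then combine
    # again when the spills are merged before the reducer.
    partials = [v for spill in spills for v in combiner(spill)]
    return combiner(partials)[0]

# Map output split across two spill files.
spills = [[1, 2, 3], [4, 5]]

print(simulate(sum_combine, spills))   # 15 -- same as summing all values once
print(simulate(mean_combine, spills))  # 3.25 -- but the true mean is 3.0
```

If your combiner does anything other than a purely associative reduction (averaging, counting distinct values, emitting a different key/value type than it consumes), that would explain why the job is only correct when the combiner is removed.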