Hive SELECT fails on the Hortonworks sandbox 2.0.6
A SELECT grouping by a single expression runs fine, but the same kind of query grouping by two or more expressions fails. Could anyone explain what is going on?
I am using the Hortonworks 2.0 sandbox (Hadoop 2.2.0), running in a virtual machine on Windows.
Case 1: SELECT with a single GROUP BY expression:
select trim(pID),max(trim(oID)),max(md),max(newDate)
from table_name group by trim(pID);
Runs successfully.
Case 2: SELECT grouping by two or more expressions:
select trim(pID),max(trim(oID)),trim(cID), trim(prID),max(md),max(newDate)
from table_name group by trim(pID), trim(cID), trim(prID);
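One workaround worth trying before digging into the logs (a sketch only, reusing the column names from the query above; not verified on this sandbox): apply trim() once in a subquery, so the outer GROUP BY references plain columns instead of repeating the function calls.

```sql
-- Hypothetical rewrite of the failing query: do the trims in an
-- inner query, then group the outer query by the resulting
-- plain columns rather than by trim(...) expressions.
select pID_t, max(oID_t), cID_t, prID_t, max(md), max(newDate)
from (
  select trim(pID)  as pID_t,
         trim(oID)  as oID_t,
         trim(cID)  as cID_t,
         trim(prID) as prID_t,
         md,
         newDate
  from table_name
) t
group by pID_t, cID_t, prID_t;
```

If this version succeeds where the original fails, that would point at the repeated GROUP BY expressions rather than at the data itself.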
Error output:
14/01/26 21:40:15 ERROR exec.Task: Ended Job = job_1390796761818_0004 with errors
14/01/26 21:40:16 INFO impl.YarnClientImpl: Killing application application_1390796761818_0004
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-1 start=1390801038548 end=1390801216121 duration=177573>
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=Driver.execute start=1390801038543 end=1390801216121 duration=177578>
MapReduce Jobs Launched:
14/01/26 21:40:16 INFO ql.Driver: MapReduce Jobs Launched:
14/01/26 21:40:16 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
14/01/26 21:40:16 INFO ql.Driver: Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 INFO ql.Driver: Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query
BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1390796761818_0004, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1390796761818_0004/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1390796761818_0004
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2014-01-26 21:37:28,063 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:38:29,232 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:30,698 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:32,745 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:33,774 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:34,959 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:36,021 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:37,074 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:38,114 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:39,185 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:40,218 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:39:40,252 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:41,372 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:42,422 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 16.97 sec
2014-01-26 21:39:43,464 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 18.51 sec
2014-01-26 21:39:44,495 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:40:13,540 Stage-1 map = 100%, reduce = 100%
MapReduce Total cumulative CPU time: 18 seconds 510 msec
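The "return code 2 from MapRedTask" message is generic; the actual cause is in the task logs of the killed application. On YARN (Hadoop 2.2.0) the aggregated logs can usually be pulled with the application id shown in the "Killing application" line above, for example:

```shell
# Fetch the aggregated task logs for the failed application
# (id copied from the log output above) and show the first
# Java exception with some surrounding context.
yarn logs -applicationId application_1390796761818_0004 | grep -m1 -A20 "Exception"
```

The stack trace printed there (often an OutOfMemoryError or a serialization error in the reducer) should say what actually failed, which the Hive driver output does not.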