Problem with SELECT ... GROUP BY in Hive on Hortonworks Sandbox 2.0

jpjiang4648 2014-01-28 08:55:30
I am running Hive queries on Hortonworks Sandbox 2.0. A SELECT that groups by a single column executes fine, but grouping by two or more columns fails. Could anyone help?

My environment: Hortonworks Sandbox 2.0.6 running in a virtual machine on Windows 7, with Hadoop 2.2.0 and Hive 0.12.

Case 1: GROUP BY on a single column

select trim(pID),max(trim(oID)),max(md),max(newDate)
from table_name group by trim(pID);

This executes successfully.
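(For comparison with Case 2 below, EXPLAIN can print the planned MapReduce stages without actually running the job; a minimal sketch against the same table_name as above:)

explain
select trim(pID), max(trim(oID)), max(md), max(newDate)
from table_name
group by trim(pID);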

Case 2: GROUP BY on two or more columns
select trim(pID),max(trim(oID)),trim(cID), trim(prID),max(md),max(newDate)
from table_name group by trim(pID), trim(cID), trim(prID);

This fails with the following error:
14/01/26 21:40:15 ERROR exec.Task: Ended Job = job_1390796761818_0004 with errors
14/01/26 21:40:16 INFO impl.YarnClientImpl: Killing application application_1390796761818_0004
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-1 start=1390801038548 end=1390801216121 duration=177573>
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=Driver.execute start=1390801038543 end=1390801216121 duration=177578>
MapReduce Jobs Launched:
14/01/26 21:40:16 INFO ql.Driver: MapReduce Jobs Launched:
14/01/26 21:40:16 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
14/01/26 21:40:16 INFO ql.Driver: Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 INFO ql.Driver: Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query
BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1390796761818_0004, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1390796761818_0004/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1390796761818_0004
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2014-01-26 21:37:28,063 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:38:29,232 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:30,698 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:32,745 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:33,774 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:34,959 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:36,021 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:37,074 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:38,114 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:39,185 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:40,218 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:39:40,252 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:41,372 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:42,422 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 16.97 sec
2014-01-26 21:39:43,464 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 18.51 sec
2014-01-26 21:39:44,495 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:40:13,540 Stage-1 map = 100%, reduce = 100%
MapReduce Total cumulative CPU time: 18 seconds 510 msec
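One thing that might help narrow this down (a sketch only, reusing table_name and the same columns as in the question; not a confirmed fix): force more than one reducer, since the log shows Hive estimated only a single reducer, and pre-compute the trimmed columns in a subquery so the outer GROUP BY references plain columns instead of repeated trim() expressions.

-- Debugging sketch, not a confirmed fix. The SET below is one of the knobs the
-- error log itself suggests; raising it tests whether the single estimated
-- reducer is the bottleneck inside the sandbox VM.
set mapred.reduce.tasks=2;

-- Rewrite: apply trim() once in an inner query, then group by plain columns.
select pID, max(oID), cID, prID, max(md), max(newDate)
from (
  select trim(pID)  as pID,
         trim(oID)  as oID,
         trim(cID)  as cID,
         trim(prID) as prID,
         md,
         newDate
  from table_name
) t
group by pID, cID, prID;

If the job still fails, "return code 2" only says that the MapReduce job died, not why; the actual task exception should be in the YARN task logs, reachable through the Tracking URL printed above or (if log aggregation is enabled on the sandbox) with yarn logs -applicationId application_1390796761818_0004.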