SELECT statement fails in Hive on the Hortonworks sandbox 2.0.6

jpjiang4648 2014-01-27 03:26:18
A SELECT statement with a single GROUP BY expression runs fine, but with two or more GROUP BY expressions it fails. Can anyone tell me what is going on?
I am running the Hortonworks 2.0 sandbox (Hadoop 2.2.0) in a virtual machine on Windows.

Case 1: SELECT with a single GROUP BY expression:

select trim(pID), max(trim(oID)), max(md), max(newDate)
from table_name
group by trim(pID);

Runs successfully!

Case 2: SELECT with two or more GROUP BY expressions:

select trim(pID), max(trim(oID)), trim(cID), trim(prID), max(md), max(newDate)
from table_name
group by trim(pID), trim(cID), trim(prID);

Error output:
14/01/26 21:40:15 ERROR exec.Task: Ended Job = job_1390796761818_0004 with errors
14/01/26 21:40:16 INFO impl.YarnClientImpl: Killing application application_1390796761818_0004
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-1 start=1390801038548 end=1390801216121 duration=177573>
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/01/26 21:40:16 INFO ql.Driver: </PERFLOG method=Driver.execute start=1390801038543 end=1390801216121 duration=177578>
MapReduce Jobs Launched:
14/01/26 21:40:16 INFO ql.Driver: MapReduce Jobs Launched:
14/01/26 21:40:16 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
14/01/26 21:40:16 INFO ql.Driver: Job 0: Map: 3 Reduce: 1 Cumulative CPU: 18.51 sec HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 INFO ql.Driver: Total MapReduce CPU Time Spent: 18 seconds 510 msec
14/01/26 21:40:16 ERROR beeswax.BeeswaxServiceImpl: Exception while processing query
BeeswaxException(message:Driver returned: 2. Errors: OK
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1390796761818_0004, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1390796761818_0004/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1390796761818_0004
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2014-01-26 21:37:28,063 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:38:29,232 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:30,698 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 38.41 sec
2014-01-26 21:38:32,745 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:33,774 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 39.67 sec
2014-01-26 21:38:34,959 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:36,021 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 42.34 sec
2014-01-26 21:38:37,074 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:38,114 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:39,185 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 43.42 sec
2014-01-26 21:38:40,218 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:39:40,252 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:41,372 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 24.74 sec
2014-01-26 21:39:42,422 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 16.97 sec
2014-01-26 21:39:43,464 Stage-1 map = 0%, reduce = 0%, Cumulative CPU 18.51 sec
2014-01-26 21:39:44,495 Stage-1 map = 0%, reduce = 0%
2014-01-26 21:40:13,540 Stage-1 map = 100%, reduce = 100%
MapReduce Total cumulative CPU time: 18 seconds 510 msec
6 replies
jpjiang4648 2014-02-24
Quoting reply #4 from tntzbzc: "LZ, you posted in the wrong forum; the Hadoop forum is here: http://bbs.csdn.net/forums/hadoop. Could a moderator move the thread over? What you posted is the Hive log, which doesn't tell us much. Post the MR log and the YARN log so we can see the actual error."
Thanks for the pointer, moderator. My problem is solved: some HDP directories had been accidentally deleted.
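The OP only says that "some HDP directories" had been deleted, without naming them. As a hedged sketch, one common way such a deletion breaks multi-key GROUP BY jobs is a missing Hive scratch or warehouse directory on HDFS. The paths below are stock defaults and are assumptions, not confirmed by this thread; check hive-site.xml on your own sandbox first:

```shell
#!/bin/sh
# Hypothetical recovery sketch -- the thread never names the deleted directories.
# /tmp/hive and /user/hive/warehouse are common Hive defaults on HDP; verify
# hive.exec.scratchdir and hive.metastore.warehouse.dir in hive-site.xml.
SCRATCH_DIR=/tmp/hive
WAREHOUSE_DIR=/user/hive/warehouse

if command -v hdfs >/dev/null 2>&1; then
  # List the directories, then recreate them if they are gone.
  hdfs dfs -ls "$SCRATCH_DIR" "$WAREHOUSE_DIR" 2>&1
  hdfs dfs -mkdir -p "$SCRATCH_DIR" "$WAREHOUSE_DIR"
else
  echo "hdfs CLI not found; run on the sandbox: hdfs dfs -ls $SCRATCH_DIR $WAREHOUSE_DIR"
fi
```

After recreating missing directories, permissions usually need restoring as well (e.g. a world-writable scratch dir), since Hive jobs run under the submitting user.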
海兰 2014-02-10
Quoting reply #4 from tntzbzc (the advice to post the MR and YARN logs): Only just found out there is a thread-move feature...
撸大湿 2014-02-09
Quoting reply #2 from jpjiang4648 (quoting reply #1 from s060403072): "Moderator, where have all the Hadoop experts in this forum gone?"
LZ, you posted in the wrong forum; the Hadoop forum is here: http://bbs.csdn.net/forums/hadoop. Could a moderator move the thread over? Also, what you posted is the Hive log, which doesn't tell us much. Post the MR log and the YARN log so we can see what the actual error is.
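For anyone hitting the same opaque "return code 2", the MR/YARN logs the moderator asks for can be pulled with the standard `yarn logs` CLI (available in Hadoop 2.2, assuming log aggregation is enabled). The application ID below is the one from the console output in the question:

```shell
#!/bin/sh
# Pull the aggregated container logs for the failed application; the real
# error is usually in a task's stderr/syslog, not in Hive's console output.
APP_ID=application_1390796761818_0004   # from the Hive console output above

if command -v yarn >/dev/null 2>&1; then
  yarn logs -applicationId "$APP_ID" > "${APP_ID}.log"
  # Scan for the first few real errors hidden behind "return code 2":
  grep -inE 'error|exception' "${APP_ID}.log" | head -n 5
else
  echo "yarn CLI not found; run on the sandbox: yarn logs -applicationId $APP_ID"
fi
```

If log aggregation is disabled, the same logs can be browsed through the tracking URL shown in the Hive output (the ResourceManager proxy on port 8088).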
海兰 2014-02-07
They have probably all gone home for Chinese New Year~~
jpjiang4648 2014-02-07
Quoting reply #1 from s060403072:
Moderator, where have all the Hadoop experts in this forum gone?
海兰 2014-01-28
