Spark job ran for 60 hours with no result and no error

xzg1109 2017-11-13 04:58:05
[root@master opt]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive2.1.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hive2.1.1/lib/spark-assembly-1.6.3-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hive2.1.1/lib/spark-examples-1.6.3-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/hive2.1.1/conf/hive-log4j2.properties Async: true
hive> select * from t1;
OK
1 wangming
2 wangfang
3 songyun
4 xiaoling
5 huiming
6 sunlu
Time taken: 3.758 seconds, Fetched: 6 row(s)

hive> select count(*) from t1;
Query ID = root_20171113114006_fc8ac12a-563c-4770-8f39-8e72f15b209b
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Spark Job = ad0e3f12-503b-4d2e-bc95-f89b54b44b76

Query Hive on Spark job[0] stages:
0
1

Status: Running (Hive on Spark job[0])
Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
2017-11-13 11:40:33,290 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:36,355 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:39,407 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:42,865 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:45,953 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:48,995 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:52,043 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:55,097 Stage-0_0: 0/1 Stage-1_0: 0/1
2017-11-13 11:40:58,142 Stage-0_0: 0/1 Stage-1_0: 0/1
......
The query has been running for 60 hours with no result and no error.
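One way to tell why a job sits at 0/1 like this is to check whether the standalone Spark master ever granted the application any executors. A minimal sketch, assuming the master web UI is on its default port 8080 on the host "master" (the /json endpoint of the standalone master returns the same state as the web page):

# Dump the master's state; an application stuck in state WAITING with 0 cores
# was never scheduled, which typically means no worker can satisfy the
# requested executor memory or cores.
curl -s http://master:8080/json

If the application shows WAITING there, Hive keeps printing 0/1 progress lines indefinitely without raising any error, which matches the behavior above.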

Version information:
hadoop 2.7.1
hive 2.1.1
spark 1.6.3

The logs show no errors:
17/11/13 10:44:38 INFO master.Master: Received unregister request from application app-20171110193202-0001
17/11/13 10:44:38 INFO master.Master: Removing app app-20171110193202-0001
17/11/13 10:44:38 INFO master.Master: master:40485 got disassociated, removing it.
17/11/13 10:44:38 INFO master.Master: 192.168.50.130:35575 got disassociated, removing it.
17/11/13 10:44:38 INFO spark.SecurityManager: Changing view acls to: root
17/11/13 10:44:38 INFO spark.SecurityManager: Changing modify acls to: root
17/11/13 10:44:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
17/11/13 11:14:36 INFO master.Master: Registering app Hive on Spark
17/11/13 11:14:36 INFO master.Master: Registered app Hive on Spark with ID app-20171113111436-0002
4 replies
zengjc 2018-05-04
Quoting reply #3 from xzg1109 (the hive-site.xml fix that lowered spark.executor.memory from 4096m to 512m, quoted in full below):
How did you (the OP) work out that over-allocated memory was the cause?
xzg1109 2017-11-17
The cause was that hive-site.xml set the executor memory too high:

<property>
  <name>spark.executor.memory</name>
  <value>4096m</value>
</property>

Changing it to the following resolved the hang:

<property>
  <name>spark.executor.memory</name>
  <value>512m</value>
</property>
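This is consistent with the hang: in Spark standalone mode, when spark.executor.memory requests more than any single worker offers, the master leaves the application waiting and never launches an executor, so the query prints 0/1 progress lines forever without an error. A hedged sketch of how to cross-check the two sides (the /opt/spark path is an assumption about this cluster; spark.* properties can also be overridden per Hive session and are picked up when the next Spark session starts):

# On a worker node: how much memory does the worker daemon offer to executors?
grep SPARK_WORKER_MEMORY /opt/spark/conf/spark-env.sh

# In the Hive CLI: keep the executor request within a worker's capacity
# without editing hive-site.xml.
hive> set spark.executor.memory=512m;
hive> select count(*) from t1;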
xzg1109 2017-11-14
Quoting reply #1 from cy309173854 (check the log of app-20171110193202-0001; the reducer-hint output might point to a configuration problem; quoted in full below):
Those reducer settings are configured in hive-site.xml, so why does Hive still print the hints? For app-20171110193202-0001, clicking the detail button on the Application Detail UI shows the following. Is this an exception? It doesn't look like one.

Active Stages (1)
mapPartitionsToPair at MapTran.java:40 +details
org.apache.spark.api.java.AbstractJavaRDDLike.mapPartitionsToPair(JavaRDDLike.scala:46)
org.apache.hadoop.hive.ql.exec.spark.MapTran.doTransform(MapTran.java:40)
org.apache.hadoop.hive.ql.exec.spark.CacheTran.transform(CacheTran.java:45)
org.apache.hadoop.hive.ql.exec.spark.SparkPlan.generateGraph(SparkPlan.java:73)
org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:337)
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:358)
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

Pending Stages (1)
org.apache.spark.api.java.AbstractJavaRDDLike.foreachAsync(JavaRDDLike.scala:46)
org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:339)
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:358)
org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
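Those frames are not exceptions; they are only the call sites from which the two stages were submitted, which is what the detail button always shows. What they do indicate is that the stages were submitted but no task ever started. A hedged way to confirm that no executor ever registered with the driver (assuming the driver UI is reachable on its default port 4040 on the Hive host; Spark 1.6 exposes a REST API there):

# List applications known to this driver UI, then its executors.
# If only the driver entry comes back, no executor ever registered,
# and the stages can never be scheduled.
curl -s http://master:4040/api/v1/applications
curl -s http://master:4040/api/v1/applications/<app-id>/executors

<app-id> is the id returned by the first call, left as a placeholder here.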
曹宇飞丶 2017-11-13
You could check the log of this application, app-20171110193202-0001. Also, the configuration feels like it might have a problem? The query keeps printing:

In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
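For completeness, in Spark standalone mode the per-application executor logs this reply suggests checking live under each worker's work directory (the /opt/spark location is an assumption):

# On the worker node(s): one directory per executor, each with stdout/stderr.
# If the application's directory is missing or empty, no executor was launched.
ls /opt/spark/work/app-20171110193202-0001/
cat /opt/spark/work/app-20171110193202-0001/*/stderr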
