Help: Spark on YARN fails in client mode

demonwang1025 2017-02-27 04:36:15
As the title says: the job fails in client mode, but the same job runs correctly in cluster mode.
Environment: Java 1.8, Hadoop 2.7.3, Spark 1.6.3.
The YARN logs are attached below.
ubuntu@Master:/usr/soft/hadoop-2.7.3/logs$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --driver-memory 1G --executor-memory 1G --executor-cores 1 /usr/soft/spark-1.6.3/examples/target/spark-examples_2.10-1.6.3.jar 10
17/02/27 16:11:37 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 46860.
17/02/27 16:11:37 INFO spark.SparkEnv: Registering MapOutputTracker
17/02/27 16:11:37 INFO spark.SparkEnv: Registering BlockManagerMaster
17/02/27 16:11:37 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-6b35587e-ebb4-4468-a7ba-e8b734982a99
17/02/27 16:11:37 INFO storage.MemoryStore: MemoryStore started with capacity 511.1 MB
17/02/27 16:11:37 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/02/27 16:11:37 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
17/02/27 16:11:37 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/02/27 16:11:37 INFO ui.SparkUI: Started SparkUI at http://172.16.239.1:4040
17/02/27 16:11:37 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-cbd08d7c-fc96-4d16-9cbb-257e2edb3728/httpd-4f06c896-7401-420a-8eb1-6e0be82e7dd6
17/02/27 16:11:38 INFO spark.HttpServer: Starting HTTP Server
17/02/27 16:11:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/02/27 16:11:38 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:45684
17/02/27 16:11:38 INFO util.Utils: Successfully started service 'HTTP file server' on port 45684.
17/02/27 16:11:38 INFO spark.SparkContext: Added JAR file:/usr/soft/spark-1.6.3/examples/target/spark-examples_2.10-1.6.3.jar at http://172.16.239.1:45684/jars/spark-examples_2.10-1.6.3.jar with timestamp 1488183098062
17/02/27 16:11:38 INFO client.RMProxy: Connecting to ResourceManager at Master/172.16.239.1:8032
17/02/27 16:11:38 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
17/02/27 16:11:38 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
17/02/27 16:11:38 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/02/27 16:11:38 INFO yarn.Client: Setting up container launch context for our AM
17/02/27 16:11:38 INFO yarn.Client: Setting up the launch environment for our AM container
17/02/27 16:11:38 INFO yarn.Client: Preparing resources for our AM container
17/02/27 16:11:40 INFO yarn.Client: Uploading resource file:/usr/soft/spark-1.6.3/assembly/target/scala-2.10/spark-assembly-1.6.3-hadoop2.7.3.jar -> hdfs://Master:9000/user/ubuntu/.sparkStaging/application_1488181713860_0003/spark-assembly-1.6.3-hadoop2.7.3.jar
17/02/27 16:11:47 INFO yarn.Client: Uploading resource file:/tmp/spark-cbd08d7c-fc96-4d16-9cbb-257e2edb3728/__spark_conf__7095116935778215646.zip -> hdfs://Master:9000/user/ubuntu/.sparkStaging/application_1488181713860_0003/__spark_conf__7095116935778215646.zip
17/02/27 16:11:47 INFO spark.SecurityManager: Changing view acls to: ubuntu
17/02/27 16:11:47 INFO spark.SecurityManager: Changing modify acls to: ubuntu
17/02/27 16:11:47 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
17/02/27 16:11:47 INFO yarn.Client: Submitting application 3 to ResourceManager
17/02/27 16:11:47 INFO impl.YarnClientImpl: Submitted application application_1488181713860_0003
17/02/27 16:11:48 INFO yarn.Client: Application report for application_1488181713860_0003 (state: ACCEPTED)
17/02/27 16:11:48 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1488183107661
final status: UNDEFINED
tracking URL: http://Master:8088/proxy/application_1488181713860_0003/
user: ubuntu
17/02/27 16:12:05 INFO yarn.Client: Application report for application_1488181713860_0003 (state: ACCEPTED)
17/02/27 16:12:06 INFO yarn.Client: Application report for application_1488181713860_0003 (state: ACCEPTED)
17/02/27 16:12:07 INFO yarn.Client: Application report for application_1488181713860_0003 (state: FAILED)
17/02/27 16:12:07 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1488181713860_0003 failed 2 times due to AM Container for appattempt_1488181713860_0003_000002 exited with exitCode: -103
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1488183107661
final status: FAILED
tracking URL: http://Master:8088/cluster/app/application_1488181713860_0003
user: ubuntu
17/02/27 16:12:07 INFO yarn.Client: Deleting staging directory .sparkStaging/application_1488181713860_0003
17/02/27 16:12:07 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/27 16:12:07 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}

17/02/27 16:12:07 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
17/02/27 16:12:07 INFO ui.SparkUI: Stopped Spark web UI at http://172.16.239.1:4040
17/02/27 16:12:07 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/02/27 16:12:07 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
17/02/27 16:12:07 INFO cluster.YarnClientSchedulerBackend: Stopped
17/02/27 16:12:07 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/02/27 16:12:07 INFO storage.MemoryStore: MemoryStore cleared
17/02/27 16:12:07 INFO storage.BlockManager: BlockManager stopped
17/02/27 16:12:07 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/02/27 16:12:07 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
17/02/27 16:12:08 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/02/27 16:12:08 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/27 16:12:08 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/02/27 16:12:08 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/02/27 16:12:08 INFO util.ShutdownHookManager: Shutdown hook called
17/02/27 16:12:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-cbd08d7c-fc96-4d16-9cbb-257e2edb3728/httpd-4f06c896-7401-420a-8eb1-6e0be82e7dd6
17/02/27 16:12:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-cbd08d7c-fc96-4d16-9cbb-257e2edb3728

Node log (ApplicationMaster container):
17/02/27 00:28:49 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
17/02/27 00:28:51 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1488183797412_0002_000001
17/02/27 00:28:52 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 16
17/02/27 00:28:52 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
17/02/27 00:28:52 INFO util.ShutdownHookManager: Shutdown hook called





5 replies
我先森 2018-01-02
For this kind of problem you need to dig into the logs yourself; pasting all of this here isn't much use and doesn't really show anything.
2017-7-20 2017-11-08
Did you solve it? I'm hitting the same problem; please share the solution.
demonwang1025 2017-03-16
Quoting reply #2 from sky402101:
Are you using Hadoop 2.7.0+ with JDK 8?
Yes, I am. Is it a JDK problem?
sky402101 2017-03-07
Are you using Hadoop 2.7.0+ with JDK 8?
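Exit code -103 in the YARN diagnostics above normally means the ApplicationMaster container was killed by the NodeManager for exceeding its virtual-memory limit, and JDK 8 reserves noticeably more virtual memory than JDK 7, which is why this tends to show up on Hadoop 2.6/2.7 clusters. A commonly suggested workaround (a sketch only; adjust the value to your cluster and restart the NodeManagers afterwards) is to relax or disable the virtual-memory check in yarn-site.xml on every node:

<!-- yarn-site.xml: disable the NodeManager virtual-memory check... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- ...or keep the check but raise the vmem-to-pmem ratio (default is 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>

Alternatively, keeping the check enabled and submitting with a larger ApplicationMaster overhead (for example adding --conf spark.yarn.am.memoryOverhead=1024 to the spark-submit command in client mode) enlarges the container, and with it the virtual-memory ceiling, which can also avoid the kill.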
_明月 2017-02-27
Sorry, this is beyond my ability; I can't help you.
