spark-shell fails on startup

西瓜shine 2017-03-02 08:52:57
I start the shell with:

./spark-shell --master spark://Master:7077 --executor-memory 1024m --driver-memory 1024m

It fails with the following output:
17/03/02 12:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/02 12:37:40 INFO spark.SecurityManager: Changing view acls to: hadoop
17/03/02 12:37:40 INFO spark.SecurityManager: Changing modify acls to: hadoop
17/03/02 12:37:40 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
17/03/02 12:37:41 INFO spark.HttpServer: Starting HTTP Server
17/03/02 12:37:41 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/03/02 12:37:41 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:40136
17/03/02 12:37:41 INFO util.Utils: Successfully started service 'HTTP class server' on port 40136.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/

Using Scala version 2.10.5 (OpenJDK Server VM, Java 1.8.0_111)
Type in expressions to have them evaluated.
Type :help for more information.
17/03/02 12:37:47 INFO spark.SparkContext: Running Spark version 1.6.0
17/03/02 12:37:48 INFO spark.SecurityManager: Changing view acls to: hadoop
17/03/02 12:37:48 INFO spark.SecurityManager: Changing modify acls to: hadoop
17/03/02 12:37:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
17/03/02 12:37:48 INFO util.Utils: Successfully started service 'sparkDriver' on port 38286.
17/03/02 12:37:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
17/03/02 12:37:49 INFO Remoting: Starting remoting
17/03/02 12:37:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@127.0.0.1:47787]
17/03/02 12:37:49 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 47787.
17/03/02 12:37:49 INFO spark.SparkEnv: Registering MapOutputTracker
17/03/02 12:37:49 INFO spark.SparkEnv: Registering BlockManagerMaster
17/03/02 12:37:49 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-d6835a89-4165-4003-b7b8-ef21c655153e
17/03/02 12:37:49 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
17/03/02 12:37:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/03/02 12:37:50 INFO server.Server: jetty-8.y.z-SNAPSHOT
17/03/02 12:37:50 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
17/03/02 12:37:50 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/03/02 12:37:50 INFO ui.SparkUI: Started SparkUI at http://127.0.0.1:4040
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Connecting to master spark://Master:7077...
17/03/02 12:37:50 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20170302123750-0004
17/03/02 12:37:50 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37168.
17/03/02 12:37:50 INFO netty.NettyBlockTransferService: Server created on 37168
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor added: app-20170302123750-0004/0 on worker-20170302103714-127.0.0.1-34196 (127.0.0.1:34196) with 1 cores
17/03/02 12:37:50 INFO storage.BlockManagerMaster: Trying to register BlockManager
17/03/02 12:37:50 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20170302123750-0004/0 on hostPort 127.0.0.1:34196 with 1 cores, 1024.0 MB RAM
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor added: app-20170302123750-0004/1 on worker-20170302103714-127.0.0.1-53686 (127.0.0.1:53686) with 1 cores
17/03/02 12:37:50 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20170302123750-0004/1 on hostPort 127.0.0.1:53686 with 1 cores, 1024.0 MB RAM
17/03/02 12:37:50 INFO storage.BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:37168 with 517.4 MB RAM, BlockManagerId(driver, 127.0.0.1, 37168)
17/03/02 12:37:50 INFO storage.BlockManagerMaster: Registered BlockManager
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor added: app-20170302123750-0004/2 on worker-20170302103715-127.0.0.1-32909 (127.0.0.1:32909) with 1 cores
17/03/02 12:37:50 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20170302123750-0004/2 on hostPort 127.0.0.1:32909 with 1 cores, 1024.0 MB RAM
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170302123750-0004/2 is now RUNNING
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170302123750-0004/0 is now RUNNING
17/03/02 12:37:50 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170302123750-0004/1 is now RUNNING
17/03/02 12:37:51 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/03/02 12:37:51 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
17/03/02 12:37:53 INFO repl.SparkILoop: Created sql context..
SQL context available as sqlContext.
17/03/02 12:37:54 INFO client.AppClient$ClientEndpoint: Executor updated: app-20170302123750-0004/1 is now EXITED (Command exited with code 1)
17/03/02 12:37:54 INFO cluster.SparkDeploySchedulerBackend: Executor app-20170302123750-0004/1 removed: Command exited with code 1

scala> 17/03/02 12:39:54 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(1,Command exited with code 1)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)
at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.executorRemoved(SparkDeploySchedulerBackend.scala:144)
at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$receive$1.applyOrElse(AppClient.scala:186)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:107)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
... 12 more
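Two things stand out in the log: the executors exit with code 1 shortly after launch, and both the driver and the workers are registered on 127.0.0.1 (e.g. `sparkDriverActorSystem@127.0.0.1:47787`). If the executors actually run on other machines, they cannot connect back to a driver bound to the loopback address, which commonly produces exactly this "Command exited with code 1" followed by an RPC timeout. A diagnostic sketch under that assumption (the IP `192.168.1.100` and the work-directory path are hypothetical; adjust them to your cluster):

```shell
# 1. Read the failed executor's stderr on the worker that ran it --
#    the real cause of "exited with code 1" is usually printed there.
#    (Standard standalone-mode layout: $SPARK_HOME/work/<app-id>/<executor-id>/)
cat $SPARK_HOME/work/app-20170302123750-0004/1/stderr

# 2. If the executor log shows it cannot reach the driver, bind the
#    driver to a routable address instead of 127.0.0.1. Check that
#    /etc/hosts does not map the machine's hostname to 127.0.0.1.
export SPARK_LOCAL_IP=192.168.1.100   # hypothetical driver IP

./spark-shell --master spark://Master:7077 \
  --executor-memory 1024m --driver-memory 1024m \
  --conf spark.driver.host=192.168.1.100

# 3. While debugging, the RPC timeout can be given more headroom:
#    --conf spark.rpc.askTimeout=300s
```

This is only a sketch of the usual checks for this symptom, not a confirmed fix for this specific cluster; the executor stderr from step 1 should point at the actual failure.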