Spark won't start even though Hadoop starts normally. How do I fix this?

time_exceed 2016-10-29 09:06:37
Posting the error first:
[root@master spark-2.0.1-bin-hadoop2.7]# ./sbin/start-all.sh 
org.apache.spark.deploy.master.Master running as process 3530. Stop it first.
slaver1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
slaver1: failed to launch org.apache.spark.deploy.worker.Worker:
slaver1: at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver1: at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver1: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
slaver2: failed to launch org.apache.spark.deploy.worker.Worker:
slaver2: at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:693)
slaver2: at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
slaver2: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out
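The very first line reports that a Master is already running as process 3530, so the start script never launches a new one. A minimal clean-restart sequence, assuming the stock sbin scripts that ship with Spark, would be roughly:

# stop whatever Spark daemons are left over from an earlier start
./sbin/stop-all.sh
# confirm no Master/Worker process is still listed on each machine
jps
# then try a fresh start
./sbin/start-all.sh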


My spark-env.sh is as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_101
export SCALA_HOME=/usr/scala/scala-2.11.8
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=0.5g
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export SPARK_WORKER_CORES=1
export SPARK_MASTER_PORT=7077
export SPARK_LOCAL_DIRS=/usr/spark/spark-2.0.1

#set ipython start
PYSPARK_DRIVER_PYTHON=ipython
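Since every worker reads this same spark-env.sh, one thing worth verifying on each node is that the directories named above actually exist there; a small sketch (paths copied from the config above, adjust if they differ per node):

# check that each path referenced in spark-env.sh exists on this node
for d in /usr/java/jdk1.8.0_101 /usr/scala/scala-2.11.8 \
         /usr/hadoop/hadoop-2.7.3 /usr/spark/spark-2.0.1; do
  [ -d "$d" ] && echo "OK      $d" || echo "MISSING $d"
done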

When I run ./sbin/start-all.sh for Hadoop, every node comes up normally.
But when I run ./sbin/start-all.sh for Spark, I get the error above.
Each of my three VMs has 2 GB of RAM, and the OS is Red Hat 5.

When Hadoop is started, the page at 192.168.183.70:50070 shows the following:
(screenshot of the NameNode web UI)

Any advice on what I should do would be much appreciated.
2 replies
sflotus 2016-12-12
With 3 VMs, the live node count should also be 3.
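If the web page is hard to read, the same information is available from the command line with the standard Hadoop tooling:

# report datanode status; the "Live datanodes" count should be 3 here
hdfs dfsadmin -report | grep "Live datanodes"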
LinkSe7en 2016-10-31
slaver1: full log in /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
You need to post the worker log itself before anyone can tell where the actual problem is.
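For example, the relevant tail of each worker log can be pulled from the master like this (log paths taken from the error output above):

# show the last lines of the worker log on each slave
ssh slaver1 tail -n 50 /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver1.out
ssh slaver2 tail -n 50 /usr/spark/spark-2.0.1-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slaver2.out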
