[Help] After installing Hadoop, jps on the master shows no NameNode or JobTracker process

hunhunfang 2013-03-25 11:27:58
I deployed a Hadoop cluster on 3 Ubuntu machines: the master runs Ubuntu Server 12.04 and the other two run Ubuntu 11.10 and Ubuntu 12.04. After the installation finished, the processes on the datanodes are all normal, but jps on the master shows no NameNode or JobTracker process.
hadoop@node1:~/hadoop$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
node1: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node1.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
node3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
node2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out

hadoop@node1:~/hadoop$ jps
16993 SecondaryNameNode
17210 Jps
hadoop@node1:~/hadoop$
Following what the experts online said, I deleted all the files under tmp and reformatted the namenode, but after starting again the result is the same.
My configuration is as follows:

hadoop@node1:~/hadoop/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/tmp</value>
<description></description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://masternode:54310</value>
<description></description>
</property>
</configuration>

hadoop@node1:~/hadoop/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
<description></description>
</property>
</configuration>

hadoop@node1:~/hadoop/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masternode:54311</value>
<description></description>
</property>
</configuration>
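
The logs of the two dead daemons say why; they are the .log files sitting beside the .out paths printed by start-all.sh above, e.g. (one way to pull the most recent errors):

hadoop@node1:~/hadoop$ tail -n 50 logs/hadoop-hadoop-namenode-node1.log
hadoop@node1:~/hadoop$ tail -n 50 logs/hadoop-hadoop-jobtracker-node1.log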

The jobtracker log file shows:
2006-03-11 23:54:44,348 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to masternode/122.72.28.136:54311 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more

2006-03-11 23:54:44,353 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at node1/192.168.10.237
************************************************************/

The namenode log file shows:
2006-03-11 23:54:37,009 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to masternode/122.72.28.136:54310 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more

2006-03-11 23:54:37,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.237
************************************************************/
Desperately hoping someone can point me to a solution.
6058 views, 8 replies
Findss 2014-11-26
From Hadoop 2.x onward the framework no longer has a JobTracker.
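On a 2.x master, jps shows the YARN daemons instead; roughly this (PIDs illustrative, hostname assumed):

hadoop@master:~$ jps
2103 NameNode
2367 SecondaryNameNode
2542 ResourceManager
2761 Jps

TaskTracker is likewise replaced by NodeManager on the slaves.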
土豆的天敌 2014-07-18
Quoting reply #3 from tntzbzc:
Your configuration looks a bit odd. HDFS: fs.default.name => hdfs://masternode:54310 MAPREDUCE: mapred.job.tracker => masternode:54311 But your /etc/hosts file has no masternode entry. Where exactly are your NameNode and JobTracker deployed……
Expert, all of that is correct on my side. I had previously deployed Hadoop 2.0 and am now redeploying 1.0, but the JobTracker still will not come up, and analyzing the log files gives the same result as the OP.
hunhunfang 2013-04-01
Quoting reply #3 from tntzbzc:
Your configuration looks a bit odd. HDFS: fs.default.name => hdfs://masternode:54310 MAPREDUCE: mapred.job.tracker => masternode:54311 But your /etc/hosts file has no masternode entry. Where exactly are your NameNode and JobTracker deployed……
Many thanks, "tntzbzc"! That was exactly the problem: my hostnames are all "nodeX", but when configuring hdfs and mapred I used master and slave names, so the processes that should start on the namenode could not resolve the hostname. Thank you!!!
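For anyone who lands here with the same symptom, a quick sanity check is to compare the hostnames in the configs against what the box actually resolves (a sketch, using the paths from this thread):

hadoop@node1:~/hadoop/conf$ grep -h '<value>' core-site.xml mapred-site.xml
hadoop@node1:~/hadoop/conf$ getent hosts masternode node1

If the two do not line up, the daemons try to bind the wrong address and die exactly as in the logs above.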
撸大湿 2013-03-29
Quoting reply #3 from tntzbzc:
Your configuration looks a bit odd. HDFS: fs.default.name => hdfs://masternode:54310 MAPREDUCE: mapred.job.tracker => masternode:54311 But your /etc/hosts file has no masternode entry. Where exactly are your NameNode and JobTracker deployed……
Judging from your log files, the hostname masternode resolves to the IP address 122.72.28.136. Does your cluster have 4 machines or 3? First confirm where the NameNode and JobTracker are deployed, then reconfigure fs.default.name and mapred.job.tracker.
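You can confirm that on node1 before changing anything; getent consults /etc/hosts first and then DNS, so it shows roughly what address the daemon will try to bind:

hadoop@node1:~$ getent hosts masternode
hadoop@node1:~$ hostname -i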
撸大湿 2013-03-29
Your configuration looks a bit odd.
HDFS: fs.default.name => hdfs://masternode:54310
MAPREDUCE: mapred.job.tracker => masternode:54311
But your /etc/hosts file has no masternode entry. Where exactly are your NameNode and JobTracker deployed?
192.168.10.237 node1.node1 node1
192.168.10.238 node2
192.168.10.239 node3
If your NameNode and JobTracker are deployed on 192.168.10.237, then the configuration should be:
fs.default.name => hdfs://node1:54310
mapred.job.tracker => node1:54311
Reconfigure along these lines first, and update the hosts file on every machine so they all agree. If it still fails, post your /etc/sysconfig/network file and we can analyze further.
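For example, the relevant properties would end up roughly like this (assuming both daemons live on node1; same XML layout as your files above):

<property>
<name>fs.default.name</name>
<value>hdfs://node1:54310</value>
</property>

<property>
<name>mapred.job.tracker</name>
<value>node1:54311</value>
</property>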
hunhunfang 2013-03-29
Quoting reply #1 from tntzbzc:
Post the rest of your configuration: the Hadoop masters file, the Hadoop slaves file, the /etc/hosts file, the /etc/profile file, and the /etc/sysconfig/network file
hadoop@node1:~/hadoop/conf$ cat masters
node1
hadoop@node1:~/hadoop/conf$ cat slaves
node2
node3
hadoop@node1:~/hadoop/conf$ cat /etc/hosts
127.0.0.1 localhost
192.168.10.237 node1.node1 node1
192.168.10.238 node2
192.168.10.239 node3
My environment variables are in .bashrc instead:
cat .bashrc
export JAVA_HOME=/usr/lib/jvm/sunjdk6
export JRE_HOME=${JAVA_HOME}/jre
export CLASS_PATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
jw_lee 2013-03-29
Try changing masternode in the config files to 192.168.10.237.
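For example, an in-place substitution over both files (back up conf/ first; this is just a sketch):

hadoop@node1:~/hadoop/conf$ sed -i 's/masternode/192.168.10.237/g' core-site.xml mapred-site.xml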
撸大湿 2013-03-27
Post the rest of your configuration: the Hadoop masters file, the Hadoop slaves file, the /etc/hosts file, the /etc/profile file, and the /etc/sysconfig/network file
For reference, a working walkthrough:
1. Hosts: a1 192.168.9.1 (master), a2 192.168.9.2 (slave1), a3 192.168.9.3 (slave2); edit /etc/hosts on each machine accordingly.
2. Create the hadoop user on all 3 machines (user: hadoop, password: 123).
3. Install the JDK (on all 3):
[root@a1 ~]# chmod 777 jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin
[root@a1 ~]# ./jdk-6u38-ea-bin-b04-linux-i586-31_oct_2012-rpm.bin
[root@a1 ~]# cd /usr/java/jdk1.6.0_38/
[root@a1 jdk]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_25
export JAVA_BIN=/usr/java/jdk1.7.0_25/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
Reboot the system, or run source /etc/profile.
[root@a1 ~]# /usr/java/jdk1.6.0_38/bin/java -version
java version "1.6.0_38-ea"
Java(TM) SE Runtime Environment (build 1.6.0_38-ea-b04)
Java HotSpot(TM) Client VM (build 20.13-b02, mixed mode, sharing)
4. Install Hadoop (on all 3):
[root@a1 ~]# tar zxvf hadoop-0.20.2-cdh3u5.tar.gz -C /usr/local
Edit the Hadoop configuration files:
[root@a1 ~]# cd /usr/local/hadoop-0.20.2-cdh3u5/conf/
[root@a1 conf]# vi hadoop-env.sh
Add:
export JAVA_HOME=/usr/java/jdk1.7.0_25
Set the namenode address and port:
[root@a1 conf]# vi core-site.xml
Add:
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop1:9000</value>
</property>
Set the replication factor to 2 (two datanodes):
[root@a1 conf]# vi hdfs-site.xml
Add:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
Set the jobtracker address and port:
[root@a1 conf]# vim mapred-site.xml
Add:
<property>
<name>mapred.job.tracker</name>
<value>hadoop1:9001</value>
</property>
[root@a1 conf]# vi masters
Change it to: a1 (the master's hostname)
[root@a1 conf]# vi slaves
Change it to:
a2
a3
Copy the installation to the other two nodes:
[root@a1 conf]# cd /usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a2:/usr/local/
[root@a1 local]# scp -r ./hadoop-0.20.2-cdh3u5/ a3:/usr/local/
On every node, change the owner and group of /usr/local/hadoop-0.20.2-cdh3u5 to hadoop, then su to that user:
[root@a1 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a2 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a3 ~]# chown hadoop.hadoop /usr/local/hadoop-0.20.2-cdh3u5/ -R
[root@a1 ~]# su - hadoop
[root@a2 ~]# su - hadoop
[root@a3 ~]# su - hadoop
Create SSH keys on all nodes and exchange them:
[hadoop@a1 ~]$ ssh-keygen -t rsa
[hadoop@a2 ~]$ ssh-keygen -t rsa
[hadoop@a3 ~]$ ssh-keygen -t rsa
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a1 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a2 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a1
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a2
[hadoop@a3 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub a3
Format the namenode:
[hadoop@a1 ~]$ cd /usr/local/hadoop-0.20.2-cdh3u5/
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop namenode -format
Start everything:
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/start-all.sh
Check the process list on every node to verify startup:
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ jps
8602 JobTracker
8364 NameNode
8527 SecondaryNameNode
8673 Jps
[hadoop@a2 hadoop-0.20.2-cdh3u5]$ jps
10806 Jps
10719 TaskTracker
10610 DataNode
[hadoop@a3 hadoop-0.20.2-cdh3u5]$ jps
7605 Jps
7515 TaskTracker
7405 DataNode
[hadoop@a1 hadoop-0.20.2-cdh3u5]$ bin/hadoop dfsadmin -report
