Hadoop 2.4.0 fully distributed VM configuration: startup problem

wang_mmmm 2014-12-27 11:06:15
I'm running CentOS 6.5 + Hadoop 2.4.0, with one master and three slave nodes virtualized in VM. After finishing the configuration, the nodes don't start correctly and jps shows no processes, and I can't tell which part of the configuration is wrong. The full configuration is below. This is my first cluster setup, so I'd really appreciate it if someone could point out the problem; I can add bounty points if it gets solved. Thanks in advance!
master.hadoop 192.168.122.100
slave1.hadoop 192.168.122.101
slave2.hadoop 192.168.122.102
slave3.hadoop 192.168.122.103
core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://master.hadoop:90000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master.hadoop:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobrtracker.http.address</name>
<value>master.hadoop:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master.hadoop:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master.hadoop:19888</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>http://192.168.122.100:9001</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master.hadoop:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master.hadoop:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master.hadoop:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master.hadoop:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master.hadoop:8088</value>
</property>
</configuration>
start log: why are no jobtracker and tasktracker started here? Where are those two configured?
[hadoop@master sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
192.168.122.101: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-slave1.hadoop.out
192.168.122.103: ssh: connect to host 192.168.122.103 port 22: No route to host
192.168.122.102: ssh: connect to host 192.168.122.102 port 22: No route to host
192.168.122.101: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-slave1.hadoop.out
192.168.122.103: ssh: connect to host 192.168.122.103 port 22: No route to host
192.168.122.102: ssh: connect to host 192.168.122.102 port 22: No route to host
Starting secondary namenodes [master.hadoop]
master.hadoop: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadoop-secondarynamenode-master.hadoop.out
starting yarn daemons
resourcemanager running as process 2778. Stop it first.
192.168.122.101: starting nodemanager, logging to /opt/hadoop/logs/yarn-hadoop-nodemanager-slave1.hadoop.out
192.168.122.103: ssh: connect to host 192.168.122.103 port 22: No route to host
192.168.122.102: ssh: connect to host 192.168.122.102 port 22: No route to host

hadoop namenode -format runs normally; a partial excerpt:
[hadoop@master bin]$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/12/26 08:25:44 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master.hadoop/192.168.122.100
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.0
..............................
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2014-05-22T08:20Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
14/12/26 08:25:44 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/12/26 08:25:44 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-0970fc92-ee07-4723-8ea4-05630b99e1eb
14/12/26 08:25:48 INFO namenode.FSNamesystem: fsLock is fair:true
14/12/26 08:25:48 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/12/26 08:25:48 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/12/26 08:25:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/12/26 08:25:48 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/12/26 08:25:48 INFO util.GSet: Computing capacity for map BlocksMap
14/12/26 08:25:48 INFO util.GSet: VM type = 64-bit
14/12/26 08:25:48 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/12/26 08:25:48 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/12/26 08:25:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/12/26 08:25:48 INFO blockmanagement.BlockManager: defaultReplication = 3
14/12/26 08:25:48 INFO blockmanagement.BlockManager: maxReplication = 512
14/12/26 08:25:48 INFO blockmanagement.BlockManager: minReplication = 1
14/12/26 08:25:48 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/12/26 08:25:48 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/12/26 08:25:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/12/26 08:25:48 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/12/26 08:25:48 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/12/26 08:25:48 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
14/12/26 08:25:48 INFO namenode.FSNamesystem: supergroup = supergroup
14/12/26 08:25:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/12/26 08:25:48 INFO namenode.FSNamesystem: Determined nameservice ID: hadoop-cluster1
14/12/26 08:25:48 INFO namenode.FSNamesystem: HA Enabled: false
14/12/26 08:25:48 INFO namenode.FSNamesystem: Append Enabled: true
14/12/26 08:25:50 INFO util.GSet: Computing capacity for map INodeMap
14/12/26 08:25:50 INFO util.GSet: VM type = 64-bit
14/12/26 08:25:50 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/12/26 08:25:50 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/12/26 08:25:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/12/26 08:25:50 INFO util.GSet: Computing capacity for map cachedBlocks
14/12/26 08:25:50 INFO util.GSet: VM type = 64-bit
14/12/26 08:25:50 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/12/26 08:25:50 INFO util.GSet: capacity = 2^18 = 262144 entries
14/12/26 08:25:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/12/26 08:25:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/12/26 08:25:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/12/26 08:25:50 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/12/26 08:25:50 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/12/26 08:25:50 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/12/26 08:25:50 INFO util.GSet: VM type = 64-bit
14/12/26 08:25:50 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/12/26 08:25:50 INFO util.GSet: capacity = 2^15 = 32768 entries
14/12/26 08:25:50 INFO namenode.AclConfigFlag: ACLs enabled? false
14/12/26 08:25:50 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1519828890-192.168.122.100-1419611150574
14/12/26 08:25:50 INFO common.Storage: Storage directory /opt/hadoop/dfs/name has been successfully formatted.
14/12/26 08:25:51 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/12/26 08:25:51 INFO util.ExitUtil: Exiting with status 0
14/12/26 08:25:51 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master.hadoop/192.168.122.100
************************************************************/
jps process listing on master:
[hadoop@master Desktop]$ /usr/lib/jdk1.7.0_51/bin/jps
2778 ResourceManager
3510 Jps
Could someone please tell me where my configuration is wrong or missing? Any guidance would be greatly appreciated.
6 replies
六哥灬 2018-12-19
Port 90000? The maximum port number is 65535. Where did you get 90000 from?
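For reference, a corrected core-site.xml fragment with a valid port might look like this. Note that 9000 is just a commonly used NameNode RPC port (an assumption here; any free port up to 65535 works), and in Hadoop 2.x `fs.defaultFS` is the preferred name for the deprecated `fs.default.name` key:

```xml
<property>
  <!-- fs.defaultFS supersedes the deprecated fs.default.name in Hadoop 2.x -->
  <name>fs.defaultFS</name>
  <!-- port must be in 1-65535; 9000 is a common choice (assumed here) -->
  <value>hdfs://master.hadoop:9000</value>
</property>
```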
qq_14941653 2015-12-22
Did you solve the problem? Please share, thanks.
z15984860238 2014-12-31
Bro, how long have you been learning Hadoop? Leave a QQ so I can learn from you. After I compiled the Eclipse plugin, it keeps reporting an internal error and I can't log into DNFS.
skyWalker_ONLY 2014-12-29
First check whether ssh is set up correctly; just try ssh-ing to the nodes directly.
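For reference, passwordless SSH from the master to every node is typically set up like this; a sketch assuming the `hadoop` user and the hostnames from the question:

```shell
# generate a key pair on master.hadoop (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# append the public key to authorized_keys on each node, master included
for host in master.hadoop slave1.hadoop slave2.hadoop slave3.hadoop; do
  ssh-copy-id hadoop@"$host"
done

# verify: each node should now log in without a password prompt
ssh slave1.hadoop hostname
```

The "No route to host" errors in the start log, however, suggest slave2 and slave3 are unreachable at the network level (VM network or firewall), which SSH keys alone won't fix.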
Hadoop 2.4.0 has no jobtracker or tasktracker; they were replaced by ResourceManager and NodeManager. Is your slaves file configured correctly? Did you configure the hosts file? Does ssh work?
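For reference, the two files that reply mentions usually look like this for the four machines in the question (a sketch; the slaves path assumes a default Hadoop 2.x layout under /opt/hadoop):

```text
# /opt/hadoop/etc/hadoop/slaves -- one worker hostname per line
slave1.hadoop
slave2.hadoop
slave3.hadoop

# /etc/hosts -- identical on every node
192.168.122.100  master.hadoop
192.168.122.101  slave1.hadoop
192.168.122.102  slave2.hadoop
192.168.122.103  slave3.hadoop
```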
wang_mmmm 2014-12-28
Still waiting online... any guidance would be greatly appreciated!
