Hadoop 2.8.5 QJM HA cluster: NameNode failover does not fully take effect

金山老师 2019-01-10 10:18:43


I built a QJM-based HA cluster with Hadoop 2.8.5 and JDK 1.8 on CentOS 7 machines. When the first NameNode is shut down, failover to the second NameNode works, but when NN2 is shut down, NN1 remains in standby state. Can anyone tell me what the cause might be?
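
A first step in narrowing this down is to query both NameNode states and read the ZKFC log on the surviving NameNode, since ZKFC is the daemon that decides, and logs, why a standby-to-active transition was aborted. A minimal check, assuming the daemons run as root and default log locations (the log file name varies with the user and hostname):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
tail -n 100 $HADOOP_HOME/logs/hadoop-root-zkfc-node1.log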

【zoo.cfg】

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/zookeeper_data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
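
For the server.N entries above to take effect, each ZooKeeper host also needs a myid file inside dataDir whose number matches its own server.N line (a standard ZooKeeper requirement; paths assume the dataDir configured above):

echo 1 > /home/zookeeper_data/myid    # on node1
echo 2 > /home/zookeeper_data/myid    # on node2
echo 3 > /home/zookeeper_data/myid    # on node3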



The remaining configuration files are as follows:

【hosts】

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.81.211 node1
192.168.81.212 node2
192.168.81.213 node3
192.168.81.214 node4
192.168.81.215 node5
192.168.81.216 node6




【hadoop-env.sh】

export JAVA_HOME=/home/weblogic/jdk1.8.0_181


【core-site.xml】

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://sxt</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop_tmp</value>
</property>

<property>
<name>ha.zookeeper.quorum</name>
<value>node1:2181,node2:2181,node3:2181</value>
</property>
</configuration>
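
Automatic failover depends on the ZKFC sessions held in this quorum, so it is worth confirming that all three ZooKeeper servers are actually reachable. A quick probe using the ruok four-letter command (assumes nc is available; a healthy server answers imok):

echo ruok | nc node1 2181
echo ruok | nc node2 2181
echo ruok | nc node3 2181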



【hdfs-site.xml】

<configuration>
<property>
<name>dfs.nameservices</name>
<value>sxt</value>
</property>

<property>
<name>dfs.ha.namenodes.sxt</name>
<value>nn1,nn2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.sxt.nn1</name>
<value>node1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.sxt.nn2</name>
<value>node2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.sxt.nn1</name>
<value>node1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.sxt.nn2</name>
<value>node2:50070</value>
</property>

<!-- JournalNode cluster -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node2:8485;node3:8485;node4:8485/sxt</value>
</property>

<!-- Proxy provider class for client failover -->
<property>
<name>dfs.client.failover.proxy.provider.sxt</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>



<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
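<!-- NOTE: sshfence needs two things on the node being fenced: passwordless
     SSH as root from the other NameNode's ZKFC, and the fuser binary from
     the psmisc package. If fencing fails, ZKFC aborts the failover and the
     standby stays standby; see the checks after this file. -->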

<!-- fs.defaultFS is already set in core-site.xml above; duplicating it
     here is redundant (it normally lives only in core-site.xml) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://sxt</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/journal/data</value>
</property>


<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>



</configuration>
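
Given that failover works in one direction only, the usual suspect is the sshfence step failing from node1 towards node2, for example a missing fuser binary or a broken root SSH key in that direction; ZKFC will not promote a standby when it cannot fence the old active. A check from node1 that mirrors what sshfence does (SSH in with the configured key, then run fuser against the NameNode RPC port):

ssh -i /root/.ssh/id_rsa root@node2 "fuser -v -n tcp 8020"
# "fuser: command not found" means the psmisc package is missing:
# yum install -y psmisc

If the whole host can go down rather than just the NameNode process, SSH itself becomes unreachable and sshfence can never report success. A well-known workaround (not specific to this setup) is to append a shell method that always succeeds, so ZKFC can still complete the failover:

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence
shell(/bin/true)</value>
</property>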







【slaves】

node2
node3
node4
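
Once a fix is in place, automatic failover can be verified end to end. DFSZKFailoverController must be running on both node1 and node2; then kill the active NameNode and watch the other one switch (commands assume the standard 2.x scripts from $HADOOP_HOME/sbin are on the PATH):

jps | grep DFSZKFailoverController
hadoop-daemon.sh start zkfc    # on whichever NameNode host is missing it
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2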