Deploying multiple Kerberos servers on a Hadoop cluster for high availability. (Is it feasible?)

厉兵秣马的菜鸟 2017-07-04 04:46:03
I'd like to deploy multiple Kerberos servers on our Hadoop cluster, so that a single server failure doesn't bring the whole cluster down, i.e. to make the cluster highly available. Has anyone here done this?
5 replies
luoyoumou 2017-07-26
-- References:
--   http://www.tldp.org/HOWTO/Kerberos-Infrastructure-HOWTO/server-replication.html
--   http://shanchao7932297.blog.163.com/blog/static/1363624201241725623761/
--   https://community.hortonworks.com/articles/92333/configure-two-kerberos-kdcs-as-a-masterslave.html
--   https://www.youtube.com/watch?v=qlUWe75Shno   (Kerberos + IPA video)
--   https://zh.hortonworks.com/blog/enabling-kerberos-hdp-active-directory-integration/
--   https://www.server-world.info/en/note?os=CentOS_7&p=ipa
--   yum -y install ipa-server ipa-server-dns bind bind-dyndb-ldap   (for LDAP integration via IPA)
--------------------------------------------------------------------------------------------------
-- 1. Install the Master and Slave KDC packages, if the KDC is not already installed:
yum install krb5-server krb5-libs krb5-workstation    (master)
yum install krb5-server krb5-libs                     (slave)
--------------------------------------------------------------------------------------------------
-- 2. The following defines the KDC configuration for both KDCs.
-- This file, /etc/krb5.conf, must be copied to every node in the cluster.
[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = YIXIA.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  udp_preference_limit = 1

[domain_realm]
  customer.com = YIXIA.COM
  .customer.com = YIXIA.COM
-- (If your hosts live under yixia.com, the usual mapping would be yixia.com = YIXIA.COM and
--  .yixia.com = YIXIA.COM; customer.com here looks carried over from the reference article.)

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  YIXIA.COM = {
    admin_server = test-nn01.yixia.com
    kdc = test-nn01.yixia.com
    kdc = test-nn02.yixia.com
  }
-----------------------------------------------------
-- Contents of /var/kerberos/krb5kdc/kadm5.acl (master and slave):
*/admin@YIXIA.COM   *
-----------------------------------------------------
-- Contents of /var/kerberos/krb5kdc/kdc.conf (master and slave):
[kdcdefaults]
  kdc_ports = 88,750
  kdc_tcp_ports = 88,750

[realms]
  YIXIA.COM = {
    kadmind_port = 749
    master_key_type = aes256-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  }
-----------------------------------------------------
-- Contents of /var/kerberos/krb5kdc/kpropd.acl (slave only):
host/test-nn01.yixia.com@YIXIA.COM
host/test-nn02.yixia.com@YIXIA.COM
-----------------------------------------------------
-- Initialize the KDC database with the following command, executed on the Master KDC:
kdb5_util create -s

[root@test-nn01 ~]# kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'YIXIA.COM', master key name 'K/M@YIXIA.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:  bee#56915
Re-enter KDC database master key to verify:  bee#56915
-----------------------------------------------------
-- Now start the KDC and kadmin processes on the Master KDC only:
chkconfig krb5kdc on
chkconfig kadmin on
/etc/rc.d/init.d/krb5kdc start
/etc/rc.d/init.d/kadmin start

-- (The corresponding commands to disable and stop them again, for reference:)
chkconfig krb5kdc off
chkconfig kadmin off
/etc/rc.d/init.d/krb5kdc stop
/etc/rc.d/init.d/kadmin stop
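Before creating any principals, it may be worth a quick sanity check that the master KDC daemons came up and the database was created. This is only a suggested verification step, not part of the referenced article; the commands are standard MIT krb5 / SysV-init tooling on CentOS.

-- Optional sanity check on the Master KDC (suggested addition):
service krb5kdc status
service kadmin status
-- The new database should already contain the default principals (K/M@YIXIA.COM, krbtgt/YIXIA.COM@YIXIA.COM, kadmin/admin, ...):
kadmin.local -q "listprincs"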
--------------------------------------------------------------------------------------------------
-- An administrator must be created to manage the Kerberos realm.
-- The following command creates the administration principal, run on the Master KDC:
kadmin.local -q "addprinc admin/admin"

[root@test-nn01 ~]# kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@YIXIA.COM with password.
WARNING: no policy specified for admin/admin@YIXIA.COM; defaulting to no policy
Enter password for principal "admin/admin@YIXIA.COM":  bee#56915
Re-enter password for principal "admin/admin@YIXIA.COM":  bee#56915
Principal "admin/admin@YIXIA.COM" created.
--------------------------------------------------------------------------------------------------
-- Host principals must now be created for both KDCs. Execute the following from the Master KDC:
shell% kadmin.local
kadmin:  addprinc -randkey host/test-nn01.yixia.com
kadmin:  addprinc -randkey host/test-nn02.yixia.com
--------------------------------------------------------------------------------------------------
-- Extract the host key for the Slave KDC and store it in the host keytab file, /etc/krb5.keytab:
kadmin.local:  ktadd host/test-nn02.yixia.com@YIXIA.COM
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn02.yixia.com@YIXIA.COM with kvno 11, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
--------------------------------------------------------------------------------------------------
-- Copy /etc/krb5.keytab to test-nn02.yixia.com:/etc/
-- Then, back on the master, remove that keytab and extract the Master KDC's own host key into a fresh /etc/krb5.keytab:
rm -rf /etc/krb5.keytab
kadmin.local:  ktadd host/test-nn01.yixia.com@YIXIA.COM
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/test-nn01.yixia.com@YIXIA.COM with kvno 3, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
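At this point each KDC host should have its own /etc/krb5.keytab. The following check is only a suggested verification step (it is not in the steps above); klist -kt and kinit -kt are standard MIT krb5 commands.

-- Optional: verify the host keytab on each KDC (run on the host that owns the keytab):
klist -kt /etc/krb5.keytab
-- Confirm the host principal can actually authenticate from its keytab, e.g. on the slave:
kinit -kt /etc/krb5.keytab host/test-nn02.yixia.com@YIXIA.COM
klist
kdestroy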
--------------------------------------------------------------------------------------------------
-- Update /etc/services on each KDC host, if the required entries are not already present:
--------------------------------------------------------------------------------------------------
-- Install xinetd on the Master and Slave KDC hosts, if not already installed, so that kpropd can run:
yum install xinetd
--------------------------------------------------------------------------------------------------
-- Create the kpropd configuration on both the Master and Slave KDC hosts.
-- Create /etc/xinetd.d/krb5_prop with the following contents:
service krb_prop
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        server          = /usr/sbin/kpropd
}
--------------------------------------------------------------------------------------------------
-- Configure xinetd to run as a persistent service on both the Master and Slave KDC hosts:
chkconfig xinetd on
service xinetd start
--------------------------------------------------------------------------------------------------
-- Copy the following files from the Master KDC host to the Slave KDC host:
/etc/krb5.conf
/var/kerberos/krb5kdc/kadm5.acl
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kpropd.acl
/var/kerberos/krb5kdc/.k5.YIXIA.COM
--------------------------------------------------------------------------------------------------
-- Perform the initial KDC database propagation to the Slave KDC:
mkdir -p /usr/local/var/krb5kdc/
kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
-- Before running the next command, first start kpropd on the Slave: kpropd -S
-- (if that does not work, try: service kprop start)
kprop -f /usr/local/var/krb5kdc/slave_datatrans test-nn02.yixia.com
--------------------------------------------------------------------------------------------------
-- Make kpropd start automatically on the Slave (inetd-style entry, then enable the kprop service):
kprop   stream  tcp     nowait  root    /usr/local/sbin/kpropd  kpropd
service kprop start
chkconfig kprop on
--------------------------------------------------------------------------------------------------
-- The Slave KDC may be started at this time:
chkconfig krb5kdc on
/etc/rc.d/init.d/krb5kdc start
--------------------------------------------------------------------------------------------------
-- Script to propagate updates from the Master KDC to the Slave KDC.
-- Create a cron job, or the like, to run this script on a frequent basis.
#!/bin/sh
# /usr/local/bin/krb5prop.sh
kdclist="test-nn02.yixia.com"

/usr/sbin/kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
for kdc in $kdclist
do
    /usr/sbin/kprop -f /usr/local/var/krb5kdc/slave_datatrans $kdc
done
--------------------------------------------------------------------------------------------------
-- Create a crontab entry on the Master to sync the slave every 15 minutes:
[root@test-nn01 local]# crontab -l
# Sync for kerberos slave (test-nn02.yixia.com), added by luoyoumou
15 * * * * sh /usr/local/bin/krb5prop.sh
-- (Note: as written, "15 * * * *" runs once per hour at minute 15; use "*/15 * * * *" to sync every 15 minutes.)
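Once propagation works, a rough failover test can confirm that clients really fall back to the slave. This is only a suggested sketch, not part of the steps above, and it assumes the /etc/krb5.conf shown earlier (with both kdc entries) is already deployed on the client nodes.

-- Optional failover test (suggested sketch):
-- 1. Stop the KDC daemon on the Master (test-nn01):
/etc/rc.d/init.d/krb5kdc stop
-- 2. From any cluster node, request a ticket; the client should fall back to the second
--    kdc listed in /etc/krb5.conf (test-nn02):
kinit admin/admin@YIXIA.COM
klist
-- 3. Bring the Master KDC back:
/etc/rc.d/init.d/krb5kdc start

Bear in mind that in this master/slave layout only the master runs kadmind, so kinit keeps working while the master is down, but admin operations (addprinc, password changes) will fail until the master is back.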
luoyoumou 2017-07-26
Of course it's feasible...
Quoting reply #2, by tom_fans:
Kerberos + Sentry access control is something I really dislike. If it's set up when the cluster is first built, that's fine; but retrofitting it onto an existing cluster is predictably painful. I don't recommend managing access this way.
Our previous cluster used Kerberos + LDAP for access control, but the KDC was only ever deployed on a single host. Now I'd like to find out whether multiple Kerberos servers can be deployed in a new cluster. Do you know anything about this?
tom_fans 2017-07-04
Kerberos + Sentry access control is something I really dislike. If it's set up when the cluster is first built, that's fine; but retrofitting it onto an existing cluster is predictably painful. I don't recommend managing access this way.
Bumping so this thread doesn't sink.
