How can I make ssh ignore the known_hosts file?

weixin_38055888 2009-11-23 05:35:32

A machine I frequently ssh into has several Linux systems installed, and I switch between them often. They all share the same IP address, so after the first login the host key is recorded in my local ~/.ssh/known_hosts file. After switching systems, the next ssh connection to that machine triggers a host-key conflict warning, and I have to delete or edit the known_hosts entry by hand. That breaks the automation in some of my scripts. How can I make ssh ignore known_hosts entirely?
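For the manual cleanup step being described, ssh-keygen has a built-in way to drop a single stale entry: `ssh-keygen -R <host>`. A small sketch against a throwaway file (the IP 192.168.1.100 is hypothetical; drop `-f` to operate on the real ~/.ssh/known_hosts):

```shell
# Build a one-line known_hosts in a scratch dir, then remove the entry.
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmpdir/hostkey"                  # make a valid key to record
printf '192.168.1.100 %s\n' "$(cut -d' ' -f1-2 "$tmpdir/hostkey.pub")" > "$tmpdir/known_hosts"
ssh-keygen -R 192.168.1.100 -f "$tmpdir/known_hosts"             # delete the stale entry
grep -q '192.168.1.100' "$tmpdir/known_hosts" || echo "stale entry removed"
```

ssh-keygen keeps a backup of the old file (known_hosts.old), so scripting this before each connection is safe, though it still leaves the next connection prompting to accept the new key.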

Setting the IgnoreUserKnownHosts option to yes in /etc/ssh/sshd_config does not help either.
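That option has no effect because the warning comes from the ssh *client*, while IgnoreUserKnownHosts is a server-side (sshd) setting. Client-side, two options together skip the host-key check and keep anything learned out of ~/.ssh/known_hosts (the host/user in the commented example is hypothetical):

```shell
# Skip strict checking and discard learned keys instead of storing them.
SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
# Example use in a script (adjust user/IP to your environment):
# ssh $SSH_OPTS root@192.168.1.100 uname -a
echo "$SSH_OPTS"
```

The same two lines can go into a `Host` block in ~/.ssh/config to make this permanent for that one IP. Note the trade-off: it disables man-in-the-middle detection for that host, which is usually acceptable when all the "different hosts" behind one IP are your own systems.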
5 replies
Part I. Pre-installation preparation

Disk planning: an iSCSI shared disk is carved into raw devices.

  Vote/OCR  /dev/sdb1   /dev/raw/raw1
  Vote/OCR  /dev/sdb2   /dev/raw/raw2
  Vote/OCR  /dev/sdb3   /dev/raw/raw3
  Vote/OCR  /dev/sdb5   /dev/raw/raw4
  Vote/OCR  /dev/sdb6   /dev/raw/raw5
  DATA      /dev/sdb7   /dev/raw/raw6
  DATA      /dev/sdb8   /dev/raw/raw7
  DATA      /dev/sdb9   /dev/raw/raw8
  FLR       /dev/sdb10  /dev/raw/raw9
  FLR       /dev/sdb11  /dev/raw/raw10

Unless noted otherwise, perform every step below on both nodes.

Network and host-name planning. Append the following to /etc/hosts on both nodes:

  # vi /etc/hosts
  #public
  192.168.10.10   node1
  192.168.10.20   node2
  #vip
  192.168.10.100  node1vip
  192.168.10.200  node2vip
  #private
  192.168.20.10   node1priv
  192.168.20.20   node2priv
  #scan
  192.168.10.101  scanip

Set the host name in the last line of /etc/sysconfig/network:

  # vi /etc/sysconfig/network
  HOSTNAME=node1        (HOSTNAME=node2 on the second machine)

and set it on the command line as well:

  # hostname node1      (hostname node2 on the second node)

Log in again so the new name is picked up:

  [root@localhost ~]# su -
  [root@node1 ~]#

Two NICs were added when the virtual machines were built: eth0 carries public traffic, eth1 the private interconnect. Configure the IP addresses with the setup command for eth0 and eth1 (only node1 is shown; give node2 its corresponding addresses), save, exit, and restart networking so the changes take effect.

3. Turn off unneeded services:

  chkconfig autofs off
  chkconfig acpid off
  chkconfig sendmail off
  chkconfig cups-config-daemon off
  chkconfig cpus off
  chkconfig xfs off
  chkconfig lm_sensors off
  chkconfig gpm off
  chkconfig openibd off
  chkconfig iiim off
  chkconfig pcmcia off
  chkconfig cpuspeed off
  chkconfig nfslock off
  chkconfig ip6tables off
  chkconfig rpcidmapd off
  chkconfig apmd off
  chkconfig arptables_jf off
  chkconfig microcode_ctl off
  chkconfig rpcgssd off

For any service that does not exist, chkconfig prints "error reading information on service: No such file or directory". That is harmless; ignore it.

Stop the NTP service:

  /sbin/service ntpd stop
  chkconfig ntpd off
  mv /etc/ntp.conf /etc/ntp.conf.bak

Reboot all nodes.

Yum repository planning. Create a rhel5.repo file under /etc/yum.repos.d/:

  [root@node1 yum.repos.d]# pwd
  /etc/yum.repos.d
  [root@node1 yum.repos.d]# vi rhel5.repo
  [Server]
  name=server
  baseurl=file:///mnt/Server/
  enabled=1
  gpgcheck=0
  [ClusterStorage]
  name=server
  baseurl=file:///mnt/ClusterStorage/
  enabled=1
  gpgcheck=0

Mount the installation DVD on /mnt:

  [root@node1 yum.repos.d]# mount /dev/hdc /mnt
  mount: block device /dev/hdc is write-protected, mounting read-only
  [root@node1 yum.repos.d]# cd /mnt/
  [root@node1 mnt]# ls -ld Server ClusterStorage
  dr-xr-xr-x 3 root root   8192 2010-03-22 ClusterStorage
  dr-xr-xr-x 3 root root 557056 2010-03-22 Server

Refresh the yum cache:

  [root@node1 mnt]# yum clean all
  Loaded plugins: rhnplugin, security
  Cleaning up Everything

4. Package planning. Install all the dependency packages with yum:

  # yum install -y binutils-* compat-libstdc++-* elfutils-libelf-* \
      elfutils-libelf-devel-* gcc-* gcc-c++-* glibc-* glibc-common-* \
      glibc-devel-* glibc-headers-* ksh-* libaio-* libaio-devel-* \
      libgcc-* libstdc++-* make-* sysstat-* expat-* pdksh-* unixODBC-*

5. Create users and groups:

  /usr/sbin/groupadd -g 501 oinstall
  /usr/sbin/groupadd -g 502 dba
  /usr/sbin/groupadd -g 504 asmadmin
  /usr/sbin/groupadd -g 506 asmdba
  /usr/sbin/groupadd -g 507 asmoper
  /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
  /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

Set their passwords (the "BAD PASSWORD" messages are only warnings):

  # passwd grid
  Changing password for user grid.
  New UNIX password:
  BAD PASSWORD: it is too short
  Retype new UNIX password:
  passwd: all authentication tokens updated successfully.
  [root@node1 /]# passwd oracle
  Changing password for user oracle.
  New UNIX password:
  BAD PASSWORD: it is based on a dictionary word
  Retype new UNIX password:
  passwd: all authentication tokens updated successfully.
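A quick sanity check of the accounts just created; on a prepared node, `id grid` should list oinstall plus dba, asmadmin, asmdba and asmoper, and `id oracle` should list oinstall, dba and asmdba (the fallback message is only for machines where this guide was not followed):

```shell
# Report each expected account, or note that it is missing.
for u in grid oracle; do
    id "$u" 2>/dev/null || echo "$u: not present on this machine"
done
```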
Create the grid directory tree:

  mkdir -p /u01/app/oraInventory
  chown -R grid:oinstall /u01/app/oraInventory
  chmod -R 775 /u01/app/oraInventory
  mkdir -p /u01/app/grid
  chmod -R 775 /u01/app/grid
  chown -R grid:oinstall /u01/app/grid
  mkdir -p /u01/app/11.2.0/grid
  chown -R grid:oinstall /u01/app/11.2.0/grid
  chmod -R 775 /u01/app/11.2.0/grid

Create the oracle directory tree:

  mkdir -p /u01/app/oracle
  mkdir /u01/app/oracle/cfgtoollogs
  chown -R oracle:oinstall /u01/app/oracle
  chmod -R 775 /u01/app/oracle
  mkdir -p /u01/app/oracle/product/11.2.0/db_1
  chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
  chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
  mkdir -p /u01/software
  chmod -R 775 /u01

iSCSI shared-storage planning. There is a server side and a client side; since this is a lab environment and we want to save machines, node1 acts as both server and client, while node2 is a client only. (Note: give the node1 VM a bit more memory.) On node1, prepare the disk to be exported and later turned into raw devices: a 20 GB disk, sdb, was added when the VM was configured.

Install and configure node1 (iSCSI target plus initiator):

  [root@node1 mnt]# yum install -y *scsi*
  [root@node1 /]# vi /etc/tgt/targets.conf

Add a target stanza. (The part highlighted in the original post is the target name, which clients need at login time; judging from the discovery output below it is iqn.2012-09.com.example:server.target4, so the stanza presumably reads:)

  <target iqn.2012-09.com.example:server.target4>
      backing-store /dev/sdb    # becomes LUN 1
  </target>

  [root@node1 /]# /etc/init.d/tgtd restart
  [root@node1 /]# chkconfig tgtd on
  [root@node1 /]# cd /etc/rc.d/rc5.d/
  [root@node1 rc5.d]# mv S13iscsi S40iscsi
  [root@node1 ~]# /etc/init.d/iscsi start
  [root@node1 ~]# iscsiadm -m discovery -t st -p 192.168.10.10      <-- storage address
  192.168.10.10:3260,1 iqn.2012-09.com.example:server.target4
  [root@node1 ~]# iscsiadm -m node -T iqn.2012-09.com.example:server.target4 -p 192.168.10.10 -l
  Logging in to [iface: default, target: iqn.2012-09.com.example:server.target4, portal: 192.168.10.10,3260]
  Login to [iface: default, target: iqn.2012-09.com.example:server.target4, portal: 192.168.10.10,3260]: successful

Check the disk:

  [root@node1 /]# fdisk -l

On node2 (initiator only):

  [root@node2 ~]# yum install -y *scsi*
  [root@node2 ~]# /etc/init.d/iscsi start
  [root@node2 ~]# iscsiadm -m discovery -t st -p 192.168.10.10      <-- storage address
  192.168.10.10:3260,1 iqn.2012-09.com.example:server.target4
  [root@node2 ~]# iscsiadm -m node -T iqn.2012-09.com.example:server.target4 -p 192.168.10.10 -l
  Logging in to [iface: default, target: iqn.2012-09.com.example:server.target4, portal: 192.168.10.10,3260]
  Login to [iface: default, target: iqn.2012-09.com.example:server.target4, portal: 192.168.10.10,3260]: successful
  [root@node2 /]#

Check the disk:

  # fdisk -l

On one of the nodes, partition /dev/sdb into ten 2 GB partitions (sdb4 is the extended partition). The result, visible from either node:

  [root@node2 ~]# fdisk -l /dev/sdb
  Disk /dev/sdb: 21.4 GB, 21474836480 bytes
  64 heads, 32 sectors/track, 20480 cylinders
  Units = cylinders of 2048 * 512 = 1048576 bytes
     Device Boot    Start     End     Blocks  Id  System
  /dev/sdb1             1    1908    1953776  83  Linux
  /dev/sdb2          1909    3816    1953792  83  Linux
  /dev/sdb3          3817    5724    1953792  83  Linux
  /dev/sdb4          5725   20480   15110144   5  Extended
  /dev/sdb5          5725    7632    1953776  83  Linux
  /dev/sdb6          7633    9540    1953776  83  Linux
  /dev/sdb7          9541   11448    1953776  83  Linux
  /dev/sdb8         11449   13356    1953776  83  Linux
  /dev/sdb9         13357   15264    1953776  83  Linux
  /dev/sdb10        15265   17172    1953776  83  Linux
  /dev/sdb11        17173   19080    1953776  83  Linux
  [root@node2 ~]#

After partitioning, run the following on both nodes:

  # partprobe

Configure the raw devices (on every node):

  # vi /etc/udev/rules.d/60-raw.rules
  ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
  ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
  ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
  ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw4 %N"
  ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw5 %N"
  ACTION=="add", KERNEL=="sdb7", RUN+="/bin/raw /dev/raw/raw6 %N"
  ACTION=="add", KERNEL=="sdb8", RUN+="/bin/raw /dev/raw/raw7 %N"
  ACTION=="add", KERNEL=="sdb9", RUN+="/bin/raw /dev/raw/raw8 %N"
  ACTION=="add", KERNEL=="sdb10", RUN+="/bin/raw /dev/raw/raw9 %N"
  ACTION=="add", KERNEL=="sdb11", RUN+="/bin/raw /dev/raw/raw10 %N"
  KERNEL=="raw*", OWNER="grid", GROUP="asmadmin", MODE="0660"

  # start_udev
  Starting udev:                                        [ OK ]
  [root@node1 ~]# ll /dev/raw/
  total 0
  crw-rw---- 1 grid asmadmin 162,  1 09-29 18:13 raw1
  crw-rw---- 1 grid asmadmin 162, 10 09-29 18:13 raw10
  crw-rw---- 1 grid asmadmin 162,  2 09-29 18:13 raw2
  crw-rw---- 1 grid asmadmin 162,  3 09-29 18:13 raw3
  crw-rw---- 1 grid asmadmin 162,  4 09-29 18:13 raw4
  crw-rw---- 1 grid asmadmin 162,  5 09-29 18:13 raw5
  crw-rw---- 1 grid asmadmin 162,  6 09-29 18:13 raw6
  crw-rw---- 1 grid asmadmin 162,  7 09-29 18:13 raw7
  crw-rw---- 1 grid asmadmin 162,  8 09-29 18:13 raw8
  crw-rw---- 1 grid asmadmin 162,  9 09-29 18:13 raw9
  [root@node1 ~]#

Kernel and OS parameters. Append the following to /etc/sysctl.conf:

  vi /etc/sysctl.conf
  kernel.shmmni = 4096
  kernel.sem = 250 32000 100 128
  fs.file-max = 6553600
  net.ipv4.ip_local_port_range = 9000 65500
  net.core.rmem_default = 262144
  net.core.rmem_max = 4194304
  net.core.wmem_default = 262144
  net.core.wmem_max = 1048576

Load them; sysctl -p echoes the full effective set:

  # sysctl -p
  net.ipv4.ip_forward = 0
  net.ipv4.conf.default.rp_filter = 1
  net.ipv4.conf.default.accept_source_route = 0
  kernel.sysrq = 0
  kernel.core_uses_pid = 1
  net.ipv4.tcp_syncookies = 1
  kernel.msgmnb = 65536
  kernel.msgmax = 65536
  kernel.shmmax = 68719476736
  kernel.shmall = 4294967296
  kernel.shmmni = 4096
  kernel.sem = 250 32000 100 128
  net.ipv4.ip_local_port_range = 9000 65500
  net.core.rmem_default = 262144
  net.core.rmem_max = 4194304
  net.core.wmem_default = 262144
  net.core.wmem_max = 1048576
  fs.file-max = 6553600
  fs.file-max = 6815744
  fs.aio-max-nr = 1048576

Edit the limits file, adding:

  vi /etc/security/limits.conf
  grid   soft  nproc   2047
  grid   hard  nproc   16384
  grid   soft  nofile  1024
  grid   hard  nofile  65536
  oracle soft  nproc   2047
  oracle hard  nproc   16384
  oracle soft  nofile  1024
  oracle hard  nofile  65536

Configure hangcheck-timer:

  # modprobe hangcheck-timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1

Edit the PAM login file and add one line:

  # vi /etc/pam.d/login
  session required pam_limits.so

Edit the profile:

  # vi /etc/profile
  if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
      if [ $SHELL = "/bin/ksh" ]; then
          ulimit -p 16384
          ulimit -n 65536
      else
          ulimit -u 16384 -n 65536
      fi
      umask 022
  fi

Set up SSH key equivalence (for both the oracle and the grid user):

  [root@node1 u01]# su - oracle
  [oracle@node1 ~]$ ssh-keygen -t rsa
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
  Created directory '/home/oracle/.ssh'.
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/oracle/.ssh/id_rsa.
  Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
  The key fingerprint is:
  5d:94:12:23:9c:5b:ae:e8:5e:23:fb:ce:65:bc:a6:23 oracle@node1
  [oracle@node1 ~]$ cd .ssh/
  [oracle@node1 .ssh]$ mv id_rsa.pub authorized_keys
  [oracle@node1 .ssh]$ chmod 600 authorized_keys
  [oracle@node1 .ssh]$ cd ..
  [oracle@node1 ~]$ scp -r .ssh/ 192.168.10.20:/home/oracle
  The authenticity of host '192.168.10.20 (192.168.10.20)' can't be established.
  RSA key fingerprint is 95:ed:6d:87:61:00:27:ed:38:17:6c:e9:6c:c3:8a:1d.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added '192.168.10.20' (RSA) to the list of known hosts.
  oracle@192.168.10.20's password:
  id_rsa            100% 1675  1.6KB/s  00:00
  authorized_keys   100%  394  0.4KB/s  00:00
  known_hosts       100%  395  0.4KB/s  00:00

Now the same for grid:

  [oracle@node1 ~]$ su - grid
  Password:
  [grid@node1 ~]$ ssh-keygen -t rsa
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/grid/.ssh/id_rsa):
  Created directory '/home/grid/.ssh'.
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/grid/.ssh/id_rsa.
  Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
  The key fingerprint is:
  09:b9:21:44:1e:fc:a1:94:6f:9a:e6:5e:ee:d2:76:e4 grid@node1
  [grid@node1 ~]$ cd .ssh/
  [grid@node1 .ssh]$ ls
  id_rsa  id_rsa.pub
  [grid@node1 .ssh]$ mv id_rsa.pub authorized_keys
  [grid@node1 .ssh]$ chmod 600 authorized_keys
  [grid@node1 .ssh]$ cd ..
  [grid@node1 ~]$ scp -r .ssh/ 192.168.10.20:/home/grid
  The authenticity of host '192.168.10.20 (192.168.10.20)' can't be established.
  RSA key fingerprint is 95:ed:6d:87:61:00:27:ed:38:17:6c:e9:6c:c3:8a:1d.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added '192.168.10.20' (RSA) to the list of known hosts.
  grid@192.168.10.20's password:
  id_rsa            100% 1671  1.6KB/s  00:00
  authorized_keys   100%  392  0.4KB/s  00:00
  known_hosts       100%  395  0.4KB/s  00:00

Testing. As oracle on node1, run ssh <name> date against all four cluster names, answering yes to each first-connection prompt:

  [oracle@node1 ~]$ ssh node2 date
  The authenticity of host 'node2 (192.168.10.20)' can't be established.
  RSA key fingerprint is 95:ed:6d:87:61:00:27:ed:38:17:6c:e9:6c:c3:8a:1d.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added 'node2' (RSA) to the list of known hosts.
  Thu Sep 27 21:57:17 CST 2012
  [oracle@node1 ~]$ ssh node2priv date
  ...
  [oracle@node1 ~]$ ssh node1 date
  ...
  [oracle@node1 ~]$ ssh node1priv date
  ...

Then ssh over to node2 and repeat the same four tests there:

  [oracle@node1 ~]$ ssh node2
  [oracle@node2 ~]$ id
  uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba)
  [oracle@node2 ~]$ ssh node2 date
  ... (node2priv, node1 and node1priv follow in the same way)

Do the whole round again as grid, first on node1 and then on node2, until every user/host pair has accepted every key. (During the grid run, one prompt in the original transcript was answered with a bare Enter instead of "yes" and failed with "Host key verification failed"; simply repeat the command and type yes.)

Finally, as both grid and oracle on each node, the combined check must print four timestamps without any prompting:

  [grid@node2 ~]$ ssh node1 date && ssh node1priv date && ssh node2 date && ssh node2priv date
  Thu Sep 27 22:04:03 CST 2012
  Thu Sep 27 22:04:03 CST 2012
  Thu Sep 27 22:04:00 CST 2012
  Thu Sep 27 22:04:01 CST 2012
  [grid@node2 ~]$ id
  uid=501(grid) gid=501(oinstall) groups=501(oinstall),502(dba),504(asmadmin),506(asmdba),507(asmoper)

The same command was then run as oracle on node2 and as both users on node1, each time returning four dates immediately.

8. Remote GUI access. Either Xmanager or VNC works; VNC is recommended when the connection crosses the Internet, while Xmanager is fine on a LAN. Xmanager setup:

  # vi /usr/share/gdm/defaults.conf
  [xdmcp]
  DisplaysPerHost=10
  Enable=true              // line 257
  Port=177                 // line 284
  [security]
  AllowRemoteRoot=true     // line 214

Append one line at the end of /etc/inittab:

  x:5:respawn:/usr/sbin/gdm

Restart gdm and confirm it is listening on UDP port 177:

  # gdm-restart
  [root@node1 /]# netstat -nltpu | grep 177
  udp  0  0 0.0.0.0:177  0.0.0.0:*  3184/gdm-binary
  [root@node1 /]#

Pre-install check. Upload the software to the /u01/software directory created earlier, unpack it, then:

  [root@node1 /]# chown -R grid:oinstall /u01/software/
  # su - grid
  # cd /u01/software/grid
  # ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose > check.txt
Review check.txt; if it reports problems, try running /tmp/CVU_11.2.0.1.0_grid/runfixup.sh as root. An NTP error at the end can be ignored (NTP was deliberately disabled).

Part II. Installing clusterware from the GUI

  [root@node1 ~]# su - grid
  [grid@node1 ~]$ cd /u01/software/grid
  [grid@node1 grid]$ ls
  check1.txt  check.txt  doc  install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
  [grid@node1 grid]$ ./runInstaller

Then walk through the installer:

  - choose the language support;
  - set the cluster name; the "SCAN name" on the second line must match the scan entry in /etc/hosts;
  - add and adjust the node information, which must match the names in /etc/hosts exactly (SSH equivalence could be configured at this step, but we already set it up by hand, so nothing needs changing);
  - network configuration;
  - choose the storage option: ASM for the voting disks;
  - create the ASM disk group: set the discovery path and pick the disks;
  - set the passwords; choose "Yes"; do not use IPMI; accept the default operating-system groups;
  - set the grid base and home directories and the log (inventory) directory;
  - prerequisite checks (the NTP item can be ignored);
  - summary, then start the installation.

When prompted, run the two scripts as root on both nodes; finish each script on node1 before starting it on the next node. Click OK when both are done and wait for the clusterware installation to complete.

Part III. Installing the database software

  # su - oracle
  $ cd /u01/software/database
  $ ls
  doc  install  response  rpm  runInstaller  sshsetup  stage  welcome.html
  $ ./runInstaller

In the installer: decline the e-mail updates (choose Yes at the warning); install the database software only; choose a RAC installation; pick the language support; Enterprise Edition; set the installation directory and the operating-system groups; then click through until the installation finishes.

Create the database (dbca): run dbca as the oracle user and click through to the end.
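Tying this back to the original question: every "yes" typed during the SSH equivalence tests above was just a first-contact known_hosts entry being created. ssh-keyscan can collect the host keys up front so nothing ever prompts. A sketch against a scratch file (on a real node each of grid and oracle would append to ~/.ssh/known_hosts; the host names are the ones planned in /etc/hosts, so on a machine outside that cluster the scan collects nothing):

```shell
# Pre-populate a known_hosts file non-interactively.
KH=$(mktemp)
for h in node1 node1priv node2 node2priv; do
    ssh-keyscan -t rsa "$h" >> "$KH" 2>/dev/null || true   # skip unreachable names
done
echo "collected $(grep -c . "$KH" || true) host key line(s) into $KH"
```

After this, `ssh node1 date && ssh node1priv date && ...` runs clean on the first try, which also makes the runcluvfy and installer SSH checks non-interactive.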
