Completely Removing Oracle 11gR2 Grid Infrastructure
Environment: RHEL 6.5 + Oracle 11.2.0.4 GI
Background: while building a standby RAC, the GI software installation hit a problem: the root script hung with no error output at all (tracing /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_jystdrac1.log showed no errors either, and the log stopped refreshing for hours). Since the root scripts are rerunnable from 11.2 onward, I retried them repeatedly, but without success.
Because this virtual environment was cloned straight from an experiment I had set up long ago, the environment itself was the prime suspect. I now want to reinstall GI from scratch, which first requires completely removing the 11g GI. The procedure follows this MOS note:
How to completely remove 11.2 and 12.1 Grid Infrastructure, CRS and/or Oracle Restart - IBM: Linux on System z (Doc ID 1413787.1)
Note: since GI never finished installing successfully in my environment, some of the command output below may differ from the standard output.
The main steps are as follows:
Remove the CRS configuration
As the root user, run:
/opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
- [root@jystdrac1 install]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
- Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
- PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
- PRCR-1068 : Failed to query resources
- Cannot communicate with crsd
- PRCR-1070 : Failed to check if resource ora.gsd is registered
- Cannot communicate with crsd
- PRCR-1070 : Failed to check if resource ora.ons is registered
- Cannot communicate with crsd
- CRS-4535: Cannot communicate with Cluster Ready Services
- CRS-4000: Command Stop failed, or completed with errors.
- CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jystdrac1'
- CRS-2673: Attempting to stop 'ora.crf' on 'jystdrac1'
- CRS-2673: Attempting to stop 'ora.mdnsd' on 'jystdrac1'
- CRS-2677: Stop of 'ora.mdnsd' on 'jystdrac1' succeeded
- CRS-2677: Stop of 'ora.crf' on 'jystdrac1' succeeded
- CRS-2673: Attempting to stop 'ora.gipcd' on 'jystdrac1'
- CRS-2677: Stop of 'ora.gipcd' on 'jystdrac1' succeeded
- CRS-2673: Attempting to stop 'ora.gpnpd' on 'jystdrac1'
- CRS-2677: Stop of 'ora.gpnpd' on 'jystdrac1' succeeded
- CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jystdrac1' has completed
- CRS-4133: Oracle High Availability Services has been stopped.
- Removing Trace File Analyzer
- error: package cvuqdisk is not installed
- Successfully deconfigured Oracle clusterware stack on this node
On the last node of the cluster, append the -lastnode flag to the rootcrs.pl command:
/opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -lastnode -verbose -force
- [root@jystdrac2 ~]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -lastnode -verbose -force
- Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
- Adding Clusterware entries to upstart
- crsexcl failed to start
- Failed to start the Clusterware. Last 20 lines of the alert log follow:
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- ****Unable to retrieve Oracle Clusterware home.
- Start Oracle Clusterware stack and try again.
- Failure in execution (rc=-1, 0, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop res ora.registry.acfs -n jystdrac2 -f
- Failure in execution (rc=-1, 0, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete res ora.registry.acfs -f
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- sh: /opt/app/11.2.0/grid/bin/crsctl: No such file or directory
- Failure in execution (rc=-1, 32512, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete res ora.drivers.acfs -init -f
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 32512, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop crs -f
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- ################################################################
- # You must kill processes or reboot the system to properly #
- # cleanup the processes started by Oracle clusterware #
- ################################################################
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/ocr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Can't exec "/opt/app/11.2.0/grid/bin/clsecho": No such file or directory at /opt/app/11.2.0/grid/lib/acfslib.pm line 1464.
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- error: package cvuqdisk is not installed
- Successfully deconfigured Oracle clusterware stack on this node
As the root user, run:
/opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
- [root@jystdrac1 app]# /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
- Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Stop failed, or completed with errors.
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Delete failed, or completed with errors.
- CRS-4047: No Oracle Clusterware components configured.
- CRS-4000: Command Stop failed, or completed with errors.
- You must kill ohasd processes or reboot the system to properly
- cleanup the processes started by Oracle clusterware
- ACFS-9313: No ADVM/ACFS installation detected.
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- Successfully deconfigured Oracle Restart stack
On the second node:
- [root@jystdrac2 ~]# /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
- Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
- Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop resource ora.CSSd -f
- Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete resource ora.cssd -f
- Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop has -f
- You must kill ohasd processes or reboot the system to properly
- cleanup the processes started by Oracle clusterware
- Can't exec "/opt/app/11.2.0/grid/bin/clsecho": No such file or directory at /opt/app/11.2.0/grid/lib/acfslib.pm line 1464.
- Either /etc/oracle/olr.loc does not exist or is not readable
- Make sure the file exists and it has read and execute access
- Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
- Successfully deconfigured Oracle Restart stack
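The deconfig output repeatedly warns that leftover clusterware processes must be killed, or the host rebooted. Before moving on, a quick check for survivors can help; this is only a sketch, and the daemon names listed are my assumption of the usual 11.2 stack (pgrep matches them against process names such as ohasd.bin):

```shell
# Print the name of any 11.2 clusterware daemon still running on this node.
# An empty result means there is nothing left to kill. Always returns 0.
check_stack_down() {
    for p in ohasd ocssd crsd evmd gipcd gpnpd mdnsd octssd; do
        pgrep "$p" >/dev/null 2>&1 && echo "$p still running"
    done
    return 0
}

check_stack_down
```

If anything is listed, kill those processes (or simply reboot) before cleaning up files.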
Edit /etc/inittab and remove the related entry:
- tail /etc/inittab
- #h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
- init q
My test environment's /etc/inittab did not contain this entry, so nothing needed to be done here; continue on.
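If the entry is present, it can be stripped in place; here is a minimal sketch. The helper function and its file argument are my additions so the snippet can be exercised safely against a copy before touching the real /etc/inittab:

```shell
# Remove any init.ohasd respawn entry from the given inittab file.
# sed keeps a .bak backup. On a live system you would then run "init q"
# as root so init re-reads the file.
remove_ohasd_entry() {
    inittab="$1"
    if [ -f "$inittab" ] && grep -q 'init\.ohasd' "$inittab"; then
        sed -i.bak '/init\.ohasd/d' "$inittab"
        echo "removed init.ohasd entry from $inittab"
    fi
}

# On the real node this would be: remove_ohasd_entry /etc/inittab
```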
Following the MOS note, remove all remaining related files:
- If the Oracle Grid root.sh script has been run on any of the nodes previously, then the
- Linux inittab file should be modified to remove the lines that were added.
- Deconfig should remove this line but it is best to verify.
- tail /etc/inittab
- #h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
- init q
- Clean up files
- The following commands are used to remove all Oracle Grid and database
- software. You can also use the Oracle de-installer to remove the necessary software
- components.
- #
- #WARNING - You should verify this script before running this script as this
- #script will remove everything for all Oracle systems on the Linux system where
- #the script is run.
- #
- rm -f /etc/init.d/init.ohasd
- #
- rm -f /etc/inittab.crs
- rm -rf /etc/oracle
- #
- # Oracle Bug Note:429214.1
- #
- rm -f /usr/tmp/.oracle/*
- rm -f /tmp/.oracle/*
- rm -f /var/tmp/.oracle/*
- ###
- WARNING: BE VERY CAREFUL - THIS WILL REMOVE THE ORATAB ENTRIES FOR ALL DATABASES RUNNING ON THIS SERVER AND ALSO THE CENTRAL INVENTORY FOR ANY ORACLE HOMES/GRID HOMES WHICH ARE CURRENTLY INSTALLED ON THIS SERVER.
- rm -f /etc/oratab
- rm -rf /var/opt/oracle
- #
- # Remove Oracle software directories *these may change based on your install en
- # You need to modify the following to map to your install environment.
- rm -rf </u01/base/*>       # this is $ORACLE_BASE
- rm -rf </u01/oraInventory> # this is the central inventory loc pointed to by oraInst.loc
- rm -rf </u01/grid/*>       # this is the Grid Home
- rm -rf </u01/oracle>       # this is the DB Home
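The MOS file list above can be collected into one small script. This is only a sketch: the /opt/app path is from this article's environment (ORACLE_BASE, Grid Home and the inventory all sit under it here; adjust to your own layout), and it defaults to a dry run that prints rather than deletes:

```shell
#!/bin/sh
# Sketch of the MOS cleanup steps. DRY_RUN=1 (the default) only prints each
# command; set DRY_RUN=0 and run as root on each node to actually delete.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run rm -f  /etc/init.d/init.ohasd
run rm -f  /etc/inittab.crs
run rm -rf /etc/oracle
run rm -f  /usr/tmp/.oracle/* /tmp/.oracle/* /var/tmp/.oracle/*
run rm -f  /etc/oratab        # WARNING: drops oratab entries for ALL databases
run rm -rf /var/opt/oracle
run rm -rf /opt/app/*         # ORACLE_BASE/Grid Home/inventory in this environment
```

Reviewing the dry-run output before switching DRY_RUN off is the whole point: these removals destroy every Oracle installation on the host.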
Make sure the directory ownership and permissions are correct:
- mkdir -p /opt/app/ && chown -R oracle:oinstall /opt/app/ && chmod 775 /opt/app && ls -lh /opt
Install the cvuqdisk RPM package:
- rpm -ivh /opt/app/media/grid/rpm/cvuqdisk-1.0.9-1.rpm
Clear the OCR/voting disk information:
dd if=/dev/zero of=/dev/
- dd if=/dev/zero of=/dev/asm-diskb bs=1M count=100
- dd if=/dev/zero of=/dev/asm-diskc bs=1M count=100
- dd if=/dev/zero of=/dev/asm-diskd bs=1M count=100
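To confirm the headers are really gone, the first bytes of each device can be compared against /dev/zero. A sketch; the asm-diskb..d names come from this environment's udev mappings, and the helper function is my addition:

```shell
# Report whether the first 4 KB of a device (or file) have been zeroed out.
check_wiped() {
    if cmp -s -n 4096 "$1" /dev/zero; then
        echo "$1: header zeroed"
    else
        echo "$1: still has data"
    fi
}

for d in /dev/asm-diskb /dev/asm-diskc /dev/asm-diskd; do
    if [ -e "$d" ]; then check_wiped "$d"; fi
done
```

Any disk still reporting data should be wiped again with dd before the reinstall.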
Finally, reboot all nodes so that the fresh GI installation starts from a fully cleaned environment.
Source: http://www.linuxidc.com/Linux/2017-08/146100.htm