Setting up ACFS in 11gR2.3 (an implementation) Mathijs Bruggink
General Purpose: • Central/Shared Logging • Shared Location for Exports (Dumps)
REQUIREMENTS • Setting up standards: • One ACFS Disk Group per Database (<Database>_ACFS01). • ACFS Disk Group set up with NORMAL redundancy. • Set up a volume; the volume name: <Database>_ACFS. • In ASMCA: • Create the ACFS as a Database Home File System. • Cluster File System: /opt/oracle/<DATABASE>. • In Grid Infrastructure: • Create a dependency between the Database and the ACFS file system. • Create a dependency between the Listener and the ACFS file system. • Otherwise your database and listener will NOT start automatically via the Clusterware at a cluster or node reboot.
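The naming standard above can be sketched as a small helper, so every database gets the same disk group and volume names. This is a hedged sketch: `MYDB` and the `10G` size are illustrative assumptions, not values from the slides, and the `asmcmd volcreate` call is only printed, not executed.

```shell
#!/bin/sh
# Sketch of the naming standard: <Database>_ACFS01 / <Database>_ACFS.
# acfs_diskgroup and acfs_volume are hypothetical helpers.
acfs_diskgroup() { printf '%s_ACFS01' "$1"; }   # disk group name
acfs_volume()    { printf '%s_ACFS'   "$1"; }   # volume name

DB=MYDB                       # illustrative database name
DG=$(acfs_diskgroup "$DB")
VOL=$(acfs_volume "$DB")

# The matching asmcmd call (run as the grid owner; 10G is an assumed size):
echo "asmcmd volcreate -G $DG -s 10G $VOL"
```

On a real cluster you would run the printed `asmcmd` command against the already-created NORMAL-redundancy disk group, then continue in ASMCA.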
Volume • Showing the Volume Name entered, • the volume device on Linux, and • the ASM Disk Group you have selected.
ASM Cluster File System Note: even though the ACFS is used for logging only, I only got it to work when selecting "Database Home File System", because that way it was registered properly as a resource in the Clusterware.
ASM Cluster File System Node64r
ASM Cluster File System Run the script as the ROOT user on only ONE node. The prompt will return with info similar to below: cd /opt/oracle/cfgtoollogs/asmca/scripts ./ACFS_script.sh ACFS file system is running on Node64r, Node65r, Node66r
Dependency for the DB (diagnostic_dest): add a dependency similar to this: • Remove the current db resource in Grid Infrastructure: srvctl remove database -d DBNAME • Register the database again: srvctl add database -d DBNAME -o /opt/oracle/product/11203_ee_64/db -c RAC -m test.nl -p +DATA01/dbname/spfiledbname.ora -t IMMEDIATE -a "DBNAME_ACFS01,DBNAME_DATA01,DBNAME_FRA01" -j "/opt/oracle/DBNAME"
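Since the `-a` disk group list must repeat the database name three times, a small helper can assemble it consistently. A hedged sketch, assuming the slide's naming standard; `dg_list` is a hypothetical helper, not srvctl syntax, and the `srvctl` command is only printed.

```shell
#!/bin/sh
# Hypothetical helper: build the -a disk group list for `srvctl add database`
# from one database name, following the <DB>_ACFS01/<DB>_DATA01/<DB>_FRA01
# standard used on the slides.
dg_list() { printf '%s_ACFS01,%s_DATA01,%s_FRA01' "$1" "$1" "$1"; }

# Illustrative use (full option list abbreviated with "..." on purpose):
echo "srvctl add database -d DBNAME ... -a \"$(dg_list DBNAME)\""
```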
Dependency for the (local) Listener: • Modify the ACFS resource: crsctl modify resource ora.dbname_ACFS01.dbname_ACFS1.ACFS -attr "START_DEPENDENCIES=hard(ora.DBNAME_ACFS01.dg) pullup(ora.DBNAME_ACFS01.dg) pullup:always(ora.asm)" • Modify the listener: crsctl modify resource ora.LISTENER_DBNAME.lsnr -attr "START_DEPENDENCIES='hard(type:ora.cluster_vip_net1.type,ora.dbname_ACFS01.dbname_ACFS1.ACFS) pullup(type:ora.cluster_vip_net1.type,ora.dbname_ACFS01.dbname_ACFS1.ACFS)'" !!!! Note: there is a single quote just after the = and another one just before the last double quote.
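The nested quoting in the listener command (single quotes inside double quotes) is easy to get wrong by hand. A hedged sketch that builds the attribute string instead; `listener_deps` is a hypothetical helper, and the resource names simply follow the slide's examples.

```shell
#!/bin/sh
# Hypothetical helper: produce the START_DEPENDENCIES attribute value for
# the local listener, with the inner single quotes already in place.
listener_deps() {
  fs="ora.${1}_ACFS01.${1}_ACFS1.ACFS"
  printf "START_DEPENDENCIES='hard(type:ora.cluster_vip_net1.type,%s) pullup(type:ora.cluster_vip_net1.type,%s)'" "$fs" "$fs"
}

# Illustrative use: the crsctl command is only printed, not executed.
echo "crsctl modify resource ora.LISTENER_DBNAME.lsnr -attr \"$(listener_deps dbname)\""
```

Generating the string this way makes the "single quote just after the =" note from the slide a non-issue.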
Removing the ACFS File System: !!!! Tip: You need to do a dismount on each node of the cluster as the root user: /bin/umount -t acfs -a !!!! Note: if you cannot do this because the device is busy, as root use: lsof | grep <SID> (most likely you still have running listener(s) writing into the ACFS, database activity (audit files etc.), or a session of your own still in that mount point, which you should leave.) !!!! Tip: as root, for example: /opt/crs/product/112_ee_64/crs/bin/srvctl remove filesystem -d /dev/asm/ACFS_dbname-378
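The `lsof | grep` check above can be wrapped in a tiny filter to count what is still holding the mount point. A hedged sketch: `count_open_on` is a hypothetical helper, and the sample input below stands in for live `lsof` output.

```shell
#!/bin/sh
# Hypothetical helper: count lines of lsof output that reference the
# given ACFS mount point (on a real node: lsof | count_open_on /opt/oracle/DBNAME).
count_open_on() { grep -c "$1"; }

# Illustrative check against captured sample output: only the listener
# line matches the /opt/oracle/DB1 mount point.
printf 'tnslsnr 123 oracle /opt/oracle/DB1/log/listener.log\nbash 456 root /root\n' \
  | count_open_on /opt/oracle/DB1
```

A non-zero count means the umount will fail with "device is busy" until those processes are stopped.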
Disable / Delete the Volume In this order: 1. Remove the ACFS (unregister and delete). 2. Disable and delete the Volume. 3. Drop the Disk Group (which will drop the disks too).
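The tear-down order on this slide can be summarized as a printed checklist. A hedged sketch: the commands are illustrative and only echoed, `NNN` stands for the volume's device suffix (which varies per system), and the disk group and volume names follow the slide's standard.

```shell
#!/bin/sh
# Hypothetical helper: print the three tear-down steps, in order, for a
# given database name. Nothing is executed against the cluster.
teardown_steps() {
  db=$1
  echo "1: srvctl remove filesystem -d /dev/asm/${db}_acfs-NNN"
  echo "2: asmcmd voldisable -G ${db}_ACFS01 ${db}_ACFS; asmcmd voldelete -G ${db}_ACFS01 ${db}_ACFS"
  echo "3: SQL> drop diskgroup ${db}_ACFS01 including contents;"
}

teardown_steps dbname
```

Keeping the order fixed matters: the file system must be unregistered before the volume can be disabled and deleted, and only then can the disk group be dropped.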