Using the Oracle ASM Cluster File System (Oracle ACFS) on Linux, Part Three
This article continues with an exploration of ACFS snapshots and the management of ACFS, and concludes this three-part series on using ACFS on Linux.
ACFS Snapshots
Oracle ASM Cluster File System includes a feature called snapshots. An Oracle ACFS snapshot is an online, read-only, point-in-time copy of an Oracle ACFS file system. The snapshot process uses copy-on-write functionality, which makes efficient use of disk space. Note that snapshots work at the extent (block) level rather than the file level. Before an Oracle ACFS file extent is modified or deleted, its current value is copied to the snapshot to maintain the point-in-time view of the file system. (Note: When a file is modified, only the changed blocks are copied to the snapshot location, which helps conserve disk space.)
Once an Oracle ACFS snapshot is created, all snapshot files are immediately available for use. Snapshots are always available as long as the file system is mounted. This provides support for online recovery of files inadvertently modified or deleted from a file system. Up to 63 snapshot views are supported for each file system, which provides for a flexible online file recovery solution that can span multiple views. You can also use an Oracle ACFS snapshot as the source of a file system backup, as it can be created on demand to deliver a current, consistent, online view of an active file system. Once the Oracle ACFS snapshot is created, simply back up the snapshot to another disk or tape location to create a consistent backup set of the files. (Note: Oracle ACFS snapshots can be created and deleted on demand without the need to take the file system offline. ACFS snapshots provide a point-in-time consistent view of the entire file system which can be used to restore deleted or modified files and to perform backups.)
All storage for Oracle ACFS snapshots is maintained within the file system, which eliminates the need for separate storage pools for file systems and snapshots. As shown in the next section, Oracle ACFS file systems can be dynamically resized to accommodate additional file and snapshot storage requirements.
Oracle ACFS snapshots are administered with the acfsutil snap command. This section provides an overview of how to create, query, and delete Oracle ACFS snapshots, and how to restore files from them.
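To preview the commands covered in the rest of this section, here is a minimal sketch of the snapshot-based backup workflow described above. The snapshot name backup_snap and the backup target /backup/docs are illustrative assumptions only; the acfsutil commands and the .ACFS/snaps location are the ones demonstrated below.

# Create a point-in-time snapshot of the mounted file system
# (run as root or as the grid infrastructure owner).
/sbin/acfsutil snap create backup_snap /documents3

# Back up the consistent snapshot image to another location
# (the target directory /backup/docs is assumed to exist).
tar czf /backup/docs/documents3_backup_snap.tar.gz \
    -C /documents3/.ACFS/snaps/backup_snap .

# Delete the snapshot once the backup completes to release its space.
/sbin/acfsutil snap delete backup_snap /documents3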
Oracle ACFS Snapshot Location
Whenever you create an Oracle ACFS file system, a hidden directory named .ACFS is created as a sub-directory of the Oracle ACFS file system. (Note that hidden files and directories in Linux start with a leading period.)
[oracle@racnode1 ~]$ ls -lFA /documents3
total 2851148
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
drwx------ 2 root   root          65536 Nov 26 17:57 lost+found/
Found in the .ACFS directory are two directories named repl and snaps. All Oracle ACFS snapshots are stored in the snaps directory.
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS
total 12
drwx------ 2 root root 4096 Nov 26 17:57 .fileid/
drwx------ 6 root root 4096 Nov 26 17:57 repl/
drwxr-xr-x 2 root root 4096 Nov 27 15:53 snaps/
Since no Oracle ACFS snapshots exist, the snaps directory is empty.

[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps
total 0
Create Oracle ACFS Snapshot
Let's start by creating an Oracle ACFS snapshot named snap1 for the Oracle ACFS mounted on /documents3. This operation should be performed as root or the Oracle grid infrastructure owner:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
The data for the new snap1 snapshot will be stored in /documents3/.ACFS/snaps/snap1. Once the snapshot is created, any existing files and/or directories in the file system are automatically accessible from the snapshot directory. For example, when I created the snap1 snapshot, the three Oracle ZIP files were made available from the snapshot /documents3/.ACFS/snaps/snap1:
[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps/snap1
total 2851084
drwxr-xr-x 5 root   root           4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
?--------- ? ?      ?              ?            ? lost+found
It is important to note that when the snapshot gets created, nothing is actually stored in the snapshot directory, so there is no additional space consumption. The snapshot directory will only contain modified file blocks when a file is updated or deleted.
Restore Files From an Oracle ACFS Snapshot
When a file is deleted (or modified), this triggers an automatic backup of all modified file blocks to the snapshot. For example, if I delete the file /documents3/linux.x64_11gR2_examples.zip, the previous images of the file blocks are copied to the snap1 snapshot, where they can be restored from at a later time if necessary:
[oracle@racnode1 ~]$ rm /documents3/linux.x64_11gR2_examples.zip
If you were looking for functionality in Oracle ACFS to perform a rollback of the current file system to a snapshot, then I have bad news: one doesn't exist. Hopefully this will be a feature introduced in future versions!
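In the meantime, the contents of a snapshot can be copied back in bulk as a manual workaround. The following is only a rough sketch, assuming the snap1 snapshot from above and that rsync is available on the node; the exclusions for the hidden .ACFS and lost+found directories are my own assumption about what should be skipped, and this is not a true file system rollback.

# Copy all user files from the snap1 snapshot back into the live file system,
# preserving permissions and ownership; skip the ACFS metadata directory and lost+found.
rsync -av --exclude='.ACFS' --exclude='lost+found' \
    /documents3/.ACFS/snaps/snap1/ /documents3/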
In the case where you accidentally deleted a file from the current file system, it can be restored by copying it from the snapshot back to the current file system:
[oracle@racnode1 ~]$ cp /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip /documents3
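If you want to confirm that the restored copy is intact, a quick checksum comparison against the point-in-time copy held by the snapshot is enough. This assumes md5sum is available on the node; matching checksums indicate a clean restore.

# Compare the restored file with the copy preserved in the snap1 snapshot.
md5sum /documents3/linux.x64_11gR2_examples.zip \
       /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip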
Display Oracle ACFS Snapshot Information
The '/sbin/acfsutil info fs' command can provide file system information as well as limited information on any Oracle ACFS snapshots:
[oracle@racnode1 ~]$ /sbin/acfsutil info fs /documents3
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Sat Nov 27 03:07:50 2010
    volumes:      1
    total size:   26843545600
    total free:   23191826432
    primary volume: /dev/asm/docsvol3-300
        label:                 DOCSVOL3
        flags:                 Primary,Available
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153603
        size:                  26843545600
        free:                  23191826432
    number of snapshots:  1
    snapshot space usage: 560463872
From the example above, you can see that I have only one active snapshot that is consuming approximately 560MB of disk space. This coincides with the size of the file I removed earlier (/documents3/linux.x64_11gR2_examples.zip), which triggered a backup of all modified file image blocks.
To query all snapshots, simply list the directories under '<ACFS_MOUNT_POINT>/.ACFS/snaps'. Each directory under the snaps directory is an Oracle ACFS snapshot.
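For example, with the snap1 snapshot from earlier still in place, a listing similar to the following would be expected:

[oracle@racnode1 ~]$ ls /documents3/.ACFS/snaps
snap1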
Another useful way to obtain information about Oracle ACFS snapshots is to query the view V$ASM_ACFSSNAPSHOTS from the Oracle ASM instance:
column snap_name   format a15 heading "Snapshot Name"
column fs_name     format a15 heading "File System"
column vol_device  format a25 heading "Volume Device"
column create_time format a20 heading "Create Time"

SQL> select snap_name, fs_name, vol_device,
  2         to_char(create_time, 'DD-MON-YYYY HH24:MI:SS') as create_time
  3  from v$asm_acfssnapshots
  4  order by snap_name;

Snapshot Name   File System     Volume Device             Create Time
--------------- --------------- ------------------------- --------------------
snap1           /documents3     /dev/asm/docsvol3-300     27-NOV-2010 16:11:29
Delete Oracle ACFS Snapshot
Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
Managing ACFS
Oracle ACFS and Dismount or Shutdown Operations
If you take anything away from this article, know and understand the importance of dismounting any active file system configured with an Oracle ASM Dynamic Volume Manager (ADVM) volume device BEFORE shutting down an Oracle ASM instance or dismounting a disk group! Failure to do so will result in I/O failures and very angry users!
After the file system(s) have been dismounted, all open references to Oracle ASM files are removed and the associated disk groups can then be dismounted or the Oracle ASM instance shut down.
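To make the ordering concrete, here is a minimal sketch of the safe sequence for the configuration used in this series; the mount points and the DOCSDG1 disk group come from earlier in the article, and the umount step is assumed to be run as root on every cluster node.

# 1. Dismount the Oracle ACFS file systems on every node first.
umount /documents1
umount /documents2
umount /documents3

# 2. Only then dismount the hosting disk group (or shut down the ASM instance).
#    For example, from SQL*Plus connected to the ASM instance AS SYSASM:
#      ALTER DISKGROUP docsdg1 DISMOUNT;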
If the Oracle ASM instance or disk group is forcibly shut down or fails while an associated Oracle ACFS file system is active, the file system is placed into an offline error state. When the file system is placed in an offline error state, applications will start to encounter I/O failures, and any Oracle ACFS user data and metadata being written at the time of the termination may not be flushed to ASM storage before it is fenced. If a SHUTDOWN ABORT operation on the Oracle ASM instance is required and you are not able to dismount the file system, issue the sync command twice to flush any cached file system data and metadata to persistent storage:
[root@racnode1 ~]# sync
[root@racnode1 ~]# sync
Using a two-node Oracle RAC, I forced an Oracle ASM instance shutdown on node 1 to simulate a failure. (Note: This should go without saying, but I'll say it anyway. DO NOT attempt the following in a production environment.)
SQL> shutdown abort
ASM instance shutdown
Any subsequent attempt to access an offline file system on that node will result in an I/O error:
[oracle@racnode1 ~]$ ls -l /documents3
ls: /documents3: Input/output error

[oracle@racnode1 ~]$ df -k
Filesystem           1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      145344992   22459396  115383364  17% /
/dev/sdb1             151351424     192072  143346948   1% /local
/dev/sda1                101086      12632      83235  14% /boot
tmpfs                   2019256          0    2019256   0% /dev/shm
df: `/documents1': Input/output error
df: `/documents2': Input/output error
df: `/documents3': Input/output error
domo:PUBLIC          4799457152 1901758592 2897698560  40% /domo
Recovering a file system from an offline error state requires dismounting and remounting the Oracle ACFS file system. Dismounting an active file system, even one that is offline, requires stopping all applications using the file system, including any shell references. For example, I had a shell session that previously changed directory (cd) into the /documents3 file system before the forced shutdown:
[root@racnode1 ~]# umount /documents1
[root@racnode1 ~]# umount /documents2
[root@racnode1 ~]# umount /documents3
umount: /documents3: device is busy
umount: /documents3: device is busy
Use the Linux fuser or lsof command to identify processes and kill them if necessary:
[root@racnode1 ~]# fuser /documents3
/documents3:         16263c

[root@racnode1 ~]# kill -9 16263

[root@racnode1 ~]# umount /documents3
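Alternatively, lsof lists the processes that still hold files open under the mount point; this is just an illustrative sketch of the equivalent check.

# Show any processes with open files beneath the ACFS mount point.
lsof /documents3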
Restart the Oracle ASM instance (or, in my case, all Oracle grid infrastructure services, which were stopped as a result of my terminating the Oracle ASM instance):
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
All of my Oracle ACFS volumes were added to the Oracle ACFS mount registry and will therefore automatically mount when Oracle grid infrastructure starts. If you need to manually mount the file system, verify the volume is enabled before attempting to mount:
[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
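If one of the volumes had still been disabled after the restart, a minimal sketch of the manual steps would look like the following, shown for the docsvol3 volume used throughout this article.

# Verify the volume state and enable it if necessary (run as the grid owner).
asmcmd volinfo -G docsdg1 docsvol3
asmcmd volenable -G docsdg1 docsvol3

# Then mount the ACFS file system on the node (run as root).
/bin/mount -t acfs /dev/asm/docsvol3-300 /documents3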
Resize File System
With Oracle ACFS, as long as there is free space within the ASM disk group, any of the ASM volumes can be dynamically expanded, which means the file system gets expanded as a result. Note that if you are using a file system other than Oracle ACFS, as long as that file system supports online resizing, it too can be dynamically resized. The one exception for third-party file systems is online shrinking. Ext3, for example, supports online resizing but does not support online shrinking.
Use the following syntax to add space to an Oracle ACFS file system on the fly without the need to take any type of outage.
First,verify there is enough space in the current Oracle ASM disk group to extend the volume:
SQL> select name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
  2  from v$asm_diskgroup
  3  where total_mb != 0
  4  order by name;

Disk Group        Total (MB)    Free (MB)   % Free
--------------- ------------ ------------ --------
CRS                    2,205        1,809    82.04
DOCSDG1               98,303       12,187    12.40
FRA                   33,887       22,795    67.27
RACDB_DATA            33,887       30,584    90.25
The same task can be accomplished using the ASMCMD command-line utility:
[grid@racnode1 ~]$ asmcmd lsdg
From the 12GB of free space in the DOCSDG1 ASM disk group, let's extend the file system (volume) by another 5GB. Note that this can be performed while the file system is online and accessible by clients; no outage is required:
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Verify the new size of the file system from all Oracle RAC nodes:
[root@racnode1 ~]# df -k
Filesystem           1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      145344992   21952712  115890048  16% /
/dev/sdb1             151351424     192072  143346948   1% /local
/dev/sda1                101086      12632      83235  14% /boot
tmpfs                   2019256    1135852     883404  57% /dev/shm
domo:PUBLIC          4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300
                       33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300
                       33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300
                       26214400     183108   26031292   1% /documents3

[root@racnode2 ~]# df -k
Filesystem           1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      145344992   13803084  124039676  11% /
/dev/sdb1             151351424     192072  143346948   1% /local
/dev/sda1                101086      12632      83235  14% /boot
tmpfs                   2019256    1135852     883404  57% /dev/shm
domo:PUBLIC          4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300
                       33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300
                       33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300
                       26214400     183108   26031292   1% /documents3
Useful ACFS Commands
This section contains several useful commands that can be used to administer Oracle ACFS. Note that many of the commands described in this section have already been discussed throughout this guide.
ASM Volume Driver
Load the Oracle ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
Unload the Oracle ASM volume driver:
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload stop
Check if the Oracle ASM volume driver is loaded:
[root@racnode1 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1
ASM Volume Management
Create a new Oracle ASM volume using ASMCMD:
[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3
Resize Oracle ACFS file system (add 5GB):
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)
Delete Oracle ASM volume using ASMCMD:
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
Disk Group / File System / Volume Information
Get detailed Oracle ASM disk group information:
[grid@racnode1 ~]$ asmcmd lsdg
Format an Oracle ASM cluster file system:
[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/docsvol3-300
mkfs.acfs: volume size     = 21474836480
mkfs.acfs: Format complete.
Get detailed file system information:
[root@racnode1 ~]# /sbin/acfsutil info fs
/documents1
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol1-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153601
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents2
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol2-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153602
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
Get ASM volume information:
[grid@racnode1 ~]$ asmcmd volinfo -a
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL1
         Volume Device: /dev/asm/docsvol1-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents1

         Volume Name: DOCSVOL2
         Volume Device: /dev/asm/docsvol2-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents2

         Volume Name: DOCSVOL3
         Volume Device: /dev/asm/docsvol3-300
         State: ENABLED
         Size (MB): 25600
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents3
Get volume status using ASMCMD command:
[grid@racnode1 ~]$ asmcmd volstat

DISKGROUP NUMBER / NAME: 2 / DOCSDG1
---------------------------------------
 VOLUME_NAME
         READS           BYTES_READ      READ_TIME       READ_ERRS
         WRITES          BYTES_WRITTEN   WRITE_TIME      WRITE_ERRS
 -------------------------------------------------------------
 DOCSVOL1
         517             408576          1618            0
         1700            769280768       63456           0
 DOCSVOL2
         512             406016          2547            0
         1700            769280768       66147           0
 DOCSVOL3
         1396            154525952       172007          0
         1095            654410240       41749           0
Enable a volume using the ASMCMD command:
[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3
Disable a volume using the ASMCMD command:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

[grid@racnode1 ~]$ asmcmd voldisable -G docsdg1 docsvol3
Mount Commands
Mount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Unmount single Oracle ACFS volume on the local node:
[root@racnode1 ~]# umount /documents3
Mount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/mount.acfs -o all
Unmount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:
[root@racnode1 ~]# /bin/umount -t acfs -a
Oracle ACFS Mount Registry
Register new mount point in the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry
Query the Oracle ACFS mount registry:
[root@racnode1 ~]# /sbin/acfsutil registry
Mount Object:
       Device: /dev/asm/docsvol1-300
       Mount Point: /documents1
       Disk Group: DOCSDG1
       Volume: DOCSVOL1
       Options: none
       Nodes: all
Mount Object:
       Device: /dev/asm/docsvol2-300
       Mount Point: /documents2
       Disk Group: DOCSDG1
       Volume: DOCSVOL2
       Options: none
       Nodes: all
Mount Object:
       Device: /dev/asm/docsvol3-300
       Mount Point: /documents3
       Disk Group: DOCSDG1
       Volume: DOCSVOL3
       Options: none
       Nodes: all
Unregister volume and mount point from the Oracle ACFS mount registry:
[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Oracle ACFS Snapshots
Use the 'acfsutil snap create' command to create an Oracle ACFS snapshot named snap1 for an Oracle ACFS mounted on /documents3:
[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.
Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:
[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.
Oracle ASM / ACFS Dynamic Views
This section contains information about using dynamic views to display Oracle Automatic Storage Management (Oracle ASM),Oracle Automatic Storage Management Cluster File System (Oracle ACFS),and Oracle ASM Dynamic Volume Manager (Oracle ADVM) information. These views are accessible from the Oracle ASM instance.
View Name | Description |
---|---|
V$ASM_ALIAS | Contains one row for every alias present in every disk group mounted by the Oracle ASM instance. |
V$ASM_ATTRIBUTE | Displays one row for each attribute defined. In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically. Attributes are only displayed for disk groups where COMPATIBLE.ASM is set to 11.1 or higher. |
V$ASM_CLIENT | In an Oracle ASM instance, identifies databases using disk groups managed by the Oracle ASM instance. In a DB instance, contains information about the Oracle ASM instance if the database has any open Oracle ASM files. |
V$ASM_DISK | Contains one row for every disk discovered by the Oracle ASM instance, including disks that are not part of any disk group. This view performs disk discovery every time it is queried. |
V$ASM_DISK_IOSTAT | Displays information about disk I/O statistics for each Oracle ASM client. In a DB instance, only the rows for that instance are shown. |
V$ASM_DISK_STAT | Contains the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It only returns information about disks that are part of mounted disk groups in the storage system. To see all disks, use V$ASM_DISK instead. |
V$ASM_DISKGROUP | Describes a disk group (number, name, size-related info, state, and redundancy type). This view performs disk discovery every time it is queried. |
V$ASM_DISKGROUP_STAT | Contains the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. As a result, the output may not reflect recent changes to disk group membership. To see all disk groups, use V$ASM_DISKGROUP instead. |
V$ASM_FILE | Contains one row for every Oracle ASM file in every disk group mounted by the Oracle ASM instance. |
V$ASM_OPERATION | In an Oracle ASM instance, contains one row for every active Oracle ASM long-running operation executing in the Oracle ASM instance. In a DB instance, contains no rows. |
V$ASM_TEMPLATE | Contains one row for every template present in every disk group mounted by the Oracle ASM instance. |
V$ASM_USER | Contains the effective operating system user names of connected database instances and names of file owners. |
V$ASM_USERGROUP | Contains the creator for each Oracle ASM File Access Control group. |
V$ASM_USERGROUP_MEMBER | Contains the members for each Oracle ASM File Access Control group. |
View Name | Description |
---|---|
V$ASM_ACFSSNAPSHOTS | Contains snapshot information for every mounted Oracle ACFS file system. |
V$ASM_ACFSVOLUMES | Contains information about mounted Oracle ACFS volumes, correlated with V$ASM_FILESYSTEM. |
V$ASM_FILESYSTEM | Contains columns that display information for every mounted Oracle ACFS file system. |
V$ASM_VOLUME | Contains information about each Oracle ADVM volume that is a member of an Oracle ASM instance. |
V$ASM_VOLUME_STAT | Contains information about statistics for each Oracle ADVM volume. |
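As a simple illustration of these views, the following SQL*Plus query, run from the Oracle ASM instance, lists each Oracle ADVM volume together with its device, size, state, and mount path. The column names are taken from the 11.2 definition of V$ASM_VOLUME and should be verified against your release; the formatting is only a suggestion.

column volume_name   format a10 heading "Volume"
column volume_device format a25 heading "Volume Device"
column mountpath     format a15 heading "Mount Path"

SQL> select volume_name, volume_device, size_mb, state, usage, mountpath
  2  from v$asm_volume
  3  order by volume_name;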
Use fsck to Check and Repair the Cluster File System
Use the regular Linux fsck command to check and repair an Oracle ACFS file system. This only needs to be performed from one of the Oracle RAC nodes:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
fsck.acfs: ACFS-00511: /dev/asm/docsvol3-300 is mounted on at least one node of the cluster.
fsck.acfs: ACFS-07656: Unable to continue
The fsck operation cannot be performed while the file system is online. Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Now check the cluster file system with the file system unmounted:
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
Oracle ASM Cluster File System (ACFS) On-Disk Structure Version: 39.0
*****************************
********** Pass 1: **********
*****************************
The ACFS volume was created at Fri Nov 26 17:20:27 2010
Checking primary file system...
Files checked in primary file system: 100%
Checking if any files are orphaned...
0 orphans found
fsck.acfs: Checker completed with no errors.
Remount the cluster file system on all Oracle RAC nodes:
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
Drop ACFS / ASM Volume
Unmount the cluster file system from all Oracle RAC nodes:
[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3
Log in to the ASM instance and drop the ASM dynamic volume from one of the Oracle RAC nodes:
[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 DROP VOLUME docsvol3;

Diskgroup altered.
The same task can be accomplished using the ASMCMD command-line utility:
[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3
Unregister the volume and mount point from the Oracle ACFS mount registry from one of the Oracle RAC nodes:
[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry
Finally, remove the mount point directory from all Oracle RAC nodes (if necessary):
[root@racnode1 ~]# rmdir /documents3
[root@racnode2 ~]# rmdir /documents3