In keeping with today's storage-based replication technologies, such as "mirror splitting" within a storage array or "snapshot replication" in a file system, Oracle ASM with Flex Disk Groups in Oracle Grid Infrastructure 18c and 19c provides the ability to create near-instantaneous copies of databases. These quick database copies are typically leveraged as development and test environments. They can also be used to create a read-only master for an Exadata snapshot copy (when used with Exadata). The greatest advantages of this "ASM Database Clone" feature are:
Prerequisites and requirements exist to leverage this database cloning innovation. First, it is supported only in Oracle ASM flex and extended disk groups. The feature is supported only with Oracle Database 18c, version 18.1 or higher. The disk group compatibility attributes COMPATIBLE.ASM and COMPATIBLE.RDBMS need to be set to 18.0 or higher. Lastly, the source database (parent) must be a pluggable database, and the database clone (child) must be created as a pluggable database in the same container database.
When an ASM database clone is made, all the files associated with the database are split together to provide an independent database. The following diagram represents the splitting of the files for the database "DB3," providing a separate and independent database "DB3a."
To utilize the ASM Database Cloning feature, we must first prepare a mirror copy. During this step, Oracle ASM allocates space for the additional copies of data. This process involves creating the cloned files and linking them with the source files. Note that the data is not copied in this step; the copying is done during re-mirroring (shown later). Re-mirroring occurs during the prepare phase of the rebalance that is initiated as part of this step.
Currently, we have:
In this example, the prepare mirror copy command produced an error because one of the required parameters, "compatible.asm," was not set to 18.0. After advancing the "compatible.asm" attribute, the command completed successfully:
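The two statements involved might look like the following sketch; the disk group name DATA and the PDB and mirror copy names (pdb1, pdb1_mcopy) are placeholders:

```sql
-- In the Oracle ASM instance: advance the disk group compatibility
-- attribute so the clone feature is available.
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '18.0';

-- From the CDB root: prepare the mirror copy of the source PDB.
ALTER PLUGGABLE DATABASE pdb1 PREPARE MIRROR COPY pdb1_mcopy;
```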
We can check the status of the database clone by querying V$ASM_DBCLONE_INFO view, particularly paying attention to the DBCLONE_STATUS column as shown below:
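A minimal status query against this view could look like the following (run from the CDB root; the exact rows returned depend on the clones prepared):

```sql
-- DBCLONE_STATUS progresses through PREPARING and PREPARED
-- before the clone is created.
SELECT db_name, dbclone_name, mirrorcopy_name, dbclone_status
FROM   v$asm_dbclone_info;
```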
After the PREPARE phase completes successfully, we can connect to the CDB root container. Essentially, we can snapshot clone from the mirrored copy and create the database clone leveraging the USING MIRROR COPY syntax:
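A sketch of the clause, continuing with the placeholder names pdb1 and pdb1_mcopy from earlier:

```sql
-- From the CDB root: create the clone PDB from the prepared mirror copy.
-- The split of the mirrored files happens as part of this statement.
CREATE PLUGGABLE DATABASE pdb1_clone FROM pdb1 USING MIRROR COPY pdb1_mcopy;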
We can query the V$ASM_DBCLONE_INFO view again and check the DBCLONE_STATUS column to confirm a status of SPLIT COMPLETED.
If done with the mirror copy of the PDB, we can remove the PDB with the DROP MIRROR COPY clause. The DROP MIRROR COPY clause triggers a rebalance on the appropriate disk group.
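For example, with the hypothetical names used above, the cleanup statement would be along these lines:

```sql
-- From the CDB root: remove the mirror copy; this triggers a rebalance
-- on the affected disk group.
ALTER PLUGGABLE DATABASE pdb1 DROP MIRROR COPY pdb1_mcopy;
```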
If the clone process fails for some reason, the DBCLONE_STATUS column of the V$ASM_DBCLONE_INFO view will display a status of FAILED. A REBALANCE can be initiated against the disk group to clean up the file group. For example, after connecting to the Oracle ASM instance, we can run the following:
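A minimal sketch of that cleanup, again assuming a disk group named DATA:

```sql
-- In the Oracle ASM instance: rebalance the disk group to clean up
-- after a failed clone attempt.
ALTER DISKGROUP data REBALANCE;
```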
In Oracle ASM 18c, we can convert a conventional disk group (a disk group created before Oracle ASM 18c) to an Oracle ASM flex disk group without using the restrictive mount (MOUNTED RESTRICTED) option. We can also drop a file group and its associated files (a drop including content) using the CASCADE keyword with the ALTER DISKGROUP ... DROP FILEGROUP SQL statement.
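Both operations might be sketched as follows; the disk group name DATA and the file group name pdb1_fg are placeholders:

```sql
-- Convert a conventional disk group to a flex disk group
-- without a restricted mount.
ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;

-- Drop a file group together with its associated files.
ALTER DISKGROUP data DROP FILEGROUP pdb1_fg CASCADE;
```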
Oracle Grid Infrastructure 19c provides another important feature for reducing the total cost of storage management. With an Oracle ASM disk group, two-way mirroring (normal redundancy) and three-way mirroring (high redundancy) apply both to database files and to write-once files like archive logs and backup sets, which is wasteful of space for the latter. To reduce the storage overhead for write-once files, Oracle introduced a parity setting in Oracle Database 19c. Parity protection is provided for ASM disk groups set up as flex disk groups. The parity setting is intended for write-once files and is not supported for data files or other read/write files.
To establish parity, we need a minimum of three regular (not quorum) failure groups in the flex disk group. If there are three or four failure groups when the parity file is created, then each parity extent set has two data extents. That scenario incurs 50% redundancy overhead, rather than the 100% overhead of two-way mirrored files. If there are five or more failure groups when the parity file is created, then each parity extent set has four data extents. That scenario incurs a 25% redundancy overhead.
Below is an example of changing a file group property so that newly created archive log files utilize parity protection.
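The statement might look like the following sketch; the flex disk group name flexdg and file group name pdb1_fg are placeholders:

```sql
-- Newly created archive logs in this file group will use parity
-- protection; existing files are unaffected.
ALTER DISKGROUP flexdg MODIFY FILEGROUP pdb1_fg
  SET 'archivelog.redundancy' = 'parity';
```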
We can also do this for database backups. Imagine the storage savings from a parity setting approaching the efficiency of external redundancy on Oracle engineered systems, which otherwise only leverage normal or high redundancy for disk storage.
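Assuming the same placeholder names, the analogous change for backup sets would be along these lines:

```sql
-- Newly created backup sets in this file group will use parity protection.
ALTER DISKGROUP flexdg MODIFY FILEGROUP pdb1_fg
  SET 'backupset.redundancy' = 'parity';
```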
Happy Holidays!