12c: Careful with DISABLE_ARCHIVE_LOGGING in Data Guard


You’ve probably been reminded many times that force logging is important in a Data Guard configuration, and warned about the dramatic consequences of running a database without force logging. In this article, I am going to show you the behavior of a Data Guard (standby) database when the DISABLE_ARCHIVE_LOGGING transform is set to Y during an import (impdp), both with force logging enabled and with it disabled.

NOLOGGING versus Force Logging

NOLOGGING is, basically, a mode in which an operation does not generate redo, so no changes are recorded in the redo log files. There are occasions when putting tables in NOLOGGING mode is a good option, such as loading temporary data into tables that will be recycled or truncated anyway. The feature is enabled by the supplementary clause NOLOGGING.

Example: CREATE TABLE test (t1 NUMBER) NOLOGGING;

NOLOGGING can speed up operations, but it also has several disadvantages that we should be aware of.

Force logging, when enabled, ensures that all changes are written into the online redo logs; even if an object is created with the NOLOGGING clause, force logging overrides it and redo is generated anyway.
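For reference, force logging is toggled and verified at the database level; a minimal sketch:

```sql
-- Enable force logging for the whole database
-- (overrides any NOLOGGING clause on individual objects)
ALTER DATABASE FORCE LOGGING;

-- Verify the current setting (YES / NO)
SELECT force_logging FROM v$database;

-- To revert (not recommended in a Data Guard configuration)
ALTER DATABASE NO FORCE LOGGING;
```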

DISABLE_ARCHIVE_LOGGING

As we all know, from 12c onward we can disable archive logging during import operations, so that no redo is generated for the whole import job. The idea behind this feature: when an import runs with logging, we can see huge redo generation, which leads to frequent log switches (depending on the exported objects). To avoid putting that stress on the database during an import, Oracle 12c introduced the DISABLE_ARCHIVE_LOGGING feature, so the import no longer waits to write into the redo logs, which ultimately benefits performance.

Based on the above, if you decide to go straight for a NOLOGGING import, in some cases you will realize you made a mistake 🙂 Probably now you are more interested in knowing the disadvantages. There can be many, but I’ve tested specifically how an import with DISABLE_ARCHIVE_LOGGING works, and what its impact is, in a Data Guard configuration.

Before looking at the impact on Data Guard when using NOLOGGING in an import, take a look at the syntax to use in the import operation.

DISABLE_ARCHIVE_LOGGING must be used with the TRANSFORM clause by specifying “Y” (the default is “N”). For the syntax you can always get help from the impdp utility:

$impdp help=y
TRANSFORM
Metadata transform to apply to applicable objects.
Valid keywords are: DISABLE_ARCHIVE_LOGGING, LOB_STORAGE, OID, PCTSPACE,
SEGMENT_ATTRIBUTES, STORAGE, and TABLE_COMPRESSION_CLAUSE.
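Putting that together, a minimal impdp invocation using the transform would look like this (directory, dump file, and log file names here are placeholders):

```shell
# Import with redo logging disabled for the loaded tables
# (only honored when the database is NOT in force logging mode)
impdp system/password \
  directory=data_pump_dir \
  dumpfile=testobj.dmp \
  logfile=testobj_imp.log \
  transform=disable_archive_logging:y
```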

Import with DISABLE_ARCHIVE_LOGGING when Force Logging Is Enabled:

For the test below, I used three machines: one to run the export job, and the other two in a Data Guard configuration as the primary and physical standby databases.

SQL> select database_role,protection_mode,force_logging from v$database;

DATABASE_ROLE PROTECTION_MODE FORCE_LOGGING
—————- ——————– —————————————
PRIMARY MAXIMUM AVAILABILITY YES

SQL>
-bash-3.2$ expdp system/***** directory=data_pump_dir dumpfile=testobj.dmp logfile=testobj.log tables=scott.testobj

Export: Release 12.1.0.1.0 – Production on Sun Sep 21 03:05:23 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_TABLE_01″: system/******** directory=data_pump_dir dumpfile=testobj.dmp logfile=testobj.log tables=scott.testobj
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported “SCOTT”.”TESTOBJ” 10.42 MB 91431 rows
Master table “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u01/app/oracle/admin/orcl/dpdump/testobj.dmp
Job “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully completed at Sun Sep 21 03:06:08 2014 elapsed 0 00:00:37

Now we will import into the primary database, which also has a physical standby database fully in sync with it.

ID STATUS DB_MODE TYPE PROTECTION_MODE ARCHIVED_SEQ#
———- ——— ————— ———- ——————– ————-
1 VALID OPEN ARCH MAXIMUM PERFORMANCE 143
2 VALID OPEN_READ-ONLY LGWR MAXIMUM AVAILABILITY 143

The import was performed with additional options: tracing, and high-level details of the import process (metrics). The trace files can be found in the diag destination, in files matching *dw00*, for analysis or review.
-bash-3.2$ impdp system/free2go directory=pythian_dir dumpfile=testobj.dmp logfile=sh.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y

Import: Release 12.1.0.1.0 – Production on Sun Sep 21 03:10:48 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Startup took 1 seconds
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
import done in AL32UTF8 character set and UTF8 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** directory=pythian_dir dumpfile=testobj.dmp logfile=sh.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y
Processing object type TABLE_EXPORT/TABLE/TABLE
Completed 1 TABLE objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “C##NASSYAM”.”TESTOBJ” 10.42 MB 91431 rows in 5 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 1 TABLE_STATISTICS objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
;;; Ext Tbl Shadow: worker id 1.
Completed 1 MARKER objects in 19 seconds
Completed 1 TABLE_EXPORT/TABLE/TABLE_DATA objects in 5 seconds
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Sep 21 03:11:20 2014 elapsed 0 00:00:29

-bash-3.2$

[root@CKPT-ORA-03 trace]# cat mcdb_dw00_1766.trc | grep DISABLE_ARCHIVE_LOGGING
KUPW:11:57:34.203: 1: 13 Name – DISABLE_ARCHIVE_LOGGING
KUPW:11:57:37.350: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES called.
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES returned. In procedure CHANGE_ARCHIVE_LOGGING
KUPW:11:57:38.401: 1: operation DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.403: 1: In INIT_MD_ARCHIVE_LOGGING for DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.420: 1: l_valid: -1 – Not Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:38.423: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.423: 1: DBMS_METADATA.SET_TRANSFORM_PARAM returned. l_valid : 1 – Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:41.926: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
[root@CKPT-ORA-03 trace]#

So we have imported into the primary database successfully with the DISABLE_ARCHIVE_LOGGING feature, but what happened to the standby database?

SQL> select database_role,protection_mode from v$database;
DATABASE_ROLE PROTECTION_MODE
—————- ——————–
PHYSICAL STANDBY MAXIMUM AVAILABILITY

SQL> select count(*) from c##nassyam.testobj;
COUNT(*)
———-
91431

So even though DISABLE_ARCHIVE_LOGGING was specified during the import operation, the redo was still written into the redo logs and successfully transmitted to the standby database. Hence DISABLE_ARCHIVE_LOGGING is ignored when force logging is set at the database level.
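You can confirm this from the loading session itself by sampling the session’s redo statistic before and after the load; a minimal sketch (run the query twice and compare the values):

```sql
-- Redo bytes generated by the current session so far.
-- With force logging enabled, this grows by roughly the table
-- size during the import despite DISABLE_ARCHIVE_LOGGING.
SELECT n.name, s.value AS redo_bytes
FROM   v$mystat   s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';
```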

Import with DISABLE_ARCHIVE_LOGGING when Force Logging Is Not Enabled:

There are a few cases where the database runs without force logging and the configuration still works well, probably because no object was explicitly created with NOLOGGING.

SQL> select database_role,protection_mode,force_logging from v$database;

DATABASE_ROLE PROTECTION_MODE FORCE_LOGGING
—————- ——————– —————————————
PRIMARY MAXIMUM AVAILABILITY NO

SQL>
-bash-3.2$ impdp system/***** directory=pythian_dir dumpfile=testobj.dmp logfile=testobj.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y

Import: Release 12.1.0.1.0 – Production on Sun Sep 21 11:57:16 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Startup took 1 seconds
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
import done in AL32UTF8 character set and UTF8 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** directory=pythian_dir dumpfile=testobj.dmp logfile=testobj.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y
Processing object type TABLE_EXPORT/TABLE/TABLE
Completed 1 TABLE objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “C##NASSYAM”.”TESTOBJ” 10.42 MB 91431 rows in 3 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 1 TABLE_STATISTICS objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
;;; Ext Tbl Shadow: worker id 1.
Completed 1 MARKER objects in 50 seconds
Completed 1 TABLE_EXPORT/TABLE/TABLE_DATA objects in 3 seconds
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Sep 21 11:58:33 2014 elapsed 0 00:01:02

-bash-3.2$
[root@CKPT-ORA-03 trace]# cat mcdb_dw00_1766.trc | grep DISABLE_ARCHIVE_LOGGING
KUPW:11:57:34.203: 1: 13 Name – DISABLE_ARCHIVE_LOGGING
KUPW:11:57:37.350: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES called.
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES returned. In procedure CHANGE_ARCHIVE_LOGGING
KUPW:11:57:38.401: 1: operation DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.403: 1: In INIT_MD_ARCHIVE_LOGGING for DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.420: 1: l_valid: -1 – Not Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:38.423: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.423: 1: DBMS_METADATA.SET_TRANSFORM_PARAM returned. l_valid : 1 – Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:41.926: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
[root@CKPT-ORA-03 trace]#

So we are done importing the object with DISABLE_ARCHIVE_LOGGING while force logging is disabled. Now let’s verify on the standby whether the object was created, and with how many rows.

SQL> select count(*) from c##nassyam.testobj;
select count(*) from c##nassyam.testobj
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 179)
ORA-01110: data file 6: ‘/u02/app/oracle/oradata/mcdb/users01.dbf’
ORA-26040: Data block was loaded using the NOLOGGING option
SQL>

Sun Sep 21 12:00:33 2014
Errors in file /u01/app/oracle/diag/rdbms/india/drmcdb/trace/drmcdb_ora_407.trc (incident=4969):
ORA-01578: ORACLE data block corrupted (file # 6, block # 179)
ORA-01110: data file 6: ‘/u02/app/oracle/oradata/mcdb/users01.dbf’
ORA-26040: Data block was loaded using the NOLOGGING option
Incident details in: /u01/app/oracle/diag/rdbms/india/drmcdb/incident/incdir_4969/drmcdb_ora_407_i4969.trc
Sun Sep 21 12:00:35 2014
Checker run found 1 new persistent data failures
Sun Sep 21 12:00:36 2014
Dumping diagnostic data in directory=[cdmp_20140921120036], requested by (instance=1, osid=407), summary=[incident=4969].
Sun Sep 21 12:00:36 2014
Sweep [inc][4969]: completed
Sweep [inc2][4969]: completed

SQL> SELECT tablespace_name, segment_type, owner, segment_name, partition_name FROM dba_extents WHERE file_id = 6 and 179 between block_id AND block_id + blocks - 1;
TABLESPACE_NAME SEGMENT_TYPE OWNER SEGMENT_NA PARTITION_
————— —————— ———- ———- ———-
USERS TABLE C##NASSYAM TESTOBJ
SQL>

Yes, block corruption was found, and the corresponding object is “TESTOBJ”. Your standby is now stuck and lagging behind the primary database because DISABLE_ARCHIVE_LOGGING was used while force logging was disabled, and you face the additional risk of having to recover the standby.
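One way to spot datafiles that received NOLOGGING loads on the primary (and will therefore surface ORA-26040 on the standby) is to check the unrecoverable-change columns in V$DATAFILE:

```sql
-- Datafiles touched by unrecoverable (NOLOGGING) operations.
-- Run on the primary; nonzero rows indicate files whose standby
-- copies may contain soft-corrupt blocks.
SELECT file#, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_change# > 0;
```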

Additional Touch:

That said, recovery has also become much simpler in 12c: the standby can be recovered over the network with “RMAN> recover database from service <Oracle Net service> using backupset;”.

RMAN> recover database from service canada using backupset;
Starting recover at 21-SEP-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
skipping datafile 5; already restored to SCN 1913352
skipping datafile 7; already restored to SCN 1913352
……………………………………….
name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_135_b1wv7g6d_.arc thread=1 sequence=135
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_136_b1wwbk2s_.arc thread=1 sequence=136
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_137_b1wwbkqm_.arc thread=1 sequence=137
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_138_b1wwbqnq_.arc thread=1 sequence=138
media recovery complete, elapsed time: 00:00:01
Finished recover at 21-SEP-14
RMAN>
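If only a handful of datafiles are affected, you may not need to recover the whole database; a sketch of the more granular 12c network restore (assuming, as in the ORA-01110 message above, that datafile 6 is the corrupted one and “canada” is the Oracle Net service pointing at the primary):

```sql
-- Re-fetch just the damaged file from the primary over the network,
-- then recover it on the standby
RMAN> RESTORE DATAFILE 6 FROM SERVICE canada;
RMAN> RECOVER DATAFILE 6;
```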

Conclusion:

In a Data Guard configuration, when importing objects into the primary database, be sure not to use the new 12c DISABLE_ARCHIVE_LOGGING transform (leave it at its default of ‘N’, or do not mention it at all), particularly when your database is running without force logging. If you do use it, no redo is generated for the loaded data, so the standby cannot apply those changes and you might end up with corrupted objects on it. In other words, I highly recommend enabling force logging in a Data Guard environment; it completely neutralizes the effect of the new 12c DISABLE_ARCHIVE_LOGGING transform, whatever its value is.

Reference: http://www.toadworld.com/ (by an Oracle ACE Director)
