DROP PLUGGABLE DATABASE


Once you have issued the DROP PLUGGABLE DATABASE command, you can no longer use a previously taken backup of that particular PDB to recover it into the existing CDB. In fact, only the metadata in the controlfile or the RMAN catalog is deleted; the archive logs and backups themselves still persist.
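For reference, the point of no return is the drop itself. A minimal sketch, assuming the PDB from the example below (pdb2drop) and that it must be closed before it can be dropped:

ALTER PLUGGABLE DATABASE pdb2drop CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE pdb2drop INCLUDING DATAFILES;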

This is the error message you'll see when you try to restore a dropped pluggable database:

RMAN> restore pluggable database pdb2drop;

Starting restore at 01-JUN-15
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/01/2015 10:10:40
RMAN-06813: could not translate pluggable database pdb2drop

A more convenient way to restore a PDB once the magic command (DROP PLUGGABLE DATABASE) has been issued is to recover it into an auxiliary container database and then unplug it from there and plug it back into the original CDB.

The following MOS Note explains how to work around this issue:

MOS Note: 2034953.1
How to Restore Dropped PDB in Multitenant

In brief this MOS Note describes how to:

  • Create an auxiliary container database
  • Recover the backup (yes, you still need a backup of your container database) including this particular PDB
  • Unplug the PDB once recovery has finished and plug it back into the original CDB, as sketched below
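A rough sketch of that unplug/plug step (the XML path and the COPY clause are illustrative assumptions, not taken from the MOS note):

-- on the auxiliary CDB, once the PDB has been recovered:
ALTER PLUGGABLE DATABASE pdb2drop CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb2drop UNPLUG INTO '/tmp/pdb2drop.xml';

-- on the original CDB:
CREATE PLUGGABLE DATABASE pdb2drop USING '/tmp/pdb2drop.xml' COPY;
ALTER PLUGGABLE DATABASE pdb2drop OPEN;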

Reference: https://blogs.oracle.com/


12c: Careful with DISABLE_ARCHIVE_LOGGING in Data Guard


You've probably been reminded often that force logging is important in a Data Guard configuration, and warned about the dramatic consequences of running a database that is not in force logging mode. In this article I am going to show you the behavior of a Data Guard (standby) database when the DISABLE_ARCHIVE_LOGGING transform is used during an import (IMPDP), with force logging both enabled and disabled.

NOLOGGING versus Force Logging

NOLOGGING basically means the operation does not generate redo, so no changes are recorded in the redo log files. There are occasions when putting tables in NOLOGGING mode is a good option, for example when loading temporary data into tables that will be recycled or truncated anyway. The feature is enabled by adding the NOLOGGING clause:

Example: create table test(t1 number) nologging;

NOLOGGING has the ability to speed up operations but it has also several disadvantages that we should be aware of.

Force logging, when enabled, ensures that all changes are written to the online redo logs; even if an object is created with NOLOGGING, force logging overrides it.
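Checking and enabling force logging is a one-liner each; a minimal sketch:

SQL> select force_logging from v$database;
SQL> alter database force logging;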

DISABLE_ARCHIVE_LOGGING

As of 12c we can disable archive logging during import operations, so no redo is generated for the whole import job. The idea behind this feature is that an import running with logging can generate a huge amount of redo, which leads to frequent log switches (depending on the exported objects). To avoid that stress on the database during the import, Oracle 12c introduced DISABLE_ARCHIVE_LOGGING, so the import no longer has to wait to write into the redo logs, which ultimately benefits performance.

Based on the above, if you decide to go straight for a NOLOGGING import, in some cases you will realize you made a mistake 🙂 You are probably now more interested in the disadvantages. There can be many, but I have tested specifically how a Data Guard configuration behaves and what the impact is when an import is performed with DISABLE_ARCHIVE_LOGGING.

Before looking at the impact on Data Guard when using NOLOGGING in an import, take a look at the syntax used in the import operation.

DISABLE_ARCHIVE_LOGGING must be used with the TRANSFORM clause by specifying "Y"; the default is "N". You can always get the syntax from the impdp utility help:

$impdp help=y
TRANSFORM
Metadata transform to apply to applicable objects.
Valid keywords are: DISABLE_ARCHIVE_LOGGING, LOB_STORAGE, OID, PCTSPACE,
SEGMENT_ATTRIBUTES, STORAGE, and TABLE_COMPRESSION_CLAUSE.

Import with DISABLE_ARCHIVE_LOGGING when Force Logging Is Enabled:

For the test below I used three machines: one machine for the export job, and the other two in a Data Guard configuration with a primary and a physical standby database.

SQL> select database_role,protection_mode,force_logging from v$database;

DATABASE_ROLE    PROTECTION_MODE      FORCE_LOGGING
---------------- -------------------- ---------------------------------------
PRIMARY          MAXIMUM AVAILABILITY YES

SQL>
-bash-3.2$ expdp system/***** directory=data_pump_dir dumpfile=testobj.dmp logfile=testobj.log tables=scott.testobj

Export: Release 12.1.0.1.0 – Production on Sun Sep 21 03:05:23 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting “SYSTEM”.”SYS_EXPORT_TABLE_01″: system/******** directory=data_pump_dir dumpfile=testobj.dmp logfile=testobj.log tables=scott.testobj
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 13 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported “SCOTT”.”TESTOBJ” 10.42 MB 91431 rows
Master table “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/u01/app/oracle/admin/orcl/dpdump/testobj.dmp
Job “SYSTEM”.”SYS_EXPORT_TABLE_01″ successfully completed at Sun Sep 21 03:06:08 2014 elapsed 0 00:00:37

Now we will import into the primary database, which also has a physical standby database that is in sync with the primary.

        ID STATUS    DB_MODE         TYPE       PROTECTION_MODE      ARCHIVED_SEQ#
---------- --------- --------------- ---------- -------------------- -------------
         1 VALID     OPEN            ARCH       MAXIMUM PERFORMANCE            143
         2 VALID     OPEN_READ-ONLY  LGWR       MAXIMUM AVAILABILITY           143

I performed the import with additional options: tracing, and high-level details of the import process (metrics). The trace files can be found under the diag destination in the *dw00* files for analysis and review.
-bash-3.2$ impdp system/free2go directory=pythian_dir dumpfile=testobj.dmp logfile=sh.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y

Import: Release 12.1.0.1.0 – Production on Sun Sep 21 03:10:48 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Startup took 1 seconds
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
import done in AL32UTF8 character set and UTF8 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** directory=pythian_dir dumpfile=testobj.dmp logfile=sh.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y
Processing object type TABLE_EXPORT/TABLE/TABLE
Completed 1 TABLE objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “C##NASSYAM”.”TESTOBJ” 10.42 MB 91431 rows in 5 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 1 TABLE_STATISTICS objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
;;; Ext Tbl Shadow: worker id 1.
Completed 1 MARKER objects in 19 seconds
Completed 1 TABLE_EXPORT/TABLE/TABLE_DATA objects in 5 seconds
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Sep 21 03:11:20 2014 elapsed 0 00:00:29

-bash-3.2$

[root@CKPT-ORA-03 trace]# cat mcdb_dw00_1766.trc | grep DISABLE_ARCHIVE_LOGGING
KUPW:11:57:34.203: 1: 13 Name – DISABLE_ARCHIVE_LOGGING
KUPW:11:57:37.350: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES called.
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES returned. In procedure CHANGE_ARCHIVE_LOGGING
KUPW:11:57:38.401: 1: operation DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.403: 1: In INIT_MD_ARCHIVE_LOGGING for DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.420: 1: l_valid: -1 – Not Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:38.423: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.423: 1: DBMS_METADATA.SET_TRANSFORM_PARAM returned. l_valid : 1 – Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:41.926: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
[root@CKPT-ORA-03 trace]#

So we have imported into the primary database successfully with the DISABLE_ARCHIVE_LOGGING feature, but what happened on the standby database?

SQL> select database_role,protection_mode from v$database;

DATABASE_ROLE    PROTECTION_MODE
---------------- --------------------
PHYSICAL STANDBY MAXIMUM AVAILABILITY

SQL> select count(*) from c##nassyam.testobj;

  COUNT(*)
----------
     91431

So even though you specified DISABLE_ARCHIVE_LOGGING during the import, the redo was still written to the redo logs and successfully transmitted to the standby database. Hence DISABLE_ARCHIVE_LOGGING is ignored when force logging is enabled at the database level.

Import with DISABLE_ARCHIVE_LOGGING when Force Logging Is Not Enabled:

There are a few cases where a database runs without force logging and the configuration still works well, usually because no objects were explicitly created with NOLOGGING.

SQL> select database_role,protection_mode,force_logging from v$database;

DATABASE_ROLE    PROTECTION_MODE      FORCE_LOGGING
---------------- -------------------- ---------------------------------------
PRIMARY          MAXIMUM AVAILABILITY NO

SQL>
-bash-3.2$ impdp system/***** directory=pythian_dir dumpfile=testobj.dmp logfile=testobj.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y

Import: Release 12.1.0.1.0 – Production on Sun Sep 21 11:57:16 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

WARNING: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Startup took 1 seconds
Master table “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully loaded/unloaded
import done in AL32UTF8 character set and UTF8 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYSTEM”.”SYS_IMPORT_FULL_01″: system/******** directory=pythian_dir dumpfile=testobj.dmp logfile=testobj.log remap_schema=scott:c##nassyam transform=disable_archive_logging:y trace=1FF0300 metrics=y
Processing object type TABLE_EXPORT/TABLE/TABLE
Completed 1 TABLE objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “C##NASSYAM”.”TESTOBJ” 10.42 MB 91431 rows in 3 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 1 TABLE_STATISTICS objects in 0 seconds
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
;;; Ext Tbl Shadow: worker id 1.
Completed 1 MARKER objects in 50 seconds
Completed 1 TABLE_EXPORT/TABLE/TABLE_DATA objects in 3 seconds
Job “SYSTEM”.”SYS_IMPORT_FULL_01″ successfully completed at Sun Sep 21 11:58:33 2014 elapsed 0 00:01:02

-bash-3.2$
[root@CKPT-ORA-03 trace]# cat mcdb_dw00_1766.trc | grep DISABLE_ARCHIVE_LOGGING
KUPW:11:57:34.203: 1: 13 Name – DISABLE_ARCHIVE_LOGGING
KUPW:11:57:37.350: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES called.
KUPW:11:57:38.401: 1: DISABLE_ARCHIVE_LOGGING for TABLES returned. In procedure CHANGE_ARCHIVE_LOGGING
KUPW:11:57:38.401: 1: operation DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.403: 1: In INIT_MD_ARCHIVE_LOGGING for DISABLE_ARCHIVE_LOGGING
KUPW:11:57:38.420: 1: l_valid: -1 – Not Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:38.423: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
KUPW:11:57:38.423: 1: DBMS_METADATA.SET_TRANSFORM_PARAM returned. l_valid : 1 – Loading DISABLE_ARCHIVE_LOGGING with type NULL
KUPW:11:57:41.926: 1: l_valid: -3 – Not Loading DISABLE_ARCHIVE_LOGGING with type INDEX
[root@CKPT-ORA-03 trace]#

So we have imported the object into the database using DISABLE_ARCHIVE_LOGGING with force logging disabled. Now let's check the standby and verify whether the object was created and how many rows it has.

SQL> select count(*) from c##nassyam.testobj;
select count(*) from c##nassyam.testobj
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 179)
ORA-01110: data file 6: ‘/u02/app/oracle/oradata/mcdb/users01.dbf’
ORA-26040: Data block was loaded using the NOLOGGING option
SQL>

Sun Sep 21 12:00:33 2014
Errors in file /u01/app/oracle/diag/rdbms/india/drmcdb/trace/drmcdb_ora_407.trc (incident=4969):
ORA-01578: ORACLE data block corrupted (file # 6, block # 179)
ORA-01110: data file 6: ‘/u02/app/oracle/oradata/mcdb/users01.dbf’
ORA-26040: Data block was loaded using the NOLOGGING option
Incident details in: /u01/app/oracle/diag/rdbms/india/drmcdb/incident/incdir_4969/drmcdb_ora_407_i4969.trc
Sun Sep 21 12:00:35 2014
Checker run found 1 new persistent data failures
Sun Sep 21 12:00:36 2014
Dumping diagnostic data in directory=[cdmp_20140921120036], requested by (instance=1, osid=407), summary=[incident=4969].
Sun Sep 21 12:00:36 2014
Sweep [inc][4969]: completed
Sweep [inc2][4969]: completed

SQL> SELECT tablespace_name, segment_type, owner, segment_name, partition_name FROM dba_extents WHERE file_id = 6 and 179 between block_id AND block_id + blocks - 1;

TABLESPACE_NAME SEGMENT_TYPE       OWNER      SEGMENT_NA PARTITION_
--------------- ------------------ ---------- ---------- ----------
USERS           TABLE              C##NASSYAM TESTOBJ
SQL>

Yes, block corruption was found, and the corresponding object is TESTOBJ. Your standby is now stuck and lagging behind the primary because DISABLE_ARCHIVE_LOGGING was used while force logging was disabled. It is one more risk you have to recover your standby database from.

Additional Touch:

Recovery has also become much simpler in 12c: on the standby you can simply run "RMAN> recover database from service <Oracle Net Service> using backupset;", as shown below.

RMAN> recover database from service canada using backupset;
Starting recover at 21-SEP-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
skipping datafile 5; already restored to SCN 1913352
skipping datafile 7; already restored to SCN 1913352
……………………………………….
name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_135_b1wv7g6d_.arc thread=1 sequence=135
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_136_b1wwbk2s_.arc thread=1 sequence=136
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_137_b1wwbkqm_.arc thread=1 sequence=137
archived log file name=/u02/app/oracle/fast_recovery_area/INDIA/archivelog/2014_09_21/o1_mf_1_138_b1wwbqnq_.arc thread=1 sequence=138
media recovery complete, elapsed time: 00:00:01
Finished recover at 21-SEP-14
RMAN>

Conclusion:

In a Data Guard configuration, when you import objects into the primary database, make sure you do not set the new 12c DISABLE_ARCHIVE_LOGGING transform (leave it at its default value of N, or do not mention it at all), particularly when your database is running in non-force-logging mode. If you do, the data loaded with NOLOGGING is never written to the redo stream, so the standby ends up with blocks marked as loaded with NOLOGGING, in other words corrupted objects. I therefore highly recommend enabling force logging in a Data Guard environment; it completely neutralizes the effect of the new 12c DISABLE_ARCHIVE_LOGGING transform, whatever its value is.

Reference: http://www.toadworld.com/ (by an Oracle ACE Director)

Dual format of Oracle 12c (Boosting analytical queries) PART-1


Oracle Database In-Memory introduces a new in-memory column store into the Oracle database. It allows you to populate data in memory in a column format and have your queries automatically take advantage of that format. What types of queries are those? Typically analytical-style queries: queries that scan a lot of data, apply some WHERE clause predicates or filters to it, and return a subset of rows. The Oracle optimizer is fully aware of this new in-memory column store and will automatically direct the queries that will benefit from it to the column store. It does not replace the existing row format in memory that we've always had, or the buffer cache.


The column format is purely in-memory; it is not persisted on disk in any way, so nothing changes in terms of the disk capacity required to host your database when you use the in-memory store. The in-memory column store is a new component, or pool, inside the SGA (the Shared Global Area) of your database. It is a static pool, which means it will not grow or shrink during the lifetime of the database.

When you allocate the total SGA using the SGA_TARGET parameter, you need to ensure it is large enough to accommodate the existing components within the SGA, things like the buffer cache and the shared pool, as well as the new in-memory area. You specify the size of the in-memory area using the new INMEMORY_SIZE parameter. If you look at V$SGA you will see the new pool and how much memory from your total SGA has been allocated to it.

Remember, it is a static pool: it is not controlled or managed by the automatic memory management introduced in 11g, and it will neither grow nor shrink over the lifetime of the database. So make sure, when specifying INMEMORY_SIZE, that it is large enough to accommodate all of the objects you need in the column store, and that SGA_TARGET is set large enough to accommodate the new in-memory area as well as all the other existing components of your SGA.
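A minimal sketch of sizing the area, assuming an spfile is in use and that 2G is just a demo value; because the pool is static, the change only takes effect after a restart:

ALTER SYSTEM SET inmemory_size = 2G SCOPE=SPFILE;
-- restart the instance, then verify the new pool:
SHOW PARAMETER inmemory_size
SELECT * FROM v$sga;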

You get to decide which objects you want to populate into the column store, and those objects are compressed as they are populated. You can specify the new INMEMORY attribute at the table, partition, subpartition, or materialized view level. It can be specified simply by saying

ALTER TABLE sales INMEMORY;

If you want to remove an object from the column store you simply say

ALTER TABLE sales NO INMEMORY;

Because you can specify the in-memory attribute at the segment level (a partition or a subpartition), you can keep only your performance-sensitive data in the column store while the rest of the object resides on disk. You do this by specifying a different in-memory attribute on each of the partitions in that object. When a query comes in to access the object, the data is picked up from wherever it resides, whether that is the column store, disk, or a combination of both.
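For example, to keep only the current partition of SALES in the column store (the partition names here are hypothetical):

ALTER TABLE sales MODIFY PARTITION sales_q4_2014 INMEMORY;
ALTER TABLE sales MODIFY PARTITION sales_q1_2010 NO INMEMORY;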

IOTs (index-organized tables) and hash clusters, both of which are more OLTP-focused features, do not get populated into the column store. The reason is that these structures typically speed up access to an individual record rather than an analytical-style query that accesses lots of rows, so they are not really useful in the column store.

The other kind of object that does not always get populated into the column store is a LOB column. LOB columns are typically used to store unstructured data in the Oracle database, and they come in two forms: in-line LOBs and out-of-line LOBs.

An in-line LOB is one that’s 4k or less in size, and is stored co-located with the other column values of a particular row. Those in-line LOBs will be populated into the column store when the other rows and the other columns of that table are being populated into the column store. An out-of-line LOB, on the other hand, is a LOB that is greater than 4k in size and is stored separately from the other columns in that row. Those out-of-line LOBs, or larger LOBs, are not populated into the column store. We believe that they’ll waste a lot of space, and they’re frequently not accessed along with the rest of the columns.

So if you mark a table that has a LOB column as in-memory, any in-line LOBs are populated into the column store automatically along with the other column values of that table, while any out-of-line LOBs are excluded from the population. This happens automatically: if you mark a table in-memory that has only out-of-line LOBs, that column is automatically excluded and all the others are populated into the column store. Not all of the columns in all of the tables need to be populated into the column store; you can exclude some columns of a table during population simply by doing

ALTER TABLE sales INMEMORY NO INMEMORY (prod_id, client_id);

That allows you to save space in the column store by not populating infrequently accessed columns. You also get to control when objects are populated into the column store, by specifying a PRIORITY attribute on the objects:

create table orders(c1 number, c2 varchar2(20), c3 number)
inmemory priority critical
no inmemory(c1);

There are five PRIORITY attributes that you can specify, critical being the most important, high, medium, low, and none. None is the default, and that means the objects are only populated into the column store on first access. If you specify any of the other priorities on an object then, when the database starts up, we will process the objects that need to be populated into the column store in a prioritized list starting with the most critical objects first, then the hot objects with a high priority, medium priority, and low priority, until all of the objects that have been marked with the in-memory priority have been populated into the in-memory column store.

By default, all of the objects populated into the in-memory column store are compressed, using a new type of compression that is different from the compression techniques used to date in the Oracle database. These new compression algorithms are focused on query performance: they allow the WHERE clause predicates, or filter predicates, to be applied directly on the compressed data. By default you get MEMCOMPRESS FOR QUERY, but you can change the type of compression used on objects in the column store by specifying a different MEMCOMPRESS attribute when altering or creating an object. The choices range from no compression at all to MEMCOMPRESS FOR DML, MEMCOMPRESS FOR QUERY (LOW and HIGH), and MEMCOMPRESS FOR CAPACITY (LOW and HIGH).

MEMCOMPRESS FOR CAPACITY uses a higher level of compression to get more data into the column store. There is a slight gotcha, though: it requires additional CPU to decompress some of the data before applying your WHERE clause predicates to it. So it saves space, but at the price of additional CPU when you query the data.

So when you are populating the objects into the column store you need to bear in mind that trade-off between saving space and getting a great compression ratio and being able to apply the WHERE clause predicates of your queries efficiently against the data stored in the column store. You do have the option to specify a different compression technique for different columns within a single object, and of course, a different compression technique on all the different objects that are stored in the in-memory column store, should you wish.

How do I know which tables are marked for in-memory and which are not? Well, there's a new INMEMORY column in USER_TABLES.


You'll see there that this column is effectively a Boolean: it's either ENABLED or DISABLED. In this example, the PRODUCTS table has been ENABLED for the column store.

So if the column only has one of two values, ENABLED or DISABLED, why do two of the tables listed here have no value at all? Both the COSTS and SALES tables are partitioned, which means the table itself is just a logical object; the actual physical objects that carry the INMEMORY attribute are the underlying partitions or subpartitions. So to see whether those individual partitions have been marked for in-memory, you need to look at USER_TAB_PARTITIONS or USER_TAB_SUBPARTITIONS to see the INMEMORY attribute specified on those underlying physical objects.
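A sketch of that check (column names as documented in the 12.1 data dictionary; the table names are the ones from the example above):

SELECT table_name, partition_name, inmemory, inmemory_priority
FROM   user_tab_partitions
WHERE  table_name IN ('SALES', 'COSTS');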

You can check population by looking in a new v$ view called V$IM_SEGMENTS. It has a status column that tells you whether the object has completed population and is in the column store, or whether population has only been started.

Objects with a STARTED status are in the middle of population right now. There is an additional column in this view, BYTES_NOT_POPULATED, that indicates how much of the object is still left to be populated into the column store. Once that column reaches zero, the status changes from STARTED to COMPLETED.
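A sketch of that population check (the status column is exposed as POPULATE_STATUS in this view):

SELECT segment_name, partition_name, populate_status,
       bytes, inmemory_size, bytes_not_populated
FROM   v$im_segments;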

So now you know how to get data into the column store, and you know how to size the column store, but how much space do you really need? How well are your particular objects going to compress when you run MEMCOMPRESS on them? Well, you can find out by running the Compression Advisor.

alter materialized view mv1 inmemory memcompress for query;  -- the default

create table trades(name varchar2(20), description varchar2(200)) inmemory
memcompress for DML(description);

The Compression Advisor has been extended in Oracle Database 12c to make it MEMCOMPRESS-aware. So you’ll be able to run the Compression Advisor, just as you do today, by calling DBMS_COMPRESSION and specifying the new MEMCOMPRESS attributes there, and have it tell you what compression ratio you can expect for your objects should they be populated into the in-memory column store. If you’re not sure which objects you want to populate into the column store there is a new In-Memory Advisor. It will look at your existing AWR and ASH repositories and determine, based on the information we find there, which objects would be good candidates for the column store. It’s going to give you a prioritized list of those objects, let you know what kind of performance benefit we believe you’ll get by populating those objects into the column store, and it will also tell you what kind of compression we believe you’ll get for those objects.

Why is the row format not as efficient as the column format?


Take for example, our query here. I’m looking for an individual column in the table, in this case, column four. And my table has been populated in-memory in the row format into the buffer cache. When this query begins, we’ll need to find the offset for the first row in that table. We’ll walk down that row until we find the fourth column. We’ll extract the value that we’re interested in, and we’ll move to the offset of the next row, and so forth, until we’ve scanned all of the rows in the table returning the values just for column four.

If we were to execute the same query against the table in the column format, this is what you'd see. Inside the column store each of the columns in the table is stored separately. So we go directly to column four, we don't scan any additional information, and we're able to get all of the values we want directly from column four.

Another component of this, of course, is that the data in the column store is compressed, so the volume of data we actually scan in column four is a lot less than scanning the same column in the buffer cache in the row format. So we're getting several advantages: we only access the data we're interested in, and we access that data in a compressed format.

But there are some other advantages to scanning data in the column store, as well. One of those advantages is the fact that each of the columns in the in-memory column store actually has something called a storage index created on it. That storage index is keeping track of the min/max values for each of the extents, or IMCUs, within that column.


We’re able to use that storage index to provide data pruning, or data elimination, as part of the scan. So in other words, we’ll check the value that we’re looking for against that min/max range for each of the IMCUs in that column, and we will only scan the IMCUs where we find a match.

So say, for example, I’m looking for the store_ids where the store_id is eight or higher. I’ll look at the first extent, or IMCU, in this column. I’ll see that I don’t have any store_ids that are eight or higher, so I’m not going to even bother scanning that component, or extent, of this column. I’ll skip it.


I'll move on to the second one. It also has no entries that are eight or higher, so I'll skip that too. I'll move on to the third IMCU in the column. I find that it could contain a match for a store_id of eight or higher, so I will scan that particular IMCU. And I'll do the same for the fourth, because, again, it could contain values with a store_id of eight or higher. Through this data elimination, by not scanning data that cannot match, we improve scan performance by, again, reducing the amount of data that needs to be scanned as part of this query.

ADVANCED COMPRESSION WITH ORACLE 12C


First come the official statements; then, to aid our understanding, I have tried to explain things in my own notes.

What are the benefits of using Advanced Row Compression?

Advanced Row Compression uses a unique compression algorithm specifically designed to work with OLTP applications. The algorithm works by eliminating duplicate values within a database block, even across multiple columns. Compressed blocks contain a structure called a symbol table that maintains compression metadata. When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table. Through this innovative design, compressed data is self-contained within the database block, as the metadata used to translate compressed data into its original state is stored in the block header. When compared with competing compression algorithms that maintain a global database symbol table, Oracle's approach offers significant performance benefits by not introducing additional I/O when accessing compressed data.

The compression ratio achieved in a given environment depends on the data being compressed, specifically the cardinality of the data. In general, organizations can expect to reduce their storage space consumption by a factor of 2x to 4x by using Advanced Row Compression. That is, the amount of space consumed by uncompressed data will be two to four times larger than that of the compressed data.

The benefits of Advanced Row Compression go beyond just on-disk storage savings. One significant advantage is Oracle's ability to read compressed blocks directly without uncompressing the blocks. This helps improve performance due to the reduction in I/O, and the reduction in system calls related to the I/O operations. Further, the buffer cache becomes more efficient by storing more data without having to add memory.

———————————————————————————————–

Oracle Database 9i introduced Basic Compression, which only compressed data that was loaded using bulk load operations. Advanced Row Compression, a feature of Advanced Compression, allows data to be compressed during all types of data manipulation operations, including conventional DML such as INSERT and UPDATE. In addition, Advanced Row Compression reduces the compression overhead associated with write operations, making it suitable for transactional/OLTP environments. Advanced Row Compression therefore extends the benefits of compression to all application workloads. Although storage cost savings and optimization across servers (production, development, QA, test, backup, etc.) are often seen as the most tangible benefits, additional innovative technologies included in Advanced Compression are designed to improve performance and to reduce CapEx and OpEx costs for all components of your IT infrastructure, including memory and network bandwidth as well as heating, cooling and floor-space costs.
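A minimal sketch of enabling it on a table, using the 12c syntax (the table name fred is borrowed from the 11g examples further down; in 11g the equivalent clause was COMPRESS FOR OLTP):

create table fred (col1 number) row store compress advanced;
-- or compress an existing table (this rebuilds the segment):
alter table fred move row store compress advanced;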

QUESTION:

I am already using the basic RMAN backup compression feature. In what way is the RMAN backup compression included in Advanced Compression superior to this?

Advanced Compression provides three levels of RMAN compression: LOW, MEDIUM, and HIGH. The amount of storage savings increases from LOW to HIGH, while potentially consuming more CPU resources. LOW provides the fastest compression algorithm and is best suited when the backup is constrained by CPU. MEDIUM provides a balance between CPU usage and compression ratio. Finally, HIGH provides the best compression ratio with the highest CPU utilization, and is best suited when the backup is constrained by network or I/O.
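A rough sketch of choosing a level and taking a compressed backup (standard RMAN commands; MEDIUM is just an example choice):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;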

What is the performance impact of using Advanced Row Compression?

For DML operations on a compressed table, Advanced Row Compression’s specialized batch algorithm keeps the performance overhead to a minimum. Internal tests at Oracle showed a minimal overhead of less than 5% CPU for a DML workload. It is important to note that Oracle compresses blocks in batch mode rather than compressing data every time a write operation takes place. When a transaction causes the data in the block to reach an internal threshold, all contents of the block are compressed. Subsequently, as more data is added to the block and the threshold is again reached, the entire block is recompressed to achieve the highest level of compression. This process repeats until Oracle determines that the block can no longer benefit from further compression. Only transactions that trigger the compression of the block will experience the slight compression overhead. Therefore, a majority of OLTP transactions on compressed blocks will have the exact same performance as they would with uncompressed blocks.

In today’s Oracle database management systems, physical disk I/O remains one of the foremost bottlenecks. Even at relatively fast speeds of 10 milliseconds, many data intensive Oracle applications can still choke on I/O by having disk enqueues, Oracle block read tasks waiting to pull information from the spinning platters.  Data compression is certainly useful for reducing the amount of physical disk I/O but there are some caveats that need to be followed by the Oracle database administrator.

———————————————————————————————–

Oracle 11g compression syntax:

The 11g docs note that the new COMPRESS keyword works for tables, table partitions and entire tablespaces.  Oracle has implemented their data compression at the table level, using new keywords within the “create table” DDL:

1. create table fred (col1 number) NOCOMPRESS;

2. create table fred (col1 number) COMPRESS FOR DIRECT_LOAD OPERATIONS;

3. create table fred (col1 number) COMPRESS FOR ALL OPERATIONS;
4. alter table fred move COMPRESS;


DRCP – Database Resident Connection Pooling


DATABASE RESIDENT CONNECTION POOLING:

Points we will cover:
  • What is it?
  • When to use it?
  • Configuration
  • Monitoring

When should anything be pooled?
1. When something is expensive to create and tear down
** CPU intensive
2. When you don't need the thing all the time
** memory and process space wasted
Database connections satisfy both of the above criteria.

DATABASE CONNECTIONS:

1. Use non-persistent connections?
high connection times
unnecessary connect/disconnect CPU load
2. Use persistent connections?
too many connections hanging around
the majority not doing any work

Database connection resources:

1. Physical connection creation
** listener spawns the server process, initialization, socket handoff (the client socket is handed off to the server process)
** capability and protocol negotiation
** consumes PGA (around 4 MB)
2. Logon
** authentication
** session creation
** UGA (around 400 KB)

DATABASE connections in middle tiers

Homogeneity:
1. Connections to the same DB
2. Login as the same user/schema
3. Fixed set of applications
4. Repeated queries
5. Uniform technology (mainly Java, PHP, etc.)
6. Business processing (performed at the client side) interleaved with database activity
7. SO THERE IS LARGE SCOPE FOR RESOURCE REUSE AND OPTIMIZATION

SO what are the OPTIONS we have?

1. Application-level connection pooling (applications can be multithreaded and each thread performs DB activity)
** OCI, JDBC/OCI, custom
** multiple threads share physical connections
** multiplex user sessions over fewer connections/servers
** stateful sessions, stateless connections **MOST IMPORTANT**

2. Application-level session pooling
** OCI, JDBC, custom
** multiple threads share connections + sessions, stateless

WHAT ELSE???

Applications could be single-threaded and run as N processes instead of threads, and these application processes could be running from different hosts as well. All of them make connections to the database, and the number of processes can be very large. In this case client-side connection pooling is not possible, because the processes do not share sessions among threads, and there is no infrastructure to share connections across different hosts.

OPTIONS:

An application process can repeatedly connect/disconnect
An application process can hold onto its connection

But we know the drawbacks of both approaches.

SHARED SERVERS can be an option
** server side
** clients connect to dispatchers
** multiplex user sessions over a few shared server processes
** CAN'T REALLY SHARE SESSIONS **IMPORTANT**

DATABASE RESIDENT CONNECTION POOLING (DRCP)
** an Oracle Database 11g feature
** a pool of dedicated servers + sessions on the database machine
** pool shared across application processes and middle-tier nodes
** reduces memory, can improve performance
** co-exists with the dedicated and shared server approaches, and even with RAC
** most drivers support it: OCI, JDBC, OCCI, open source drivers

We have a set of application servers that are going to make use of the DB server. DRCP runs on the database server itself.

There is a process called the connection broker, and there are pool servers, which are the actual server processes that run the workload. A few of them are designated as AUTH servers, whose sole purpose is to authenticate client connections; only then do clients talk to the DB. So what happens when DRCP is up and the DB listener knows about it? DRCP automatically registers itself with the listener, and when a client asks for a connection the listener redirects it to the connection broker. Since it is a fresh connection that has not yet been authenticated, the broker hands it off to an AUTH server, and authentication happens between the client and the authentication server. Once that succeeds, the client (socket) is connected to the broker, AND THIS IS THE CONNECTION THAT CAN BE KEPT OPEN PERSISTENTLY, because it is just a connection: it does not occupy any resources on the DB server in terms of a server process or session memory. On this persistent connection the client then makes a request to do work. The broker picks a free pool server that satisfies the criteria and hands the connection off to that pool server, which is then marked BUSY.

SO WHEN THE CLIENT PERFORMS SUCH A TRANSACTION, IT IS CONNECTED TO THE SERVER PROCESS JUST LIKE A DEDICATED SERVER PROCESS; THERE IS NO OTHER PROCESS IN BETWEEN WHILE THE SQL TRAFFIC IS GOING ON.
Once the application is done with its DB activity it releases the connection, and the connection is released back to the pool: the (busy) connection is handed back to the broker, and the server process is freed up for reuse.

AND THE CYCLE GOES ON AND ON. ISN'T IT EASY?


ONE IMPORTANT THING HERE: ONLY THE CURRENTLY ACTIVE CONNECTIONS' SERVER PROCESS MEMORY PLUS SESSION MEMORY CONSUMES MEMORY ON THE DB SIDE.

COMPARISON OF DRCP VS DEDICATED AND SHARED SERVER processes.
Scenario with multiple application servers:
For example, in a middle-tier connection pool, if the minimum pool size is 100, then the connection pool has 100 connections to the server, and the database server has 100 server processes associated with these connections. If there are 30 middle tiers with a connection pool of minimum size 100, then the server has 3000 (100 * 30) corresponding server processes running. Typically, on average only 5% of the connections, and in turn server processes, are in use at any given time. So, out of the 3,000 server processes, only 150 are active at any given time. That leaves over 2,850 unused server processes on the server, which are wasted resources.

———————————————————————————————–
Start DRCP
Use dbms_connection_pool.start_pool to start DRCP with the default settings:
sqlplus /nolog
connect / as sysdba
execute dbms_connection_pool.start_pool();
Stop DRCP
Use dbms_connection_pool.stop_pool to stop DRCP:
sqlplus /nolog
connect / as sysdba
execute dbms_connection_pool.stop_pool();
Routing Client Connection Requests to the Connection Pool
server01.joords.nl:1521/books.joords.nl:POOLED
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=server01.joords.nl)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=book.joords.nl)(SERVER=POOLED)))
jdbc:oracle:thin:@//server01.joords.nl:5221/book.joords.nl:POOLED
RAC and DRCP
Database Resident Connection Pooling is certified to work with Real Application Clusters (RAC). Each database instance has its own connection broker
and pool of servers. Each pool has the identical configuration; for example, all pools start with MINSIZE server processes.
A single dbms_connection_pool command alters the pool on every instance at the same time.
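A minimal sketch of adjusting the pool, assuming the default pool name SYS_DEFAULT_CONNECTION_POOL and demo values:

execute dbms_connection_pool.alter_param('SYS_DEFAULT_CONNECTION_POOL','MINSIZE','10');
execute dbms_connection_pool.alter_param('SYS_DEFAULT_CONNECTION_POOL','MAXSIZE','64');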

———————————————————————————————–
CLIENT CONNECT STRING
The connect string must be updated to request a pooled server, either in easy connect form, hostname/service:POOLED, or by adding
(server=pooled)
to the TNS descriptor. This is how the client is directed to DRCP.
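For the Monitoring point from the list at the top of this section, a sketch of the usual views (the column picks are indicative, not exhaustive):

select connection_pool, status, minsize, maxsize from dba_cpool_info;
select num_open_servers, num_busy_servers, num_requests, num_hits, num_misses
from   v$cpool_stats;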

Using In-Database Row Archiving


Connect to pdb1 as the HR user.

. oraenv
[Enter cdb1 at the prompt.]

sqlplus hr/hr@localhost:1521/pdb1

Create a copy of the HR.employees table. Call the copy HR.emp_arch, and only copy over 4 rows.

create table emp_arch
as select employee_id, first_name from employees where rownum <= 4;

Enable row archival on HR.emp_arch.

alter table emp_arch row archival;

There is a hidden column in row-archival-enabled tables called ora_archive_state which indicates whether a row is archived or not. The hidden column is only displayed if explicitly specified in a query. First, describe the table structure of HR.emp_arch. Notice that the ora_archive_state column is not listed.

desc emp_arch

Now, query the HR.emp_arch table. Display the ora_archive_state column in the query result.

select employee_id, first_name, ora_archive_state from emp_arch;

A value of 0 in the ora_archive_state column means the row is not archived; in other words, the row is active and visible via a standard query.

Confirm that all rows are active by issuing a standard query. You should see all 4 rows.

select * from emp_arch;

Setting the Archive State of Some Rows

Use the dbms_ilm.archivestatename function to update the ora_archive_state value for employee_id 102 and 103.

update emp_arch
set ora_archive_state=dbms_ilm.archivestatename(1)
where employee_id in (102, 103);

commit;

Confirm that now only 2 rows are visible via a standard query. Include the ora_archive_state column in the query. The ora_archive_state value for the 2 rows should be 0.

select employee_id, first_name, ora_archive_state from emp_arch;

Enabling a Session to View Archived Rows

Set the archival visibility to ALL for the session. This will allow this session to view all rows, archived or not.

alter session set row archival visibility = all;

Issue the same query as before, but now you should see all 4 rows.

select employee_id, first_name, ora_archive_state from emp_arch;

Set the archival visibility to ACTIVE for the session. This will allow this session to view only active (unarchived) rows.

alter session set row archival visibility = active;

Issue the same query as before, but now you should see only the 2 active rows.

select employee_id, first_name, ora_archive_state from emp_arch;

Verifying that Copying a Row-Archival Table Will Not Propagate the Source Table’s Archival State to the Target Table

Use CTAS to make a copy of the HR.emp_arch table.

create table emp_arch_copy as select employee_id, first_name from emp_arch;

Verify that HR.emp_arch_copy does not have row archival enabled, and therefore does not contain the ora_archive_state hidden column. You should see an “invalid identifier” error for this query.

select employee_id, first_name, ora_archive_state from emp_arch_copy;

Query the table again, but this time omit the hidden column. Notice that all 4 rows are copied even though you set the row archival visibility for this session to active rows only.

select employee_id, first_name from emp_arch_copy;

Set the archival visibility to ALL for the session. From this point forward, this session should now see all rows, whether archived or not.

alter session set row archival visibility = all;

Enable row archival for HR.emp_arch_copy.

alter table emp_arch_copy row archival;

Verify that HR.emp_arch_copy now has row archival enabled, and therefore contains the ora_archive_state hidden column.

select employee_id, first_name, ora_archive_state from emp_arch_copy;

Use the dbms_ilm.archivestatename function to update the ora_archive_state value for employee_id 102 and 103.

update emp_arch_copy
set ora_archive_state=dbms_ilm.archivestatename(1)
where employee_id in (102, 103);

commit;

Confirm that now 2 rows are active, and 2 rows are inactive, via a standard query. Include the ora_archive_state column in the query. The ora_archive_state value for the 2 active rows should be 0, and the 2 inactive rows should be 1.

select employee_id, first_name, ora_archive_state from emp_arch_copy;

Recall that the HR.emp_arch table you created earlier contains 2 active and 2 inactive rows. Confirm this via a standard query. Include the ora_archive_state column in the query. The ora_archive_state value for the 2 active rows should be 0, and the 2 inactive rows should be 1.

select employee_id, first_name, ora_archive_state from emp_arch;

Copy all rows from HR.emp_arch to HR.emp_arch_copy. To distinguish between rows that were previously copied to HR.emp_arch_copy via CTAS and the rows being copied again now, append '_New' to the values in the first_name column.

insert into emp_arch_copy select employee_id, first_name || ‘_New’ from emp_arch;

commit;

What row archival state do you think the newly inserted rows have? Remember that the archival state is not copied along with the rows.

Query HR.emp_arch_copy to determine the ora_archive_state value for each row.

select employee_id, first_name, ora_archive_state from emp_arch_copy;

You should see that the ora_archive_state values from HR.emp_arch are not propagated to HR.emp_arch_copy. All of the _New rows in HR.emp_arch_copy have ora_archive_state set to the default value of 0 (active). The 2 inactive rows resulted from the update you performed in the HR.emp_arch_copy table in step 7.

Resetting Your Environment

Perform the following steps to reset your environment prior to repeating the activities covered in this OBE or starting another OBE.

Drop the tables created in this tutorial.

drop table emp_arch;

drop table emp_arch_copy;

 —————————————————————————————–

So what's the use of all of this? Suppose I have a table holding my company's financial records for the last 10 years; that's terabytes of data. I don't want to keep such a large amount of data on my expensive and finite storage. One thing we can do is compress the past 9 years of data and move it to low-cost storage (like tape), and when we need that data we can fetch it again, thus improving the efficiency of data retrieval.