Primary key with a multi-column index

DB version: Oracle 12.1

SQL> create table system.test (col1 number,col2 number, col3 number);

Table created.

SQL> create index system.idx1 on system.test (col2,col1);

Index created.

SQL> alter table system.test ADD PRIMARY KEY (col1) using index;

Table altered.

SQL> select count(*) from dba_indexes where table_name='TEST' and owner='SYSTEM';

  COUNT(*)
----------
         2

Oracle created a second index to support the primary key, even though idx1 already contains col1.
But if we do the same steps and just change the index column order:
SQL> create index system.idx1 on system.test (col1,col2);

Index created.

we get a total of just one index for the table:

SQL> select count(*) from dba_indexes where table_name='TEST' and owner='SYSTEM';

  COUNT(*)
----------
         1

Why doesn't Oracle use the existing index in the first case?

Answer: To use a multi-column index for a primary key, the key column(s) must be the leading columns of the index. Remember, Oracle supports only an "index skip scan" for non-leading columns, and skip scanning is advantageous only when there are few distinct values in the leading column of the composite index and many distinct values in the non-leading key. Rowid mapping is always based on the leading columns. Since col1 is not the leading column of idx1 (col2, col1), Oracle builds a separate index to enforce the constraint.
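To confirm which index is backing the constraint in each case, you can query DBA_CONSTRAINTS, which exposes the supporting index in its INDEX_NAME column (a quick sketch; adjust owner and table names to your case):

SQL> select constraint_name, index_name
     from dba_constraints
     where owner = 'SYSTEM' and table_name = 'TEST'
     and constraint_type = 'P';

In the first case you should see a system-generated index name; in the second case INDEX_NAME will be IDX1.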


Find size of a table and associated segments (index, LOB)


SELECT owner, table_name, segment_type, TRUNC(SUM(bytes)/1024/1024/1024) size_gb
FROM (SELECT segment_name table_name, owner, bytes, segment_type
      FROM dba_segments
      WHERE segment_type = 'TABLE'
      UNION ALL
      SELECT i.table_name, i.owner, s.bytes, segment_type
      FROM dba_indexes i, dba_segments s
      WHERE s.segment_name = i.index_name
      AND   s.owner = i.owner
      AND   s.segment_type = 'INDEX'
      UNION ALL
      SELECT l.table_name, l.owner, s.bytes, segment_type
      FROM dba_lobs l, dba_segments s
      WHERE s.segment_name = l.segment_name
      AND   s.owner = l.owner
      AND   s.segment_type = 'LOBSEGMENT'
      UNION ALL
      SELECT l.table_name, l.owner, s.bytes, segment_type
      FROM dba_lobs l, dba_segments s
      WHERE s.segment_name = l.index_name
      AND   s.owner = l.owner
      AND   s.segment_type = 'LOBINDEX')
WHERE owner = '&owner'
AND   table_name = '&table'
GROUP BY owner, table_name, segment_type
ORDER BY SUM(bytes) DESC;

Enter value for owner: SYS
old 22: WHERE owner = '&owner'
new 22: WHERE owner = 'SYS'
Enter value for table: T1
old 23: AND table_name = '&table'
new 23: AND table_name = 'T1'


OWNER  TABLE_NAME  SEGMENT_TYPE  SIZE_GB
-----  ----------  ------------  -------
SYS    T1          TABLE             1.2
SYS    T1          INDEX               1


Bad SQL design

Important point

If the statement is designed poorly, nothing much can be done by the optimizer or indexes

Few known thumb rules

–Avoid Cartesian joins

–Use UNION ALL instead of UNION – if possible

–Use EXISTS clause instead of IN – (wherever appropriate)

–Use ORDER BY only when you really require it – it's very costly

–When joining 2 views that themselves select from other views, check that the 2 views that you are using do not join the same tables!

–Avoid NOT IN or NOT = on indexed columns. They prevent the optimizer from using indexes. Use WHERE amount > 0 instead of WHERE amount != 0

– Avoid writing WHERE ... IS NOT NULL. NULLs can prevent the optimizer from using an index

– Avoid calculations on indexed columns. Write WHERE approved_amt > 26000*3 instead of WHERE approved_amt/3 > 26000

– The query below will return any record where bmm_code = 'cORE', 'Core', 'CORE', 'COre', etc.

select appl_appl_id from nci_appl_elements_t where upper(bmm_code) LIKE 'CORE%'

But this query can be very inefficient as it results in a full table scan. It cannot make use of the index on bmm_code.

Instead, write it like this:

select appl_appl_id from nci_appl_elements_t where (bmm_code like 'C%' or bmm_code like 'c%') and upper(bmm_code) LIKE 'CORE%'

This results in an index range scan.

You can also make this more efficient by using 2 characters instead of just one:

where ((bmm_code like 'CO%' or bmm_code like 'Co%' or bmm_code like 'cO%' or bmm_code like 'co%') and upper(bmm_code) LIKE 'CORE%')
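Another option worth considering, if you are free to add an index to the table, is a function-based index on UPPER(bmm_code), which lets the original case-insensitive predicate use an index range scan directly (the index name below is hypothetical):

create index nci_bmm_code_upper_idx on nci_appl_elements_t (upper(bmm_code));

select appl_appl_id from nci_appl_elements_t where upper(bmm_code) LIKE 'CORE%';

With this index in place the extra bmm_code LIKE 'C%' ... predicates are no longer needed.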

40 Tips From Tom

1. Row-level locking has no overhead, not for 1 lock, not for 1 billion locks
2. Reads are never blocked, and reads don’t block
3. The best UI is a command line
4. Always use packages, for all good, real, production code
4a. Never use a standalone procedure except for demos, tests and standalone utilities that call nothing and are called by nothing.
4b. Packages break the dependency chain, support encapsulation, increase namespace, support overloading and session variables, and promote overall good coding techniques
5. Use the same source control techniques for PL/SQL as you would for any other language (C, Java, VB)
6. Use dbms_stats instead of Analyze
6a. Easier to automate, the stated/preferred way of collecting statistics, can analyze external tables, gathers statistics needed for the CBO (and nothing extra)
7. Serializable does not imply serial ordering
8. To be useful, know more than just the technology, know the business and the goals of your organisation
9. Joins are not evil. Databases were born to join.
10. Never compare strings to dates, and dates to strings. Compare dates to dates, strings to strings, numbers to numbers and never any other combination.
11. Beware of implicit conversions of dates and numbers. Avoid implicit like the plague, explicit is good
12. Never TO_NUMBER a number, never TO_DATE a date
13. Stop using YY in your date formatting, now and forever
14. Autonomous transactions as a feature was a huge mistake, they are rarely used in a manner that is safe.
15. Never copy online redolog when your database is in archivelog mode
16. Never restore the online redo log files
17. Rollback is not just for modifications, this is fundamental about Oracle
18. Stop committing until your transaction is complete
19. Don’t use a loop when a single query will do
19a. Use bulk processing instead of slow-by-slow processing
20. For performance reasons, call PL/SQL (or worse, Java) from SQL only if and where there is quite simply no other way
21. Large databases are neither slow nor scary
22. Analytical functions are for those who crave, desire or just sometimes NEED speed.
23. The way to understand is to do
24. Soft parse percentage? 99% and above is good.
25. Select only the columns you need, not *
26. When designing a table, put the most frequently accessed columns first, those most likely to be NULL last, but don't overanalyse it.
27. Disk is cheap. Data integrity, priceless.
28. You cannot put CPU in the bank and save it for later. If you are at 99.99% utilization — pat yourself on the back, you’ve done well.
29. Analytics rock. Analytics roll.
30. Don’t use the slower UNION when UNION ALL will do
31. Never never never do dynamically what you can do statically, in PL/SQL.
32. You want to scale? Use static SQL in PL/SQL, all binds, all the time.
33. Triggers are evil.
34. Magic should be avoided. Experience tells me this.
35. Never create stuff as "sys" or "sysdba", that's a special magical account not to be used by you or me.
36. Be careful doing anything non-transactional in a trigger. It won’t rollback when the transaction rolls back.
36a. Be very careful using any routine that starts with UTL_ in a trigger, they’re generally non-transactional
37. Use dbms_application_info package to allow you to show a procedure’s progress in v$session.
38. The sheer beauty of PL/SQL is that your concerns with other languages, like “parse once; execute many” and using bind variables are done automatically and transparently.
39. Cursor sharing is a bad thing, if you don’t need it
40. Not binding is actually harder to do than binding (and less secure)
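Tips 31, 32 and 40 can be illustrated with a small sketch (the table and procedure names are hypothetical, using the classic EMP demo schema): static SQL in PL/SQL binds variables automatically, while concatenating values into dynamic SQL causes hard parses and opens the door to SQL injection.

-- Good: static SQL; p_id is bound automatically and transparently
create or replace procedure get_salary (p_id in number, p_sal out number) as
begin
  select sal into p_sal from emp where empno = p_id;
end;
/

-- Bad: dynamic SQL with a concatenated literal; a new hard parse
-- per distinct p_id, and an injection risk if the input were a string:
-- execute immediate 'select sal from emp where empno = ' || p_id into p_sal;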

Db file sequential read while doing a full table scan?

These days we are working on a data warehouse in which we have a master table that gets about 1.5 million rows inserted every half hour, and a few fast-refresh materialized views based on it. These mviews use some aggregate functions, which makes them a bit complex.

At the start, each mview refresh used to take some 18-20 minutes, which was totally against the business requirement. So we tried to figure out why the mview refresh was taking so much time, in spite of our having dropped all the bitmap indexes on the mview (bitmap indexes are generally not good for inserts/updates).

The 10046 trace (level 12) highlighted many "db file sequential read" waits on the mview, because the optimizer was using the "I_SNAP$_mview" index to fetch rows from the mview and merge them with rows from the master table to build the aggregated data for the mview.

The good part of the story is that access to the master table was quite fast, because we used direct-path load (sqlldr direct=y) to insert the data. When you use direct-path load, Oracle maintains the list of rowids added to the table in a view called SYS.ALL_SUMDELTA. So during a fast mview refresh, the newly inserted rows are picked directly from the table using the rowids given by ALL_SUMDELTA, not from the mview log, which saves time.

The concerning part was that Oracle was still using the I_SNAP$ index while fetching data from the mview: there were many "db file sequential read" waits, and it was clearly visible that Oracle waited on sequential reads the most. By running a simple test against the table we figured out that a full table scan (which uses scattered reads and the multiblock read count) was very fast in comparison to index access. Also, the master table and its dependent mviews hold only the current day's data: at the end of the day the data is pushed to historical tables, and the master table and mviews are empty post midnight.

I gathered stats on the mview, re-ran the mview refresh and traced the session, and this time the optimizer didn't use the index, which was good news.

Now the challenge was either to run the mview stats-gathering job every half hour, to induce deliberately wrong stats on the table/index so that the mview refresh never uses index access, or to lock the stats using DBMS_STATS.LOCK_TABLE_STATS.
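Locking the statistics is a one-liner; the owner below is a placeholder, substitute your own:

SQL> exec DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'SF_ENV_DATA_MV');

Once locked, subsequent gather jobs skip the table until you call UNLOCK_TABLE_STATS.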

But we found another solution: creating the mview with the USING NO INDEX clause. This way the "I_SNAP$" index is not created by the CREATE MATERIALIZED VIEW command. According to Oracle the "I_SNAP$" index is good for fast refresh, but it proved to be the reverse for us, because our environment is different and the data changes quite frequently.
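A sketch of such a create statement (the defining query and column names here are hypothetical simplifications; fast refresh also requires a materialized view log on the master table):

create materialized view sf_env_data_mv
refresh fast on demand
using no index            -- skip creation of the I_SNAP$ index
as
select env_id, count(*) cnt, sum(val) total
from   sf_env_slice_data
group by env_id;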

Then we ran the tests again: we loaded 48 slices of data (24 hours x 2 loads per hour) and the results were above expectations. We could load the data in at most 3 minutes per load.

This is not the end of the story. In the trace we could see the mview refresh using a MERGE command, with full table scan access to the mview (which we wanted) and rowid range access to the master table.

The explain plan looks like:

Rows     Row Source Operation
——-  —————————————————
2  MERGE  SF_ENV_DATA_MV (cr=4598 pr=5376 pw=0 time=47493463 us)
263052   VIEW  (cr=3703 pr=3488 pw=0 time=24390284 us)
263052    HASH JOIN OUTER (cr=3703 pr=3488 pw=0 time=24127224 us)
263052     VIEW  (cr=1800 pr=1790 pw=0 time=14731732 us)
263052      SORT GROUP BY (cr=1800 pr=1790 pw=0 time=14205624 us)
784862       VIEW  (cr=1800 pr=1790 pw=0 time=3953958 us)
784862        NESTED LOOPS  (cr=1800 pr=1790 pw=0 time=3169093 us)
1         VIEW  ALL_SUMDELTA (cr=9 pr=0 pw=0 time=468 us)
1          FILTER  (cr=9 pr=0 pw=0 time=464 us)
1           MERGE JOIN CARTESIAN (cr=9 pr=0 pw=0 time=459 us)
1            NESTED LOOPS  (cr=6 pr=0 pw=0 time=99 us)
1             TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 pr=0 pw=0 time=56 us)
1              INDEX UNIQUE SCAN I_OBJ1 (cr=2 pr=0 pw=0 time=23 us)(object id 36)
1             TABLE ACCESS CLUSTER USER$ (cr=3 pr=0 pw=0 time=40 us)
1              INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=7 us)(object id 11)
1            BUFFER SORT (cr=3 pr=0 pw=0 time=354 us)
1             INDEX RANGE SCAN I_SUMDELTA$ (cr=3 pr=0 pw=0 time=243 us)(object id 158)
0           NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
0            INDEX RANGE SCAN I_OBJAUTH1 (cr=0 pr=0 pw=0 time=0 us)(object id 103)
0            FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=0 us)
0           FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
784862         TABLE ACCESS BY ROWID RANGE SF_ENV_SLICE_DATA (cr=1791 pr=1790 pw=0 time=2383760 us)
708905     MAT_VIEW ACCESS FULL SF_ENV_DATA_MV (cr=1903 pr=1698 pw=0 time=6387829 us)

You can see the access pattern as above.

Interesting twist in the story is when I saw the wait events in trace file.
Event waited on                      Times Waited  Max. Wait  Total Waited
----------------------------------  ------------  ---------  ------------
db file sequential read                     2253       0.74          7.73
db file scattered read                       240       1.05          6.77
log file switch completion                     6       0.98          3.16
log file switch                                8       0.98          2.47
rdbms ipc reply                                6       0.00          0.00
log buffer space                               3       0.42          0.61
Again, even though we are doing a full table scan, there are "db file sequential reads"?

To confirm, I opened the raw trace file (before tkprof) and checked the obj# on the sequential read wait events: it was the mview (SF_ENV_DATA_MV), and there were many of them. To investigate further I checked whether there were any scattered reads against the mview. There were, but there were also many sequential reads, and Oracle waited longer on those than on the scattered reads that did most of the data fetching.

After giving it some thought, I realized that we had created the mviews without a storage clause, which means Oracle created them with the default storage clause.

So assuming there are 17 blocks in an mview (container table) extent and the multiblock read count is 16, Oracle will use the scattered read mechanism (multiple blocks) for the first 16 blocks and the sequential read mechanism (one block) for the remaining one, so you will find many sequential read wait events sandwiched between scattered reads. To overcome this we recreated the mview with larger extent sizes that are also a multiple of the MBRC (multiblock read count).

Another cause of sequential reads is chained or migrated rows: if your mview (or table) rows are migrated, the pointer to the new row location is kept in the old (original) block, which will always be read by a single I/O call, i.e. a sequential read. You can check the count of chained rows in DBA_TABLES.CHAIN_CNT after analyzing the table. So to overcome this, we created the mview with an appropriate PCTFREE, so that when the merge runs (as part of the mview refresh) and updates a few rows, the rows are not moved to a different block, again avoiding sequential reads.
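A quick check for chained/migrated rows (the owner here is a placeholder; substitute your own schema):

SQL> analyze table scott.sf_env_data_mv compute statistics;

SQL> select chain_cnt from dba_tables
     where owner = 'SCOTT' and table_name = 'SF_ENV_DATA_MV';

A non-zero and growing CHAIN_CNT after refreshes is the symptom to look for.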


To summarize:

–Creating the mview with USING NO INDEX skips the "I_SNAP$" index, which can help fast refresh when data changes are quite frequent and you cannot afford to collect stats every few minutes.
–Create the mview with a storage clause suited to your environment; the default extent sizes may not always be good.
–PCTFREE can be quite handy for avoiding sequential reads and extra block reads.

Reference: Sachin Arora's blog

Getting CPU usage of Oracle sessions

— Show CPU Usage for Active Sessions

SET PAUSE 'Press Return to Continue'

COLUMN username FORMAT A30
COLUMN sid FORMAT 999,999,999
COLUMN serial# FORMAT 999,999,999
COLUMN "cpu usage (seconds)" FORMAT 999,999,999.0000

SELECT s.username, t.sid, s.serial#,
       SUM(t.value/100) AS "cpu usage (seconds)"
FROM   v$session s,
       v$sesstat t,
       v$statname n
WHERE  t.statistic# = n.statistic#
AND    n.name LIKE '%CPU used by this session%'
AND    t.sid = s.sid
AND    s.username IS NOT NULL
GROUP BY s.username, t.sid, s.serial#;

Press Return to Continue

In my case I have only just opened the session, so CPU usage is very low, as you can see:

USERNAME                                SID      SERIAL# cpu usage (seconds)
------------------------------ ------------ ------------ -------------------
SYS                                   8,752       11,975               .0000

ALTER SESSION FORCE PARALLEL QUERY doesn't really force anything

Note the word FORCE in the ALTER SESSION FORCE PARALLEL QUERY syntax. Force should mean that the behavior will always happen (when possible), right? Let's test:


Table created.

SQL> CREATE INDEX i ON t(owner);

Index created.

SQL> @gts t
Gather Table Statistics for table t...

PL/SQL procedure successfully completed.

Now let’s “force” the parallel query in my session, run the query and check the execution plan:


SQL> ALTER SESSION FORCE PARALLEL QUERY PARALLEL 2;

Session altered.

SQL> SELECT SUM(object_id) FROM t WHERE owner LIKE 'S%';


SQL> @x

| Id  | Operation                    | Name | E-Rows |E-Bytes| Cost (%CPU)|
|   0 | SELECT STATEMENT             |      |        |       |   186 (100)|
|   1 |  SORT AGGREGATE              |      |      1 |    12 |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| T    |   6741 | 80892 |   186   (0)|
|*  3 |    INDEX RANGE SCAN          | I    |   6741 |       |    18   (0)|

Predicate Information (identified by operation id):
   3 - access("OWNER" LIKE 'S%')
       filter("OWNER" LIKE 'S%')

The output shows a regular, serial execution plan!

Hmm, let’s increase the “forced” parallelism from 2 to 3 and run exactly the same query again:


SQL> ALTER SESSION FORCE PARALLEL QUERY PARALLEL 3;

Session altered.

SQL> SELECT SUM(object_id) FROM t WHERE owner LIKE 'S%';


SQL> @x

| Id  | Operation              | Name     | E-Rows |E-Bytes| Cost (%CPU)|    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT       |          |        |       |   128 (100)|        |      |            |
|   1 |  SORT AGGREGATE        |          |      1 |    12 |            |        |      |            |
|   2 |   PX COORDINATOR       |          |        |       |            |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |      1 |    12 |            |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |      1 |    12 |            |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   6741 | 80892 |   128   (0)|  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| T        |   6741 | 80892 |   128   (0)|  Q1,00 | PCWP |            |

Predicate Information (identified by operation id):
   6 - access(:Z>=:Z AND :Z<=:Z)
       filter("OWNER" LIKE 'S%')

Now the query will get a parallel plan!

The reason for this behavior is that the FORCE parallel query syntax doesn't really force Oracle to use a parallel plan; it just reduces the optimizer's cost estimates for full table scans (the higher the parallelism, the lower the FTS cost). The optimizer is still free to choose some other, non-parallel execution plan if that has a lower cost than the best parallel one!

So what happened above is that with "forced" parallel degree 2, the parallel full table scan plan must have had a higher cost than the serial index range scan (186); once I increased the parallelism "factor" to 3, the final cost of the parallel full table scan ended up lower (128) than the best serial plan found.

This is a good example showing that both PARALLEL hints and the FORCE PARALLEL session settings really just adjust a narrow set of optimizer cost computation inputs and don't actually fix the resulting execution plan. If you really want to fix an execution plan, you need to tie the optimizer's "hands" in every aspect with a full set of hints, just as stored profiles do. That way, even if there is a lower-cost plan available, the optimizer doesn't know about it, as you've prohibited it from doing any calculations other than what your hints direct it to.

Note that when testing this, your mileage may vary, depending on how much data you have in your test table (or rather in the optimizer stats for that table) plus system stats.