RAC TEMP tablespace caching tips

Space allocation in TEMP tablespace

TEMP tablespaces are divided into extents (in 11.2, the extent size is 1MB by default). These extent maps are cached in the local SGA, essentially soft reserving those extents for the sessions connecting to that instance. Note, however, that extents in a temporary tablespace are not cached at instance startup; instead, an instance caches extents as the need arises. We will explore this with a small example:
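For reference, the extent size is fixed when the tablespace is created: temporary tablespaces are locally managed with uniform extent allocation, and 1M is the default uniform size. A minimal sketch (the tablespace name and tempfile specification here are illustrative, not from the test database):

```sql
-- Sketch: temporary tablespaces use locally managed, uniform extent
-- allocation; the extent size is set once, via the UNIFORM SIZE clause.
-- Name and tempfile location below are illustrative.
CREATE TEMPORARY TABLESPACE temp_demo
  TEMPFILE '+DATA' SIZE 300M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```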

This database has two instances and a TEMP tablespace. TEMP tablespace has two temp files, 300M each.

Listing 1-1: dba_temp_files

select file_name, bytes/1024/1024 sz_in_mb from dba_temp_files;

FILE_NAME                                                      SZ_IN_MB
------------------------------------------------------------ ----------
+DATA/solrac/tempfile/temp.266.731449235                            300
+DATA/solrac/tempfile/temp.448.775136163                            300

Initially, no extents were cached and no extents were in use, as shown in the output of the gv$temp_extent_pool view in Listing 1-2.

Listing 1-2: Initial view of temp extents

select inst_id, file_id, extents_cached, extents_used from gv$temp_extent_pool order by 1,2;

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1              0            0
         2          2              0            0

We are ready to start a test case.

Listing 1-3: Script in execution

select inst_id, file_id, extents_cached, extents_used from gv$temp_extent_pool order by 1,2;
   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1             22           22
         2          2             23           23
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1            108          108
         2          2            111          111

I started a small SQL script that joins multiple tables with a hash join, so as to induce disk-based sorting. After starting the script in instance 2, you can see that extents are cached and used in instance 2, as shown in Listing 1-3. Initially, 45 extents were in use; a few seconds later, temp tablespace usage had grown to 219 extents.

Listing 1-4: script completion

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1              0            0
         1          2              0            0
         2          1            163            0
         2          2            166            0

After the completion of the script, as shown in Listing 1-4, the extents_used column drops to 0, but extents_cached remains at the maximum usage level (329 extents). That is, extents stay cached (soft reserved) in the instance and are not released until another instance asks for them, as we will see later.

You should also note that the extents are spread evenly between the two files of the temporary tablespace. If the temporary tablespace has more files, extents will be allocated uniformly across all of those temp files.
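The same caching behavior can also be summarized at the tablespace level. A sketch, using the gv$sort_segment view (which aggregates, per instance, the per-file counts that gv$temp_extent_pool reports):

```sql
-- Sketch: per-instance summary of total (cached), used and free
-- extents in the sort segment of the temporary tablespace.
select inst_id, tablespace_name,
       total_extents, used_extents, free_extents
from   gv$sort_segment
order  by inst_id;
```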

Space reallocation

Even if the cached extents are free, they are not immediately available for use in other instances. An instance must request the owning instance to uncache the extents; only then are those extents available to the requesting instance. We will demonstrate this concept with the same test case, except that this time we execute it in instance 1.

Listing 1-5: script in instance #1 execution

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1             42           42
         1          2             42           42
         2          1            163            0
         2          2            166            0

At the start of the SQL execution, instance 1 started to reserve extents by caching them. My session was using those extents, as visible in gv$temp_extent_pool, and the number of extents used by instance 1 was slowly growing. See Listing 1-5.

Listing 1-6: instance #1 stole the extents from instance #2

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1            195           71
         1          2            133          116
         2          1             63            0 <-- note here
         2          2            166            0

It gets interesting. Notice that in Listing 1-5, instance 2 still had 329 extents reserved. Since my SQL script needs 329MB of space in the temp tablespace, instance 1 has to steal space from instance 2.

In Listing 1-6, instance 1 needed more extents, so instance 2 uncached 100 of them: the extents_cached column went down from 163 to 63 (third row in the output above). Essentially, instance 1 requested instance 2 to uncache the extents, and instance 2 obliged, uncaching 100 extents. Prior to 11g, extents were uncached one extent per request. From 11g onwards, 100 extents are released for a single request, and all 100 extents are acquired by the requesting instance. Instance 1 acquired those 100 extents, cached them, and the session continued to use those temp extents.

Listing 1-7: script completion and node #1 has more extents cached.

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1            195            0
         1          2            133            0
         2          1             63            0
         2          2            166            0

After the completion of the script execution, instance 1 did not release the extents. Cached extents are not released (they remain soft reserved) until another instance asks for them to be uncached.

I also enabled SQL trace in my session on instance 1 while executing the script. The SQL trace file spells out the details of uncaching these extents.
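For reference, a common way to capture these wait events in the trace file is event 10046 at level 8; a sketch:

```sql
-- Sketch: enable SQL trace with wait events for the current session,
-- run the temp-consuming SQL, then turn tracing off.
alter session set events '10046 trace name context forever, level 8';
-- ... execute the SQL script that spills to TEMP here ...
alter session set events '10046 trace name context off';
```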

Listing 1-8: SQL Trace
#1: nam='enq: TS - contention' ela= 4172867 name|mode=1414725636 tablespace ID=3 dba=2 obj#=0 tim=6322835898
#2: nam='enq: SS - contention' ela= 608 name|mode=1397948417 tablespace #=3 dba=2 obj#=0 tim=6322837101
#3: nam='enq: SS - contention' ela= 414 name|mode=1397948422 tablespace #=3 dba=2 obj#=0 tim=6322837710
#4: nam='DFS lock handle' ela= 389 type|mode=1128857605 id1=14 id2=1 obj#=0 tim=6322838264
#5: nam='DFS lock handle' ela= 395 type|mode=1128857605 id1=14 id2=3 obj#=0 tim=6322838788
#6: nam='DFS lock handle' ela= 260414 type|mode=1128857605 id1=14 id2=2 obj#=0 tim=6323099335

Line #1 above shows a tablespace-level lock (TS enqueue) taken on the TEMP tablespace (ID=3 matches the ts# column in sys.ts$). Then SS locks were acquired on that tablespace, first with mode=1 and then with mode=6 (lines #2 and #3). In lines #4 through #6, a Cross Instance Call (CIC) was used to ask the remote SMON process to unreserve the cached extents, using CI-type locks and the DFS lock handle mechanism with lock types CI-14-1, CI-14-2, and CI-14-3.

Listing 1-9: Enqueue type

select chr(bitand(&&p1,-16777216)/16777215) || chr(bitand(&&p1,16711680)/65535) type,
mod(&&p1, 16) md from dual;
Enter value for p1: 1397948422

TY         MD
-- ----------
SS          6

Enter value for p1: 1128857605
TY         MD
-- ----------
CI          5

From Listing 1-8, approximately 4.5 seconds were spent moving the cached extents from one instance to the other. Prior to 11g, this test case would have run much longer, since extents were uncached one extent per request. Hundreds of such requests would trigger a tsunami of SS and CI enqueue requests, leading to massive application performance issues. In 11g, Oracle Development resolved this issue by uncaching 100 extents per request.
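If you suspect this overhead in your own environment, the session-level wait interface shows it without tracing. A sketch:

```sql
-- Sketch: cumulative waits, for the current session, on the events
-- involved in cross-instance temp extent movement.
select event, total_waits, time_waited_micro
from   v$session_event
where  sid = sys_context('userenv', 'sid')
and    event in ('enq: TS - contention',
                 'enq: SS - contention',
                 'DFS lock handle');
```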


In Real Application Clusters (RAC), a common TEMP tablespace, typically on shared ASM storage, is shared between the nodes. You can see this activity via the gv$temp_extent_pool view, which shows how temporary tablespace extents are cached. Instances try to cache extents equally from all files of the TEMP tablespace.

In RAC, you should create your TEMP tablespace with one tempfile for each RAC node, to facilitate caching of the TEMP extents.

In RAC, the TEMP tablespace is divided into one megabyte extents, and these extent maps are cached in local SGA on each RAC node. This serves to reserve TEMP extents for the use of sessions that are connecting to each RAC node.

Because TEMP tablespace is shared on a RAC cluster, it is possible for a single sort or hash join transaction to consume an inordinate amount of TEMP on the entire cluster, leading to errors.
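A runaway consumer can be spotted before it fails with ORA-01652. A sketch (the 8192 multiplier assumes an 8K block size):

```sql
-- Sketch: sessions currently holding TEMP space, largest first.
-- Assumes an 8K block size for the MB conversion.
select s.inst_id, s.sid, s.username, u.tablespace,
       round(u.blocks * 8192 / 1024 / 1024) mb_used
from   gv$sort_usage u
join   gv$session s
  on   s.saddr   = u.session_addr
 and   s.inst_id = u.inst_id
order  by mb_used desc;
```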

However, extents in the TEMP tablespace are not cached at instance startup, and instead each node caches extents as the need arises.

As RAC nodes allocate TEMP extents, they soft reserve the space in their local SGA. When a process runs out of space in the TEMP tablespace, it requests the other instances in the RAC cluster to free up their unused TEMP space.

This is done through a Cross Instance Call (CIC). The CIC is protected by the SS enqueue, which is taken on the tablespace (P1) and instance (P2). The request for free space is a two-step process:

1: Taking an SS enqueue. This places an enqueue on the tablespace and RAC node, so that any other process trying to steal space from this TEMP tablespace will block on it.

2: Cross Instance Call. The space-release CIC serializes on the CI enqueue type. Under the control of the cross instance call, the SMON background processes on the RAC nodes release any free TEMP space and acknowledge the completion of the CIC. After all of the SMONs send an acknowledgment, the CIC completes processing.

Steve Adams notes this about gv$temp_extent_pool:

“The sort extent pool Oracle metric is an instance wide SGA data structure used to direct the extents of sort segments within temporary contents tablespaces. . .

The allocation and freeing of extents within sort segments and queries against V$SORT_SEGMENT and V$TEMP_EXTENT_POOL are performed directly under the protection of the sort extent pool latch.”

Adams also notes how the extent pool works during an Oracle sort:

“During a disk sort, sessions track the use of space within their sort segment extents for particular sort runs using a sort space map.

The sort space map is a small part of the sort area. The major parts of the sort area are the sort/read buffers and the write buffers.

Sort areas are part of the cursor work heap for a cursor. The cursor work heap is part of the runtime memory of the private SQL area for a cursor.

For DML statements that complete in a single call, the private SQL area is allocated in the CGA which is always a sub-heap of the PGA, even for calls from multi-threaded sessions.

For SELECT statements, the private SQL area must be located in the UGA because it has to remain accessible to subsequent fetch calls.

For dedicated connections the UGA resides in the PGA. For multi-threaded connections the fixed UGA resides in the shared pool, and the variable part of the UGA (including private SQL areas) resides in the large pool if it exists, and otherwise in the shared pool. ”
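To see where your own session's sort memory sits, the UGA and PGA statistics can be checked directly. A sketch:

```sql
-- Sketch: current and high-water UGA/PGA memory for this session.
select n.name, s.value
from   v$mystat   s
join   v$statname n on n.statistic# = s.statistic#
where  n.name in ('session uga memory',     'session pga memory',
                  'session uga memory max', 'session pga memory max');
```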

References: https://orainternals.wordpress.com

