

Latching in Oracle

System Locks

Oracle Database uses various types of system locks to protect internal database and memory structures. These mechanisms are not directly accessible to users, who have no control over their occurrence or duration.

Latches

A latch is a simple, low-level serialization mechanism that coordinates multiuser access to shared data structures, objects, and files.

Latches protect shared memory resources from corruption when accessed by multiple processes. Specifically, latches protect data structures from the following situations:

• Concurrent modification by multiple sessions
• Being read by one session while being modified by another session
• Deallocation (aging out) of memory while being accessed

Latches are a type of locking device that is acquired and released very quickly. They prevent multiple processes from executing the same critical piece of code in the database at the same time, providing a serialization mechanism that protects the integrity of shared memory structures in an Oracle database environment.
Latches provide a low-level serialization mechanism protecting shared data structures in the SGA. A latch is a type of lock that can be acquired and freed very quickly. Latches are typically used to prevent more than one process from executing the same piece of code at a given time.

When a latch is acquired, it is acquired at a particular level in order to prevent deadlocks. Once a process acquires a latch at a certain level, it cannot subsequently acquire a latch at a level that is equal to or less than that level (unless it acquires it in no-wait mode).

Associated with each latch is a cleanup procedure that is called if a process dies while holding the latch; this cleanup is performed by PMON. The underlying implementation of latches is operating-system dependent, particularly in regard to whether a process will wait for a latch and for how long.
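
As a quick illustration of latch levels, V$LATCH exposes the level at which each latch is acquired in its LEVEL# column. A minimal query sketch (the exact set of latches and their levels varies by Oracle version):

SELECT name, level#, gets, misses
  FROM v$latch
 ORDER BY level#, name;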


The goal of latches is to manage concurrent access to shared data structures so that only one process can access a structure at a time. Blocked processes (processes waiting for a latch that has already been obtained by another process) wait until the latch is released. Oracle uses atomic instructions such as “test and set” to operate on latches. Because setting and freeing a latch is atomic, only one process can get it, and because it is a single instruction, acquiring a latch is very fast.
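
The effect of this fast path is visible in V$LATCH: GETS counts requests, MISSES counts requests whose first atomic attempt failed, SPIN_GETS counts misses resolved while spinning, and SLEEPS counts misses that had to sleep. A rough sketch for spotting latches whose fast path frequently fails (interpreting the ratios is workload dependent):

SELECT name, gets, misses, spin_gets, sleeps
  FROM v$latch
 WHERE gets > 0
 ORDER BY misses DESC;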

Latch requests can be made in two modes:

willing-to-wait: A willing-to-wait request will loop, wait, and request again until the latch is obtained.

no-wait: In no-wait mode, the process requests the latch and, if it is not available, does not wait; instead it requests another latch (typically another child latch of the same set). Only when all of these requests fail does the server process wait.
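
V$LATCH keeps separate counters for the two request modes: GETS and MISSES for willing-to-wait requests, and IMMEDIATE_GETS and IMMEDIATE_MISSES for no-wait requests. A minimal sketch to compare the two:

SELECT name,
       gets, misses,                        -- willing-to-wait requests
       immediate_gets, immediate_misses     -- no-wait requests
  FROM v$latch
 ORDER BY misses + immediate_misses DESC;
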
Let’s look at the differences between a lock and a latch. Latches differ from locks in a number of ways:

1. Locks are kept in the lock table and located via hash tables; latches reside in memory near the resources they protect, and are accessed via direct addressing.

2. Lock acquisition is entirely driven by data access, and hence the order and lifetime of lock acquisitions is largely in the hands of applications and the query optimizer. Latches are acquired by specialized code inside the DBMS, and the DBMS internal code issues latch requests and releases strategically.

3. Locks are allowed to produce deadlock, and lock deadlocks are detected and resolved via transactional restart. Latch deadlock must be avoided; the occurrence of a latch deadlock represents a bug in the DBMS code.

4. Latches are implemented using an atomic hardware instruction or, in rare cases where this is not available, via mutual exclusion in the OS kernel.

5. Latch calls take at most a few dozen CPU cycles, whereas lock requests take hundreds of CPU cycles.

When does latching occur?

Oracle sessions need to update or read from the SGA for almost all database operations. For instance:

1) When a session reads a block from disk, it must modify a free block in the buffer cache and adjust the buffer cache LRU (Least Recently Used) chain.

2) When a session reads a block from the SGA, it will modify the LRU chain.

3) When a new SQL statement is parsed, it will be added to the library cache within the SGA.

4) As modifications are made to blocks, entries are placed in the redo buffer.

5) The database writer periodically writes buffers from the cache to disk (and must update their status from “dirty” to “clean”).
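
Each of the operations above is protected by one or more latches. As an illustration, the following query lists a few well-known ones by name (latch names vary between Oracle versions, and some structures, such as the library cache, are protected by mutexes in recent releases):

SELECT latch#, name
  FROM v$latchname
 WHERE name IN ('cache buffers chains',
                'cache buffers lru chain',
                'redo allocation',
                'redo copy',
                'shared pool');
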
Latch and Concurrency:

An increase in latching means a decrease in concurrency. For example, excessive hard parse operations create contention for the library cache latch. Latches are a type of lightweight lock; locks are serialization devices, and serialization devices inhibit concurrency. To build applications that have the potential to scale, ones that can service 1 user as well as 1,000 or 10,000 users, the less latching we incur in our approach, the better off we will be.
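
To gauge how much hard parsing an application generates (and therefore how much library cache and shared pool latching it causes), the parse statistics in V$SYSSTAT can be compared, for example:

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');
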
Data Dictionary:

The V$LATCH view contains detailed latch usage statistics for each latch, including the number of times each latch was requested and waited for.
To find the problem latches since database startup, run the following query:

SELECT n.name, l.sleeps
  FROM v$latch l, v$latchname n
 WHERE n.latch# = l.latch#
   AND l.sleeps > 0
 ORDER BY l.sleeps DESC;

To see latches that are currently a problem on the database run:

SELECT n.name, SUM(w.p3) Sleeps
  FROM V$SESSION_WAIT w, V$LATCHNAME n
 WHERE w.event = 'latch free'
   AND w.p2 = n.latch#
 GROUP BY n.name;
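
To see which sessions are currently holding a latch (the other side of the picture), V$LATCHHOLDER can be joined to V$SESSION, for example:

SELECT h.name, h.sid, s.username, s.program
  FROM v$latchholder h, v$session s
 WHERE h.sid = s.sid;

Note that on recent Oracle versions many latch waits are reported under individual "latch: <name>" wait events rather than "latch free", so the V$SESSION_WAIT query above may need to be adjusted accordingly.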