Oracle RAC Startup Architecture

When troubleshooting an Oracle Real Application Clusters (RAC) environment, understanding the startup sequence of cluster components is essential. Many issues during node boot, cluster startup, or database availability can be traced back to one of these services failing or starting out of order.

In an Oracle RAC environment, cluster services start in a strict sequence, where each component depends on the previous one. Knowing this order helps in quickly identifying where the failure occurs and which log files to analyze.

Oracle RAC Startup Flow

            Oracle RAC Startup Sequence
            ---------------------------

                    Node Boot
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 1. OHASD                            │
    │ Oracle High Availability Services   │
    │ Initializes Oracle Clusterware      │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 2. CSSD                             │
    │ Cluster Synchronization Services    │
    │ • Voting Disk Management            │
    │ • Node Heartbeat                    │
    │ • Cluster Membership                │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 3. CRSD                             │
    │ Cluster Ready Services              │
    │ Manages:                            │
    │ • ASM                               │
    │ • Listeners                         │
    │ • VIPs                              │
    │ • Databases                         │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 4. EVMD                             │
    │ Event Manager Daemon                │
    │ Handles Cluster Events              │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 5. ASM Instance                     │
    │ Storage Management Layer            │
    │ Diskgroups become available         │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 6. Listeners                        │
    │ • Local Listener                    │
    │ • SCAN Listener                     │
    └─────────────────────────────────────┘
                        │
                        ▼
    ┌─────────────────────────────────────┐
    │ 7. RAC Database Instances           │
    │ Database becomes available          │
    └─────────────────────────────────────┘
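On a running cluster, the health of this whole stack can be checked from any node with crsctl (a quick sketch, assuming the Grid Infrastructure bin directory is on the PATH):

```shell
# Check the local Clusterware stack (CSS, CRS, EVM on this node)
crsctl check crs

# Check the stack on every node in the cluster
crsctl check cluster -all

# Tabular view of all managed resources (ASM, listeners, VIPs, databases)
crsctl stat res -t
```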

Explanations

Let's walk through a simplified explanation of the Oracle RAC startup flow for a two-node cluster.

1. OHASD – Oracle High Availability Services

The first component that starts in the cluster stack is OHASD.

OHASD is responsible for initializing the Oracle Clusterware stack on the node. It ensures that all required cluster components are launched in the correct order.

Since this service is the entry point of the cluster framework, if OHASD fails, the rest of the cluster services will not start.

LOG LOCATION:
-----------------------

$GRID_HOME/log/<node>/ohasd/ohasd.log
$GRID_BASE/diag/crs/<node>/crs/trace/ohasd.trc
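If the stack does not come up at all, a first check is whether OHASD itself is running. A minimal sketch (the systemd unit name applies to standard Grid Infrastructure installs on systemd-based Linux):

```shell
# Verify the High Availability Services stack on the local node
crsctl check has

# Confirm the ohasd.bin process is alive at the OS level
ps -ef | grep '[o]hasd.bin'

# On systemd-based Linux, ohasd is launched via the oracle-ohasd service
systemctl status oracle-ohasd
```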

2. CSSD – Cluster Synchronization Services

After OHASD starts successfully, the CSSD daemon is launched.

CSSD is a critical component responsible for maintaining cluster health and communication between nodes. Its main responsibilities include:

  • Managing voting disks
  • Maintaining cluster membership
  • Monitoring node heartbeat

If CSSD detects that a node is not responding, it can trigger node eviction to maintain cluster consistency.

LOG LOCATION:
-----------------------

$GRID_HOME/log/<node>/cssd/ocssd.log
$GRID_BASE/diag/crs/<node>/crs/trace/ocssd.trc
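CSSD health, voting-disk state, and cluster membership can be inspected with the crsctl and olsnodes utilities; for example:

```shell
# Check Cluster Synchronization Services on the local node
crsctl check css

# Show where the voting files are stored and whether they are online
crsctl query css votedisk

# Current cluster membership: node names, numbers, and status
olsnodes -n -s
```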

3. CRSD – Cluster Ready Services

Once cluster membership is successfully established by CSSD, CRSD starts.

CRSD is responsible for managing cluster resources across nodes. These resources include:

  • ASM instances
  • Database instances
  • Listeners
  • Virtual IPs (VIPs)
  • SCAN services

CRSD ensures that these resources are started, stopped, and relocated based on cluster policies.

LOG LOCATION:
-----------------------

$GRID_HOME/log/<node>/crsd/crsd.log
$GRID_BASE/diag/crs/<node>/crs/trace/crsd.trc
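The resources CRSD manages are visible through crsctl. A typical check (the VIP resource name `ora.node1.vip` is only an example; actual names vary per cluster):

```shell
# Check Cluster Ready Services on the local node
crsctl check crs

# Tabular view of all CRSD-managed resources and their target/current state
crsctl stat res -t

# Detailed attributes of a single resource, e.g. a node VIP
crsctl stat res ora.node1.vip -p
```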

4. EVMD – Event Manager Daemon

The Event Manager (EVMD) handles cluster event notifications.

It processes events generated by cluster components and helps propagate those events to other services or management tools. This allows Oracle Clusterware to react appropriately to status changes within the cluster.

LOG LOCATION:
-----------------------

$GRID_HOME/log/<node>/evmd/evmd.log
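EVMD belongs to the lower Clusterware stack, so its resource state is listed with the -init flag; published events can also be watched live with the evmwatch utility:

```shell
# Show the state of the lower-stack daemons, including ora.evmd
crsctl stat res -t -init

# Subscribe to cluster events as they are published (Ctrl-C to stop)
evmwatch -A -t "@timestamp @@"
```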

5. ASM Instance Startup

After cluster services are fully operational, the Automatic Storage Management (ASM) instance starts.

ASM is responsible for managing Oracle storage, including:

  • Disk groups
  • File allocation
  • Storage redundancy

Once ASM is available, database files stored in ASM disk groups become accessible.

LOG LOCATION:
-----------------------

$GRID_BASE/diag/asm/+asm/+ASM<n>/trace/alert_+ASM<n>.log
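ASM availability and disk-group state can be confirmed with srvctl and asmcmd:

```shell
# Is the ASM resource running, and on which nodes?
srvctl status asm

# List disk groups with state, redundancy type, and free space
asmcmd lsdg
```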

6. Listener Startup

Next, Oracle listeners are started.

The startup sequence usually follows this order:

  • Local listener
  • SCAN listener

The SCAN listener enables clients to connect to the cluster without needing to know specific node details.

LOG LOCATION:
-----------------------
Local Listener:
***************
$GRID_BASE/diag/tnslsnr/<node>/listener/alert/log.xml

SCAN Listener:
***************
$GRID_BASE/diag/tnslsnr/<node>/listener_scan*/alert/log.xml
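Listener state for both the local and SCAN listeners can be checked with srvctl and lsnrctl; for example:

```shell
# Status of the local (node) listener resource
srvctl status listener

# Status of the SCAN listeners and the nodes they currently run on
srvctl status scan_listener

# SCAN name and port configuration
srvctl config scan

# Services registered with the local listener
lsnrctl status
```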

7. Database Instance Startup

Finally, the database instances are started on the nodes.

Each node in the RAC cluster runs its own database instance, but all instances access the same shared database storage.

At this stage, the cluster becomes fully operational and ready to handle client connections.

LOG LOCATION:
-----------------------

$ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace/alert_<instance>.log
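Instance status across the cluster is easiest to check with srvctl (substitute your own database name for `orcl`, which is used here only as an example):

```shell
# Which instances of the database are running, and on which nodes
srvctl status database -d orcl

# Start or stop all instances of the database cluster-wide
srvctl start database -d orcl
srvctl stop database -d orcl -o immediate
```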
