

Before we start tuning GoldenGate…


Large objects (LOBs)

LOBs can be a problem in data replication because of their size and the overhead of extracting, transmitting, and delivering the data from source to target. Tables containing LOB datatypes should be isolated from regular data and replicated through a dedicated Extract, Data Pump, and Replicat process group to enhance throughput.

Also ensure that the target table has a primary key to avoid full table scans (FTS), an Oracle GoldenGate best practice.

LOB INSERT operations can insert an empty (null) LOB into a row before updating it with the data. This is because a LOB (depending on its size) can spread its data across multiple Logical Change Records, resulting in multiple DML operations being required at the target database.
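As a sketch of this isolation, a dedicated Extract for the LOB-bearing tables might look like the following parameter file (the process name, credential alias, trail path, and table names are all hypothetical):

```
-- Dedicated Extract for LOB tables only (all names are hypothetical)
EXTRACT ext_lob
USERIDALIAS gg_admin
EXTTRAIL ./dirdat/lb
-- Isolate the LOB-bearing tables in this process group
TABLE src_schema.documents;
TABLE src_schema.images;
```

A matching Data Pump and Replicat would reference the same trail, keeping LOB traffic out of the process group that handles the regular tables.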

Online redo

Before we start tuning GoldenGate, we must look at both the source and target databases and their ability to read/write data. Data replication is I/O intensive, so fast disks are important, particularly for the online redo logs. Redo logs play an important role in GoldenGate: they are constantly being written to by the database and concurrently being read by the Extract process.

Firstly, ensure that only the necessary amount of supplemental logging is enabled on the database. In the case of GoldenGate, the logging of the Primary Key is all that is required.
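For example, minimal supplemental logging can be enabled at the database level with SQL, and key-column logging per table from GGSCI with ADD TRANDATA (the credential alias, schema, and table names here are hypothetical):

```
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

GGSCI> DBLOGIN USERIDALIAS gg_admin
GGSCI> ADD TRANDATA src_schema.orders
```

ADD TRANDATA enables supplemental logging of the key columns only, which is the minimum GoldenGate needs to identify rows at the target.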

Next, take a look at the database wait events, in particular the ones that relate to redo. For example, if you are seeing “log file sync” waits, this is an indicator that your disk writes are too slow, your application is committing too frequently, or a combination of both.
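One way to check for these waits is to query V$SYSTEM_EVENT, which reports cumulative waits since instance startup (a sketch; adapt it to your own monitoring approach):

```sql
-- Cumulative redo-related wait events since instance startup
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write')
ORDER  BY time_waited_micro DESC;
```

Comparing these figures before and after a change (or against an AWR report for the same interval) shows whether redo I/O is the bottleneck.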

Baselining

Before we can start tuning, we must record our baseline. This will provide a reference point to tune from. We can later look back at our baseline and calculate the percentage improvement made from deploying new configurations.

Ideally, the baseline should find the “breaking point” of your application requirements. For example, the following questions must be answered:

What is the maximum acceptable end-to-end latency?

What are the maximum application transactions per second we must accommodate?

To answer these questions, we must start with a single-threaded data replication configuration comprising just one Extract, one Data Pump, and one Replicat process. This provides a worst-case scenario on which to build improvements.
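A minimal single-threaded configuration of this kind might look as follows (process names, the credential alias, host, port, and trail paths are all hypothetical):

```
-- Extract (ext1.prm): capture from the source schema
EXTRACT ext1
USERIDALIAS gg_admin
EXTTRAIL ./dirdat/aa
TABLE src_schema.*;

-- Data Pump (pmp1.prm): ship the local trail to the target
EXTRACT pmp1
USERIDALIAS gg_admin
RMTHOST target-host, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE src_schema.*;

-- Replicat (rep1.prm): apply everything in a single thread
REPLICAT rep1
USERIDALIAS gg_admin
MAP src_schema.*, TARGET tgt_schema.*;
```

Every later change — splitting tables into parallel groups, tuning parameters — can then be measured against the latency and throughput recorded with this configuration.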

Ideally, our data source should be the application itself, inserting, deleting, and updating “real data” in the source database. However, simulated data with the ability to provide throughput profiles will also allow us to gauge performance accurately.

Considerations for using parallel process groups

To maintain data integrity, ensure that tables with referential constraints between one another are included in the same parallel process group. It is also worth considering disabling referential constraints on the target database schema, which allows child records to be populated before their parents and thus increases throughput. GoldenGate will always commit transactions in the same order as the source, so data integrity is maintained.

Oracle best practice states that no more than three Replicat processes should read the same remote trail file. To avoid contention on trail files, pair each Replicat with its own trail files and Extract process. Also remember that it is easier to tune an Extract process than a Replicat process, so concentrate on your source before moving your focus to the target.
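In GGSCI, pairing each Replicat with its own remote trail might look like this (the pump and Replicat names and trail paths are hypothetical):

```
GGSCI> ADD RMTTRAIL ./dirdat/ra, EXTRACT pmp_a
GGSCI> ADD REPLICAT rep_a, EXTTRAIL ./dirdat/ra
GGSCI> ADD RMTTRAIL ./dirdat/rb, EXTRACT pmp_b
GGSCI> ADD REPLICAT rep_b, EXTTRAIL ./dirdat/rb
```

Each Replicat then reads a trail that only its own Data Pump writes to, avoiding contention between readers.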

Splitting large tables into row ranges across process groups

What if you have some large tables with a high data change rate within a source schema and you cannot logically separate them from the remaining tables due to referential constraints? GoldenGate provides a solution to this problem by “splitting” the data within the same schema via the @RANGE function. The @RANGE function can be used in the Data Pump and Replicat configuration to “split” the transaction data across a number of parallel processes.
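For example, a large table could be split across three Replicat processes, each applying one range of rows. @RANGE hashes the named column (or the table's key, if no column is given) to assign each row to exactly one range. The table and column names below are hypothetical:

```
-- rep1.prm: apply range 1 of 3
MAP src_schema.big_table, TARGET tgt_schema.big_table, FILTER (@RANGE (1, 3, id));

-- rep2.prm: apply range 2 of 3
MAP src_schema.big_table, TARGET tgt_schema.big_table, FILTER (@RANGE (2, 3, id));

-- rep3.prm: apply range 3 of 3
MAP src_schema.big_table, TARGET tgt_schema.big_table, FILTER (@RANGE (3, 3, id));
```

The same function can be used in TABLE statements of parallel Data Pump parameter files to split the load before it reaches the target.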

The Replicat process is typically the source of performance bottlenecks because, in its normal mode of operation, it is a single-threaded process that applies operations one at a time using regular DML. Therefore, to leverage parallel operation and enhance throughput, the more Replicats the better (dependent on the number of CPUs and the memory available on the target system).

Hope it worked for you!! 🙂