
Comparison Between Oracle Streams and GoldenGate

Staging

Streams: Streams may store or stage changes in a database buffered queue. The benefit is that changes are staged in memory, which gives fast access times.

GoldenGate: When processing a one-time activity such as an initial load or a batch run, GoldenGate stages extracted changes in an extract file. During normal operation, GoldenGate stages changes in an extract trail. The difference is that no checkpoint information is maintained for an extract file, whereas it is for a trail. The OGG trail consists of multiple, unstructured files characterized by filenames with a common two-character prefix and an ascending six-digit suffix. Thus, we refer to trail 'ab', which is composed of trail files 'ab000000', 'ab000001', 'ab000002', etc.
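
As a minimal sketch (the Extract group name 'ext1' and the trail location './dirdat/ab' are assumed values, not prescriptions), a local trail with a two-character prefix would be created from GGSCI along these lines:

    GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
    GGSCI> ADD EXTTRAIL ./dirdat/ab, EXTRACT ext1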
Memory Allocation

Both Streams and GoldenGate work more effectively if transactions are small and short-lived.

Streams: Streams requires memory to be allocated on the source and destination databases to cache changes. Staging these changes involves caching transaction information in the Streams pool, which is located within the SGA. Users therefore need to be aware of the Streams pool related parameter STREAMS_POOL_SIZE. As a starting measure, it is useful to preallocate at least 200MB; this is recommended for both the Capture and Apply side databases. As well as holding staged changes in the related buffered queue, the Streams pool also accommodates LogMiner's work area for a Capture process. If spilling is required, changes are written to a database table only at the Apply side.
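
As a sketch of that 200MB starting point (assuming an spfile is in use), the pool could be preallocated with:

    ALTER SYSTEM SET streams_pool_size = 200M SCOPE = BOTH;

Note that when Automatic Shared Memory Management is in use (SGA_TARGET set), this value acts as a minimum for the Streams pool rather than a fixed size.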

GoldenGate: Extract caches transaction information in operating system memory. For an Oracle database, caching is managed via the CACHEMGR parameter, which is relevant to the source Extract process.
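
For illustration only (the 2GB cache size and the ./dirtmp directory are assumed values, not recommendations), the cache could be tuned in the source Extract parameter file along these lines:

    -- cap the memory used for cached transaction data and
    -- name the directory used if data has to be paged to disk
    CACHEMGR CACHESIZE 2GB, CACHEDIRECTORY ./dirtmp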
Large Transactions

Streams: Large transactions are also referenced under 'Transaction Spill' below. These are managed at the Apply side and written to a partitioned table. Large transactions are written out to this Apply spill table once the LCR count reaches the Apply parameter TXN_LCR_SPILL_THRESHOLD. Streams Capture also reports, in the alert log file, the transaction id of a large transaction as it is being mined.
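
As a hedged sketch (the Apply name 'APPLY1' and the threshold of 10000 LCRs are illustrative values), the threshold is set with DBMS_APPLY_ADM:

    BEGIN
      -- spill a transaction to the Apply spill table once it exceeds 10000 LCRs
      DBMS_APPLY_ADM.SET_PARAMETER(
        apply_name => 'APPLY1',
        parameter  => 'TXN_LCR_SPILL_THRESHOLD',
        value      => '10000');
    END;
    /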

GoldenGate: Large transactions may be flushed from the Extract cache to disk to avoid affecting smaller transactions in the cache. The size of the cache is controlled by the CACHESIZE option of the CACHEMGR parameter. Unlike Streams, there is no transaction size (number of rows affected) at which GoldenGate writes large transactions out to disk.
Long Running Transactions (LRTs)

Streams: Streams captures the committed and uncommitted changes it is interested in. All transaction changes, committed or uncommitted, are sent to the Apply process. The Apply process will then write to the Apply spill table any transaction which has been open for a duration exceeding the Apply parameter TXN_AGE_SPILL_THRESHOLD (in seconds). Streams Capture also reports, in the alert log file, the transaction id of a long running transaction as it is being mined.
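
A similar sketch for the age-based threshold (again assuming the Apply name 'APPLY1'; 900 seconds is an illustrative value), followed by a look at what has actually been spilled:

    BEGIN
      -- spill transactions that have been open for more than 900 seconds
      DBMS_APPLY_ADM.SET_PARAMETER(
        apply_name => 'APPLY1',
        parameter  => 'TXN_AGE_SPILL_THRESHOLD',
        value      => '900');
    END;
    /

    -- transactions currently spilled by the Apply process
    SELECT * FROM dba_apply_spill_txn;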

GoldenGate: Transaction activity is cached. Until Extract processes a commit or rollback, the transaction is considered open and its information continues to be collected in memory. The management of these open long running transactions is handled within the Bounded Recovery mechanism where, as with Streams, long running transactions will be written to disk. GoldenGate writes transactions to the trail in commit order. This means that transactions may appear 'out of order' when viewed from the perspective of when they were started.

GoldenGate Extract will also warn users that long running transactions are open if there is an attempt to shut down Extract; Streams does not do this. The warning also provides checkpoint information showing how far back in the Oracle redo the Extract process would have to go if it is forced to shut down at this point, since a bounded recovery checkpoint may not have taken place recently. The user is therefore able to make an informed choice as to whether it is reasonable to perform a forced shutdown of the Extract process. With Bounded Recovery in place, even with long running transactions open, Extract should not need to go too far back in the redo logs on restart. Refer to 'Bounded Recovery' below.
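
Two related sketches (the Extract group name 'ext1', the interval and the directory are assumptions): the open transactions can be listed from GGSCI, and the Bounded Recovery behaviour is set in the Extract parameter file.

    GGSCI> SEND EXTRACT ext1, SHOWTRANS

    -- Extract parameter file: write bounded recovery checkpoints
    -- to ./dirbr roughly every 4 hours
    BR BRDIR ./dirbr, BRINTERVAL 4H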
Log Mining

LogMiner might be used in a replication environment to verify the original transaction changes in the Oracle redo logs in the event of any uncertainty. In this situation, the relevant redo logs containing the transaction would be added to an ad-hoc LogMiner session and v$logmnr_contents inspected to determine the details.
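
A minimal ad-hoc LogMiner sketch (the archived log name and the transaction id components are placeholder values):

    BEGIN
      -- add the archived log believed to contain the transaction
      DBMS_LOGMNR.ADD_LOGFILE(
        logfilename => '/arch/1_123_456.arc',
        options     => DBMS_LOGMNR.NEW);
      -- mine using the online catalog as the dictionary source
      DBMS_LOGMNR.START_LOGMNR(
        options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- inspect the changes for the transaction of interest
    SELECT scn, operation, sql_redo
    FROM   v$logmnr_contents
    WHERE  xidusn = 10 AND xidslt = 28 AND xidsqn = 8664;

    EXEC DBMS_LOGMNR.END_LOGMNR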

Streams: Streams uses LogMiner to retrieve changes from the Oracle redo logs. The Streams Capture process is a client of LogMiner.

GoldenGate: GoldenGate needs to interpret the content of the redo log files directly. With GoldenGate version 11.2.1, however, there is also the option to use integrated Capture, which uses LogMiner. Integrated Capture (as opposed to classic Capture - refer to the Oracle Installation and Setup Guide for 11.2.1) supports more data types and storage types, and makes it easier to work with RAC and ASM; other benefits are also provided with integrated Capture.
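
For illustration (the group name and credentials are assumptions), an integrated Capture Extract is first registered with the database and then created with the INTEGRATED keyword:

    GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin_pwd
    GGSCI> REGISTER EXTRACT ext2 DATABASE
    GGSCI> ADD EXTRACT ext2, INTEGRATED TRANLOG, BEGIN NOW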
Supplemental Logging

Both Streams and GoldenGate require supplemental changes to be logged in Oracle's redo stream.

The purpose of supplemental logging relates to the following:

- detail needs to be written to the redo stream for a key column (primary key, unique key or substitute key), which will be passed across to the remote database to allow the equivalent row on the destination table to be accessed by index access rather than a full table scan (FTS);

- detail also needs to be logged for those columns which will be used in conflict resolution; these may be columns which were not specified in the operation.


Streams: Supplemental logging is added at the point the prepare command is issued. If additional supplemental logging is required, this needs to be added explicitly with the relevant 'ADD SUPPLEMENTAL LOG' command.
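
A hedged sketch (the HR.EMPLOYEES table and the conflict-resolution columns are illustrative): preparing the table adds key-column supplemental logging, and an explicit log group covers the extra columns.

    BEGIN
      -- 'keys' logs primary key, unique key, foreign key and bitmap index columns
      DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
        table_name           => 'HR.EMPLOYEES',
        supplemental_logging => 'keys');
    END;
    /

    -- explicitly log additional columns used in conflict resolution
    ALTER TABLE hr.employees
      ADD SUPPLEMENTAL LOG GROUP emp_conflict_cols (salary, last_name) ALWAYS;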

GoldenGate: GoldenGate supplemental logging is added with the ADD TRANDATA command. Where DDL is handled, DDLOPTIONS ADDTRANDATA would also need to be used as an Extract parameter. Columns specified will be unconditionally logged in the redo stream.
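
For illustration (the schema, table, column names and credentials are assumptions), table-level logging is enabled from GGSCI, with the DDL option placed in the Extract parameter file:

    GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin_pwd
    GGSCI> ADD TRANDATA hr.employees, COLS (salary, last_name)

    -- Extract parameter file: issue ADD TRANDATA automatically
    -- for new tables created through DDL captured by Extract
    DDLOPTIONS ADDTRANDATA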