GoldenGate Multi-Master with Integrated Replicat and Apply

In this article we are going to discuss the simplest bi-directional GoldenGate configuration that we could build.

The environment is as follows:

Two Linux VMs running Oracle Enterprise Linux, x86_64.  One server is built in VirtualBox, uses IP address 10.12.1.8, and has hostname gg121too.  The other is built in VMware Workstation, uses IP address 10.12.1.28, and has hostname gg183too.

This sort of simple bi-directional setup is what might be built in preparation for an upgrade to a more recent Oracle version. In our case we are replicating from Oracle 12.1.0.2 to Oracle 18.3.0 and back.

Use Case:

Prior to the upgrade, all activity is on the 12.1.0.2 database, and all changes are replicated to the Oracle 18.3.0 instance.  In the near future, we plan to upgrade to 18.3.0 by redirecting our application to the 18.3.0 instance.  Since GoldenGate is set up as multi-master, after switching the apps to the 18.3.0 instance, all updates will be replicated back to the 12.1.0.2 instance.  Once everyone is content that the 18.3 instance is running properly without problems, we will remove the GoldenGate replication and decommission the 12.1 database.

Install Oracle GoldenGate on each server. In our case, we chose the GoldenGate version appropriate to the Oracle version, chose /u01/gghome as the GoldenGate home directory, and chose the option to start the manager after installation. Choosing the option to start the manager also means that the GoldenGate directories will be created.

On each server, the manager parameters are as follows:

PORT 7809
AUTOSTART ER *

The GLOBALS file has this line:

CHECKPOINTTABLE gguser.checkpoint

Next, we configure both database instances for GoldenGate. First we create the GoldenGate tablespace, then we create the GoldenGate users and grant them the required privileges:

SQL> create tablespace golden datafile '/u01/app/oradata/ORA121T/golden01.dbf' size 1024m autoextend on;

SQL> create user gguser identified by gguser;

SQL> grant dba, create session, select any table, unlimited tablespace, select any dictionary to gguser;

SQL> create user ggadmin identified by ggadmin;

SQL> grant dba, create session, select any table, unlimited tablespace, select any dictionary to ggadmin;

SQL> exec dbms_goldengate_auth.grant_admin_privilege('GGUSER');

SQL> exec dbms_goldengate_auth.grant_admin_privilege('GGADMIN');

SQL> alter user gguser default tablespace golden;

The GGUSER user will be used to run the replication processes themselves.  The GGADMIN user will be used to run administrative commands that do not get replicated between the two systems. For example, should sequences get out of sync, manual intervention may be required to make sure the sequence is at the right value in each instance.
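As an illustration of that kind of fix, the sketch below bumps a sequence on one instance while connected as GGADMIN, so the change is not captured (the extract parameter files later in this article exclude GGADMIN with TRANLOGOPTIONS EXCLUDEUSER). The sequence name HR.EMPLOYEES_SEQ and the gap of 1000 are just assumptions for the example:

SQL> connect ggadmin/ggadmin@ora183t
SQL> alter sequence hr.employees_seq increment by 1000;   -- open a gap large enough to clear the conflict
SQL> select hr.employees_seq.nextval from dual;           -- advance the sequence past the other instance
SQL> alter sequence hr.employees_seq increment by 1;      -- restore the normal increment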

In order for GoldenGate to work properly, supplemental log data must be added to the database, force logging must be turned on, and flashback should be turned on.  To turn on flashback database and enable GoldenGate replication, run these commands:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;  -- only if the database is not already in archivelog mode
SQL> alter database flashback on;
SQL> alter database open;
SQL> alter system set enable_goldengate_replication=TRUE scope=both;

Next, to turn on supplemental log data and force logging:

SQL> alter database add supplemental log data;
SQL> alter database force logging;
SQL> alter system switch logfile;

Note that the commands above should be run on both the source and destination databases.
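To confirm that the settings took effect, a quick query against V$DATABASE on each instance is enough:

SQL> select log_mode, force_logging, flashback_on, supplemental_log_data_min from v$database;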

Next we will set up our tnsnames.ora file.  While the entry names are the same on each server, the entries themselves are different.  Below are the tns entries for gg121too:

ORA121T =
  (DESCRIPTION =
    (SDU = 2097152)
    (ADDRESS =
      (PROTOCOL = BEQ)
      (PROGRAM = /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
      (ARGV0 = ora121t)
      (ARGS='(DESCRIPTION=(SDU=2097152)(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=ora121t')
    )
    (CONNECT_DATA =
      (SID = ora121t)
    )
  )

ORA183T =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.1.28)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ora183t)
    )
  )

On gg183too, below are the TNS entries:

ORA121T =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.1.8)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ora121t)
    )
  )

ORA183T =
  (DESCRIPTION =
    (SDU = 2097152)
    (ADDRESS =
      (PROTOCOL = BEQ)
      (PROGRAM = /u01/app/oracle/product/18.3.0/dbhome_1/bin/oracle)
      (ARGV0 = ora183t)
      (ARGS='(DESCRIPTION=(SDU=2097152)(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
      (ENVS='ORACLE_HOME=/u01/app/oracle/product/18.3.0/dbhome_1,ORACLE_SID=ora183t')
    )
    (CONNECT_DATA =
      (SID = ora183t)
    )
  )

Note that the local entries use the BEQ protocol with a large SDU, so the local GoldenGate sessions bypass the listener; this gives the best performance for the local connection.
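Before moving on, it is a good idea to verify that both entries resolve and connect from each server. For example, on gg121too (using the passwords created above):

[oracle@gg121too ~]$ tnsping ora183t
[oracle@gg121too ~]$ sqlplus gguser/gguser@ora121t
[oracle@gg121too ~]$ sqlplus gguser/gguser@ora183t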

Next, connect to GoldenGate (GGSCI) and configure the credential store.

On gg121too:

GGSCI (gg121too.localdomain)1>add credentialstore
GGSCI (gg121too.localdomain)2> alter credentialstore add user gguser@ora121t, password gguser alias ggulocal
GGSCI (gg121too.localdomain)3>alter credentialstore add user ggadmin@ora121t, password ggadmin alias ggalocal
GGSCI (gg121too.localdomain)4>alter credentialstore add user gguser@ora183t, password gguser alias gguremote
GGSCI (gg121too.localdomain)5>alter credentialstore add user ggadmin@ora183t, password ggadmin alias ggaremote

Note the naming convention here: aliases for the local credentials always contain the word 'local', and aliases for the remote system always contain the word 'remote'.  Thus on gg183too, the corresponding commands are below:

GGSCI (gg183too.localdomain)1>add credentialstore
GGSCI (gg183too.localdomain)2> alter credentialstore add user gguser@ora183t, password gguser alias ggulocal
GGSCI (gg183too.localdomain)3>alter credentialstore add user ggadmin@ora183t, password ggadmin alias ggalocal
GGSCI (gg183too.localdomain)4>alter credentialstore add user gguser@ora121t, password gguser alias gguremote
GGSCI (gg183too.localdomain)5>alter credentialstore add user ggadmin@ora121t, password ggadmin alias ggaremote
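On either server, the stored aliases can be checked with INFO CREDENTIALSTORE:

GGSCI (gg183too.localdomain) 6> info credentialstore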

We will be copying the hr schema from the 12c to the 18c instance. Most often multiple schemas will need to be replicated, but we are keeping this to a simple case.

Next run these commands:

GGSCI (gg121too.localdomain)1> dblogin useridalias ggulocal
GGSCI (gg121too as gguser@ora121t) 2> add checkpointtable gguser.checkpoint
GGSCI (gg121too as gguser@ora121t) 6> add trandata hr.* cols *
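Optionally, confirm that table-level supplemental logging was enabled for the HR tables:

GGSCI (gg121too as gguser@ora121t) 7> info trandata hr.*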

At this point, GoldenGate is configured on the source database.  Now create the checkpoint table on the destination database:

GGSCI (gg121too.localdomain)1> dblogin useridalias gguremote
GGSCI (gg121too as gguser@ora183t) 2> add checkpointtable gguser.checkpoint

Now we are ready to begin the configuration of the replication.

Connect back to the local instance and create the extract process:

GGSCI (gg121too) 4> dblogin useridalias ggulocal

GGSCI (gg121.localdomain as gguser@ora121t) 9> register extract capture1,  database

2019-08-08 17:35:51  INFO    OGG-02003  Extract CAPTURE1 successfully registered with database at SCN 1804072. <== Make note of this SCN; it will be used later.

GGSCI (gg121.localdomain as gguser@ora121t) 10> add extract capture1, integrated tranlog, begin now
EXTRACT (Integrated) added.

GGSCI (gg121.localdomain as gguser@ora121t) 11> add rmttrail ./dirdat/rt, extract capture1
RMTTRAIL added.

Create the parameter file for the capture, and add these lines (edit params CAPTURE1):

extract capture1
useridalias ggulocal
rmthost 10.12.1.28, mgrport 7809
tranlogOptions IntegratedParams (max_sga_size 256), excludeuser ggadmin
DDL include mapped
rmttrail ./dirdat/rt
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
table hr.*;

Next, start the extract and monitor the file ggserr.log.  Fix any errors that occur:

GGSCI (gg121.localdomain as gguser@ora121t) 10> start extract capture1
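A couple of quick ways to watch it, assuming the GoldenGate home of /u01/gghome chosen earlier:

GGSCI (gg121too as gguser@ora121t) 11> info extract capture1, detail
[oracle@gg121too ~]$ tail -f /u01/gghome/ggserr.log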

Next, export the data in the schemas that are being replicated; in this case, the HR schema.  The SCN noted above is used as the flashback SCN for the Data Pump export:

expdp directory=homedir dumpfile=hrscn.dmp flashback_scn=1804072 schemas=HR

Export: Release 12.1.0.2.0 - Production on Thu Aug 8 17:39:46 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: system/password

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=homedir dumpfile=hrscn.dmp flashback_scn=1804071 schemas=HR
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 512 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "HR"."COUNTRIES"                            6.906 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.570 KB      27 rows
. . exported "HR"."EMPLOYEES"                            17.53 KB     107 rows
. . exported "HR"."JOBS"                                 7.554 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.640 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.882 KB      23 rows
. . exported "HR"."REGIONS"                              5.992 KB       4 rows
. . exported "HR"."TEST"                                 19.72 KB     123 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /home/oracle/hrscn.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Aug 8 17:40:35 2019 elapsed 0 00:00:33
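The dump file then has to reach the destination server. A minimal sketch using scp; the /home/oracle target path is an assumption matching the homedir directory object used above:

[oracle@gg121too ~]$ scp /home/oracle/hrscn.dmp oracle@10.12.1.28:/home/oracle/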

Once hrscn.dmp is on the destination system, import it into the database:

[oracle@gg183too gghome]$ impdp system/password directory=homedir dumpfile=hrscn.dmp

Import: Release 18.0.0.0.0 - Production on Thu Aug 8 16:55:49 2019
Version 18.7.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=homedir dumpfile=hrscn.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."COUNTRIES"                            6.906 KB      25 rows
. . imported "HR"."DEPARTMENTS"                          7.570 KB      27 rows
. . imported "HR"."EMPLOYEES"                            17.53 KB     107 rows
. . imported "HR"."JOBS"                                 7.554 KB      19 rows
. . imported "HR"."JOB_HISTORY"                          7.640 KB      10 rows
. . imported "HR"."LOCATIONS"                            8.882 KB      23 rows
. . imported "HR"."REGIONS"                              5.992 KB       4 rows
. . imported "HR"."TEST"                                 19.72 KB     123 rows
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Thu Aug 8 16:56:42 2019 elapsed 0 00:00:44

Now create and start the replicat process.  Connect to GGSCI on the destination system and create it:

GGSCI (gg183too) 8> edit params apply1

replicat apply1
DBOPTIONS INTEGRATEDPARAMS(parallelism 2)
assumetargetdefs
useridalias ggulocal
handlecollisions
discardfile ./dirrpt/hrerr.dsc, purge
ddl include all
map hr.*, target hr.*;

GGSCI (gg183too) 2> add replicat apply1, integrated, exttrail ./dirdat/rt

GGSCI (gg183too) 2> start replicat apply1

Once again, monitor the file ggserr.log and fix any errors that occur.  At this point, one-way replication from the 12.1 source to the 18.3 target should be working.  Run some updates to verify that everything is working.
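For example, a minimal test using the HR.REGIONS table from the sample schema (the row values are arbitrary):

-- On gg121too (source)
SQL> insert into hr.regions (region_id, region_name) values (99, 'Test Region');
SQL> commit;

-- On gg183too (target), the row should appear shortly
SQL> select region_id, region_name from hr.regions where region_id = 99;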

Next, we configure the reverse replication in case we need it.  This is basically identical to the initial replication; the only difference is that we don't need to do an export/import.

GGSCI (gg183too) 2> dblogin useridalias ggulocal
GGSCI (gg183too as gguser@ora183t) 2> register extract capture2, database
GGSCI (gg183too as gguser@ora183t) 3> add extract capture2, integrated tranlog, begin now
GGSCI (gg183too as gguser@ora183t) 9> add rmttrail ./dirdat/xt, extract capture2

GGSCI (gg183too as gguser@ora183t) 3> edit params CAPTURE2

extract capture2
useridalias ggulocal
rmthost 10.12.1.8, mgrport 7809
tranlogOptions IntegratedParams (max_sga_size 256), excludeuser ggadmin
DDL include mapped
rmttrail ./dirdat/xt
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
table hr.*;

GGSCI (gg183too as gguser@ora183t) 3> start extract capture2

Monitor for errors in ggserr.log.

Then, go back to the source instance and configure the replicat process.

GGSCI (gg121too) 7> add replicat apply2, integrated, exttrail ./dirdat/xt

GGSCI (gg121too) 7> edit params apply2

replicat apply2
  DBOPTIONS INTEGRATEDPARAMS(parallelism 2)
  AssumeTargetDefs
  DiscardFile ./dirrpt/rpdw.dsc, purge
  USERIDALIAS ggulocal
  DDLERROR DEFAULT DISCARD IGNOREMISSINGOBJECTS
  REPERROR (DEFAULT, EXCEPTION)
  MAP HR.*, target HR.*;

GGSCI (gg121too) 8> start replicat apply2

Once again, monitor the ggserr.log file for problems.  At this point, everything should be replicating.  Run an update to verify that changes are copying in both directions.
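GGSCI can also confirm that both directions are moving data; STATS with the LATEST option shows the operations processed since the last reset (or since the process started):

GGSCI (gg183too) 4> stats extract capture2, latest
GGSCI (gg121too) 9> stats replicat apply2, latest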

Multi-master replication is now configured and running between an Oracle 12.1 instance and an Oracle 18.3 instance.
