Posted in 2016

Dataguard Sandbox – like never before

I decided to build a Data Guard sandbox using iSCSI: the orcl-store machine will provide a 20 GB disk to each of the two database servers. First of all I create RAID devices out of the ā€œphysical disksā€ (I created four virtual disks, dynamically allocated, and attached them to the VM). Edit: the services are bad; don't use them.
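
For reference, a rough sketch of how the four backing disks could be created and attached. I am assuming VirtualBox (where ā€œdynamically allocatedā€ disks come from), a VM named orcl-store and a SATA controller called "SATA" with port 0 already taken by the OS disk; none of that is shown in my original setup:

for i in 1 2 3 4; do
  # 20480 MB = 20 GB, --variant Standard = dynamically allocated
  VBoxManage createhd --filename "$(pwd)/orcl-store-disk$i.vdi" --size 20480 --variant Standard
  VBoxManage storageattach orcl-store --storagectl "SATA" --port $i --device 0 --type hdd --medium "$(pwd)/orcl-store-disk$i.vdi"
done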

[root@orcl-store ~]# mdadm --create md0 --level=10 --raid-devices=2 /dev/sd[bc]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md0 started.
[root@orcl-store ~]# mdadm --create md1 --level=10 --raid-devices=2 /dev/sd[de]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md1 started.
[root@orcl-store ~]# mdadm --detail --brief /dev/md/md0 >> /etc/mdadm.conf
[root@orcl-store ~]# mdadm --detail --brief /dev/md/md1 >> /etc/mdadm.conf
[root@orcl-store ~]# cat /etc/mdadm.conf
ARRAY /dev/md/md0 metadata=1.2 name=orcl-store.example.com:md0 UUID=7a39d490:3652ccb7:1f20bc39:853de558
ARRAY /dev/md/md1 metadata=1.2 name=orcl-store.example.com:md1 UUID=f8b85c79:d0a0158d:b47b0430:dd0eb0af
[root@orcl-store ~]# yum install scsi-target-utils

In /etc/tgt/targets.conf I define one target per database server:

<target iqn.2016-12.com.example.orcl-store:orcl-site1>
	direct-store	/dev/md0
</target>

<target iqn.2016-12.com.example.orcl-store:orcl-site2>
        direct-store    /dev/md1
</target>

Then I start the target daemon, check that it is running, enable it at boot and list the targets:

[root@orcl-store ~]# service tgtd start
[root@orcl-store ~]# 
[root@orcl-store ~]# service tgtd status
tgtd (pid 2785 2782) is running...
[root@orcl-store ~]# chkconfig tgtd on
[root@orcl-store ~]# tgtadm -o show -m target
Target 1: iqn.2016-12.com.example.orcl-store:orcl-site1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 21458 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/md0
            Backing store flags: 
    Account information:
    ACL information:
        ALL
Target 2: iqn.2016-12.com.example.orcl-store:orcl-site2
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00020000
            SCSI SN: beaf20
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00020001
            SCSI SN: beaf21
            Size: 21458 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/md1
            Backing store flags: 
    Account information:
    ACL information:
        ALL

That finishes the target side; now on to the initiators:

[root@orcl-site-01 ~]# yum install iscsi-initiator-utils
[root@orcl-site-01 ~]# iscsiadm -m discovery --type sendtargets -p 192.168.2.100
Starting iscsid:                                           [  OK  ]
192.168.2.100:3260,1 iqn.2016-12.com.example.orcl-store:orcl-site1
192.168.2.100:3260,1 iqn.2016-12.com.example.orcl-store:orcl-site2
[root@orcl-site-01 ~]# iscsiadm -m node --targetname iqn.2016-12.com.example.orcl-store:orcl-site1 -p 192.168.2.100:3260 -l
Logging in to [iface: default, target: iqn.2016-12.com.example.orcl-store:orcl-site1, portal: 192.168.2.100,3260] (multiple)
Login to [iface: default, target: iqn.2016-12.com.example.orcl-store:orcl-site1, portal: 192.168.2.100,3260] successful.
[root@orcl-site-01 ~]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0-873.22.el6
Target: iqn.2016-12.com.example.orcl-store:orcl-site1 (non-flash)
    Current Portal: 192.168.2.100:3260,1
    Persistent Portal: 192.168.2.100:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1988-12.com.oracle:39d373a698b8
        Iface IPaddress: 192.168.2.10
        Iface HWaddress: 
        Iface Netdev: 
        SID: 1
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: 
        password: ********
        username_in: 
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 8192
        FirstBurstLength: 65536
        MaxBurstLength: 262144
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 3  State: running
        scsi3 Channel 00 Id 0 Lun: 0
        scsi3 Channel 00 Id 0 Lun: 1
            Attached scsi disk sdb      State: running

Do the same discovery and login on the second node (a sketch follows the fdisk output below), then partition the new disk:

[root@orcl-site-01 ~]# fdisk /dev/sdb 
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x380174ee.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-20463, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-20463, default 20463): 
Using default value 20463

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
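
The second node goes through the same steps: install, discovery, login to its own target and the same single-partition fdisk. I did not capture that session, so this is just a sketch:

[root@orcl-site-02 ~]# yum install iscsi-initiator-utils
[root@orcl-site-02 ~]# iscsiadm -m discovery --type sendtargets -p 192.168.2.100
[root@orcl-site-02 ~]# iscsiadm -m node --targetname iqn.2016-12.com.example.orcl-store:orcl-site2 -p 192.168.2.100:3260 -l
[root@orcl-site-02 ~]# fdisk /dev/sdb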

And now the persistent device naming with udev, because downloading ASMLib is too much hassle 😀 I set this up on both nodes, but keep in mind that scsi_id returns a different value on the second node, so the RESULT in the rule has to be adjusted there.

[root@orcl-site-01 dev]# cat /etc/udev/rules.d/99-oracle.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1IET_00010001", SYMLINK+="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
[root@orcl-site-01 dev]# udevadm trigger
[root@orcl-site-01 dev]# ls -l /dev/asm-disk1 
lrwxrwxrwx. 1 root root 4 Dec  2 14:19 /dev/asm-disk1 -> sdb1
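
On the second node the rule is the same except for the RESULT value; you can grab it with the same scsi_id call the rule runs (1IET_00020001 is what I would expect there, matching the second target's LUN 1):

[root@orcl-site-02 ~]# /sbin/scsi_id -g -u -d /dev/sdb
1IET_00020001

That value goes into the RESULT== match in /etc/udev/rules.d/99-oracle.rules on that node; everything else stays the same.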

Well, now I have everything ready for ASM. I will not bother you with screenshots (or rather, myself with taking and preparing them): I simply install the Grid Infrastructure for a standalone server on both nodes, create a DATA disk group with external redundancy from /dev/asm-disk1, install the database home and then create a container database with one pluggable database on the primary.
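
Just to record the one step that is easy to forget: the DATA disk group can also be created from SQL*Plus against the ASM instance instead of clicking through asmca. A rough sketch, assuming the grid owner and a sysasm login (both assumptions, not captured in my session):

[grid@orcl-site-01 ~]$ sqlplus / as sysasm
SQL> alter system set asm_diskstring='/dev/asm-disk*' scope=both;
SQL> create diskgroup DATA external redundancy disk '/dev/asm-disk1';

The same is repeated on the second node with its own /dev/asm-disk1.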

I defined a service on the primary, because the ./runInstaller annoyingly left db_unique_name equal to db_name:

[oracle@orcl-site-01 ~]$ srvctl add service -db condb -service condb_site1.example.com -role primary
[oracle@orcl-site-01 ~]$ srvctl start service -db condb 
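
Just to see why the extra service is worth having, a quick (optional) check of what the installer left behind:

[oracle@orcl-site-01 ~]$ sqlplus -s / as sysdba <<EOF
show parameter db_name
show parameter db_unique_name
EOF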

Now it is important to define everything correctly. First I decided to create the database home's network/admin files as softlinks to the ones in grid/network/admin, so that I only administer one set of files. The auxiliary instance also needs a static listener entry.
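
A minimal sketch of the symlinking on the standby, using the home locations visible in the listener.ora below (including sqlnet.ora is my own reading of ā€œall network/admin filesā€):

[oracle@orcl-site-02 ~]$ cd /products/oracle/12.1.0.2/network/admin
[oracle@orcl-site-02 admin]$ for f in tnsnames.ora listener.ora sqlnet.ora; do
>   ln -sf /products/grid/12.1.0.2/network/admin/$f $f
> done

With the links in place, the tnsnames.ora and the static listener entry look like this: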

[oracle@orcl-site-02 admin]$ cat tnsnames.ora
condb_site1=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=condb_site1.example.com)(SERVER=DEDICATED))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.10)(PORT=1521)))
condb_site2=(DESCRIPTION=(CONNECT_DATA=(SID=condb)(SERVER=DEDICATED))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.11)(PORT=1521)))
[oracle@orcl-site-02 admin]$ cat /products/grid/12.1.0.2/network/admin/listener.ora
# listener.ora Network Configuration File: /products/grid/12.1.0.2/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-site-02.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CONDB_SITE2_DGMGRL)
      (ORACLE_HOME = /products/oracle/12.1.0.2)
      (SID_NAME = condb)
    )
  )

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON		# line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET		# line added by Agent

As you can see I changed the listener definition with the static <db_unique_name>_DGMGRL entry (the broker needs it to restart instances during switchover) and restarted the listeners.
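
Restarting the listener under Oracle Restart is just this (run as the grid infrastructure owner; the "grid" user name is my assumption, and a plain lsnrctl stop/start would do the same):

[grid@orcl-site-02 ~]$ srvctl stop listener
[grid@orcl-site-02 ~]$ srvctl start listener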

On the standby I create a pfile from the spfile and adjust it so that everything is OK: I create the audit directory, remove the control_files parameter (it will be recreated in +DATA anyway), create a password file, start the instance in nomount, create an spfile from the pfile and restart to nomount.
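
A minimal sketch of those preparation steps on the standby. The edited pfile location and the adump path are my assumptions; the password file path and the sys password match the RMAN output and connect strings below:

[oracle@orcl-site-02 ~]$ mkdir -p /products/oracle/admin/condb/adump
[oracle@orcl-site-02 ~]$ orapwd file=/products/oracle/12.1.0.2/dbs/orapwcondb password=oracle
[oracle@orcl-site-02 ~]$ export ORACLE_SID=condb
[oracle@orcl-site-02 ~]$ sqlplus / as sysdba
SQL> startup nomount pfile='/tmp/initcondb.ora';
SQL> create spfile from pfile='/tmp/initcondb.ora';
SQL> startup force nomount;

Once the instance is back in nomount, you can connect with RMAN: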

[oracle@orcl-site-02 admin]$ rman target sys/oracle@condb_site1 auxiliary sys/oracle@condb_site2

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Dec 2 16:42:44 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB (DBID=1185408016)
connected to auxiliary database: CONDB (not mounted)

RMAN> duplicate target database for standby from active database ;

Starting Duplicate Db at 02-DEC-16
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=22 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/products/oracle/12.1.0.2/dbs/orapwcondb' auxiliary format 
 '/products/oracle/12.1.0.2/dbs/orapwcondb'   ;
}
executing Memory Script

Starting backup at 02-DEC-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 device type=DISK
Finished backup at 02-DEC-16

contents of Memory Script:
{
   sql clone "alter system set  control_files = 
  ''+DATA/CONDB_SITE2/CONTROLFILE/current.257.929551397'', ''+DATA/CONDB_SITE2/CONTROLFILE/current.258.929551397'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone from service  'condb_site1' standby controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+DATA/CONDB_SITE2/CONTROLFILE/current.257.929551397'', ''+DATA/CONDB_SITE2/CONTROLFILE/current.258.929551397'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 02-DEC-16
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:27
output file name=+DATA/CONDB_SITE2/CONTROLFILE/current.261.929551415
output file name=+DATA/CONDB_SITE2/CONTROLFILE/current.262.929551417
Finished restore at 02-DEC-16

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  9 to new;
   set newname for clone datafile  10 to new;
   restore
   from service  'condb_site1'   clone database
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file
renamed tempfile 2 to +DATA in control file
renamed tempfile 3 to +DATA in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 02-DEC-16
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:16
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:11
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:27
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:47
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:26
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service condb_site1
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00010 to +DATA
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:09
Finished restore at 02-DEC-16

sql statement: alter system archive log current

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=12 STAMP=929551755 file name=+DATA/CONDB_SITE2/DATAFILE/system.263.929551481
datafile 3 switched to datafile copy
input datafile copy RECID=13 STAMP=929551757 file name=+DATA/CONDB_SITE2/DATAFILE/sysaux.264.929551549
datafile 4 switched to datafile copy
input datafile copy RECID=14 STAMP=929551761 file name=+DATA/CONDB_SITE2/DATAFILE/undotbs1.265.929551593
datafile 5 switched to datafile copy
input datafile copy RECID=15 STAMP=929551764 file name=+DATA/CONDB_SITE2/42AF23EB61132764E0530A02A8C04ADB/DATAFILE/system.266.929551611
datafile 6 switched to datafile copy
input datafile copy RECID=16 STAMP=929551766 file name=+DATA/CONDB_SITE2/DATAFILE/users.267.929551627
datafile 7 switched to datafile copy
input datafile copy RECID=17 STAMP=929551768 file name=+DATA/CONDB_SITE2/42AF23EB61132764E0530A02A8C04ADB/DATAFILE/sysaux.268.929551635
datafile 8 switched to datafile copy
input datafile copy RECID=18 STAMP=929551770 file name=+DATA/CONDB_SITE2/42AF665748852E95E0530A02A8C02DF5/DATAFILE/system.269.929551693
datafile 9 switched to datafile copy
input datafile copy RECID=19 STAMP=929551772 file name=+DATA/CONDB_SITE2/42AF665748852E95E0530A02A8C02DF5/DATAFILE/sysaux.270.929551709
datafile 10 switched to datafile copy
input datafile copy RECID=20 STAMP=929551775 file name=+DATA/CONDB_SITE2/42AF665748852E95E0530A02A8C02DF5/DATAFILE/users.271.929551735
Finished Duplicate Db at 02-DEC-16

RMAN> 

I change the tnsnames.ora to:

[oracle@orcl-site-02 admin]$ cat tnsnames.ora 
condb_site1=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=condb_site1.example.com)(SERVER=DEDICATED))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.10)(PORT=1521)))
condb_site2=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=condb_site2.example.com)(SERVER=DEDICATED))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.11)(PORT=1521)))

Next, set fal_server on both instances so that each side points at the other (condb_site2 on the primary, condb_site1 on the standby), and set dg_broker_start=true and standby_file_management=auto on both as well. A minimal sketch, shown for the primary:
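
[oracle@orcl-site-01 ~]$ sqlplus / as sysdba
SQL> alter system set fal_server='condb_site2' scope=both;
SQL> alter system set standby_file_management=auto scope=both;
SQL> alter system set dg_broker_start=true scope=both;

On the standby the only difference is fal_server='condb_site1'. With the parameters in place, create the dgmgrl configuration: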

[oracle@orcl-site-01 ~]$ dgmgrl sys/oracle@condb_site1
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> create configuration myconfig as primary database is condb connect identifier is condb_site1 ;
Configuration "myconfig" created with primary database "condb"
DGMGRL> add database condb_site2 as connect identifier is condb_site2 maintained as physical ;
Database "condb_site2" added
DGMGRL> enable configuration ;
Enabled.

DGMGRL> show configuration ;

Configuration - myconfig

  Protection Mode: MaxPerformance
  Members:
  condb       - Primary database
    condb_site2 - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 47 seconds ago)

Voila, the Data Guard setup is done. All in all it took me about one working day (roughly 7 hours) to configure three machines, install the software, create a database, duplicate it and create the Data Guard broker configuration.
