
Friday, February 23, 2018

Oracle Data Pump Presentation online now

Thank you to the Dallas Oracle Users Group for letting me present recently on my "Optimize your Database Import" topic.

I covered the following topics in my presentation:
  • Data Pump overview
  • 12c (12.1 & 12.2) new features
  • Data Guard & Data Pump working together
  • Customer case study - optimizing a Data Pump import (a minimal example of this kind of tuning is sketched just below)
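
To give a flavor of the case-study topic, here is a minimal impdp sketch using a few of the common 12c import optimizations: parallel workers, skipping statistics, and disabling archive logging during the load. This is not taken from the slides, and the directory object, dump file names, and parallel degree are only placeholders.

impdp system@orcl \
  directory=DP_DIR \
  dumpfile=prod_%U.dmp \
  logfile=prod_imp.log \
  parallel=8 \
  exclude=statistics \
  transform=disable_archive_logging:y \
  logtime=all metrics=yes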

You may view my presentation at the following link.

https://www.slideshare.net/NabilNawaz/optimizing-your-database-import



Wednesday, May 31, 2017

Oracle Data Guard Timezone Considerations

I got a question from a client about Oracle Data Guard time zone requirements: do the time zones on the Primary and Standby servers need to be the same? The first thing that came to mind was that, in my experience, it simply does not matter. For example, if your Primary database server is in Chicago (Central time) and your Standby database server is in New York (Eastern time), you should not have any issues with log shipping or replication.

I was talking to an Oracle consultant on the topic today and he pointed out that the Oracle documentation actually does recommend considering the same time zone for BOTH the Primary and Standby database servers. This was something new I learned about Data Guard.

Check out the verbiage from the 12c documentation, the Data Guard Concepts and Administration guide:

https://docs.oracle.com/database/121/SBYDB/standby.htm#SBYDB4717

"Because some applications that perform updates involving time-based data cannot handle data entered from multiple time zones, consider setting the time zone for the primary and remote standby systems to be the same to ensure the chronological ordering of records is maintained after a role transition."

Based on this information, it may be a good idea to standardize on one time zone, such as GMT/UTC+0, for all database servers in a Data Guard configuration. Before adopting such a standard, check whether your specific application actually has trouble handling data entered from multiple time zones; if it does, use the same time zone for all database servers in the Data Guard configuration. Another workaround would be to have the application record the time zone along with the data as it is entered into the database. That approach allows the database servers in a Data Guard configuration to run in different time zones and should not present any issues after a role transition.
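
As a rough illustration of that second workaround (the table and column names below are made up for the example), storing the offset with the data can be as simple as using TIMESTAMP WITH TIME ZONE instead of DATE or plain TIMESTAMP:

-- hypothetical table; CURRENT_TIMESTAMP carries the inserting session's time zone offset
CREATE TABLE orders (
  order_id    NUMBER PRIMARY KEY,
  created_at  TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);

-- rows inserted from sessions in different time zones still compare on the absolute point in time
SELECT order_id, created_at FROM orders ORDER BY created_at;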








Sunday, December 4, 2016

Oracle Dataguard Deep Dive Presentation

Thank you to the Dallas OUG for letting me present recently on my "Oracle Dataguard Deep Dive" topic. I also covered 12c new features in this presentation. You may view my presentation at the following link.

http://tinyurl.com/hq5h7s3

Tuesday, September 13, 2016

Dallas Oracle Users Group - Presenting "Oracle Dataguard Deep Dive"

I am very excited and honored to be presenting at the Dallas Oracle Users Group soon on Sept 22, 2016. The topic is "Oracle Dataguard Deep Dive". Session details below.


To RSVP for this meeting:  http://www.doug.org/calendar.html#id=10163&cid=538&wid=201&type=Cal



Location
Texas State Government Facility (education-related)
- Use building Entrance C on SE side
Dallas Room – 1st floor
400 East Spring Valley Road
Richardson, Texas, 75081


Presentation Details
Oracle Data Guard Deep Dive
Ensuring business continuity for critical production databases is of paramount importance. Oracle Data Guard provides highly available synchronization and reporting for your primary database. It is the most comprehensive solution available to eliminate single points of failure for mission-critical Oracle databases. It prevents data loss and downtime in the simplest and most economical manner by maintaining a synchronized physical replica of a production database at a remote location. If the production database becomes unavailable for any reason, client connections can quickly, and in some configurations transparently, fail over to the synchronized replica to restore service. We will explain Data Guard concepts, cover several new 12c features such as Far Sync and Fast Sync, walk through Data Guard single-instance and RAC setup step by step, cover configuration, broker setup and monitoring, and even share tips on automating the complete setup. This presentation will be very useful for anyone who wants to take a deep dive into Data Guard or refresh their skills.
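
As a small, hedged taste of the broker monitoring piece (the connect identifier prod_primary and the standby name stby are placeholders, not from the session material), the basic health checks in DGMGRL look roughly like this; VALIDATE DATABASE is one of the 12c additions:

$ dgmgrl sys@prod_primary
DGMGRL> show configuration;
DGMGRL> show database verbose 'stby';
DGMGRL> validate database 'stby';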


 

Wednesday, December 16, 2015

ORA-00304: requested INSTANCE_NUMBER is busy

I was working on verifying that all 12c Dataguard instances were running on an Exadata X5 full rack system, and I came across one instance that was not running. I noticed this when I saw an entry for it in /etc/oratab but did not see the corresponding instance running on that database compute node.
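
A quick way to spot this kind of mismatch is to compare the oratab entries with the pmon processes on the node, for example:

$ grep -v "^#" /etc/oratab          # instances registered on this node
$ ps -ef | grep pmon | grep -v grep # instances actually running on this node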

I manually tried to bring up the instance from node 3 and encountered the error below.

node3-DR> sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Dec 16 18:30:15 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount;
ORA-00304: requested INSTANCE_NUMBER is busy
SQL> exit
Disconnected

I then looked at the instance alert log and saw the following:

...
Wed Dec 16 18:30:22 2015
ASMB started with pid=39, OS id=59731
Starting background process MMON
Starting background process MMNL
Wed Dec 16 18:30:22 2015
MMON started with pid=40, OS id=59733
Wed Dec 16 18:30:22 2015
MMNL started with pid=41, OS id=59735
Wed Dec 16 18:30:22 2015
NOTE: ASMB registering with ASM instance as Standard client 0xffffffffffffffff (reg:1438891928) (new connection)
NOTE: ASMB connected to ASM instance +ASM3 osid: 59737 (Flex mode; client id 0xffffffffffffffff)
NOTE: initiating MARK startup
Starting background process MARK
Wed Dec 16 18:30:22 2015
MARK started with pid=42, OS id=59741
Wed Dec 16 18:30:22 2015
NOTE: MARK has subscribed
High Throughput Write functionality enabled
Wed Dec 16 18:30:34 2015
USER (ospid: 59191): terminating the instance due to error 304
Wed Dec 16 18:30:35 2015
Instance terminated by USER, pid = 59191

The error number was 304, both in the output from SQL*Plus and in the alert log. The next thing I did, naturally, was check Google and Oracle Support to see if I could find something that matched my scenario, but I could not find an exact match.

The next thing that came to mind was to check gv$instance to see which nodes the Primary RAC and Standby RAC database instances were running on.
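
The output below came from a query along these lines, run once on the Primary and once on the Standby:

SELECT inst_id, instance_number, instance_name, host_name, status
  FROM gv$instance;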

# Primary
   INST_ID INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME    STATUS
---------- --------------- ---------------- ------------ ------------
         1               1 PROD1            node01       OPEN
         3               3 PROD3            node08       OPEN
         2               2 PROD2            node07       OPEN

# Standby
   INST_ID INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME    STATUS
---------- --------------- ---------------- ------------ ------------
         1               1 DG1              node01       MOUNTED
         3               3 DG3              node08       MOUNTED
         2               2 DG2              node07       MOUNTED

I then realized that the oratab entry for instance 3 on node 3 was not correct; the instance was already running on node08!

Furthermore, I also verified the RAC configuration via srvctl.

node3-DR>srvctl config database -d DG
Database unique name: DG
Database name: PROD
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +RECOC1/DG/PARAMETERFILE/spfile.4297.894386377
Password file:
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: DG1,DG2,DG3
Configured nodes: node01,node07,node08
Database is administrator managed

The configured nodes for the Dataguard standby database are node01, node07, and node08; it is not configured to run on node03.

Once I confirmed this, I simply removed the oratab entry to prevent any further confusion. Moral of the story: only leave oratab entries in place for instances that actually need to run on that node.
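
For illustration only (the line below is made up to match this example, not copied from the system), the stale /etc/oratab entry on node 3 would have looked something like this, and deleting or commenting it out is the whole fix:

# /etc/oratab on node3 -- stale entry for an instance that actually runs on node08
DG3:/u01/app/oracle/product/12.1.0.2/dbhome_1:N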