
Friday, July 20, 2018

Easy way to verify Oracle Database initialization parameters!

I wanted to share an easy way to verify the values of certain Oracle Database initialization parameters, on-premises or in the Cloud, across a 2-node RAC database. I tested this script on a 12.2 multitenant database; it uses DECODE to check each parameter for its expected value and prints a message when the parameter is not set correctly. Once the script completes, a simple grep from the Unix prompt shows which parameters need attention so you can take the appropriate action to adjust them. The script also verifies a database property (LOCAL_UNDO_ENABLED) and numeric thresholds such as parallel_max_servers. This method gives you a simple, repeatable way to crosscheck your standard initialization parameters.
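A note on running it: the script assumes you are connected to the root container as a privileged user. To keep the spooled output easy to grep, I would suggest adding SQL*Plus formatting settings like these at the top of the script (just my preference, adjust to taste) and running it in silent mode:

set heading off feedback off pagesize 0 linesize 200

$ sqlplus -s / as sysdba @verify_params.sql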

spool /tmp/verify_params.out

-- parallel_force_local FALSE
select decode(( select count(*) from gv$parameter where inst_id=1 and value='FALSE' and name='parallel_force_local' ),1, 'YES', 'NO parallel_force_local is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value='FALSE' and name='parallel_force_local' ),1, 'YES', 'NO parallel_force_local is not set to FALSE') from dual;

-- *._fix_control='14033181:0'
select decode(( select count(*) from gv$parameter where inst_id=1 and value='14033181:0' and name='_fix_control' ),1, 'YES', 'NO _fix_control is not set to 14033181:0') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value='14033181:0' and name='_fix_control' ),1, 'YES', 'NO _fix_control is not set to 14033181:0') from dual;

-- *._gby_hash_aggregation_enabled=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_gby_hash_aggregation_enabled' ),1, 'YES', 'NO _gby_hash_aggregation_enabled is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_gby_hash_aggregation_enabled' ),1, 'YES', 'NO _gby_hash_aggregation_enabled is not set to FALSE') from dual;

-- *._gc_trace_freelist_empty=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_gc_trace_freelist_empty' ),1, 'YES', 'NO _gc_trace_freelist_empty is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_gc_trace_freelist_empty' ),1, 'YES', 'NO _gc_trace_freelist_empty is not set to FALSE') from dual;

-- *._ignore_desc_in_index=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='TRUE' and name='_ignore_desc_in_index' ),1, 'YES', '_ignore_desc_in_index is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='TRUE' and name='_ignore_desc_in_index' ),1, 'YES', '_ignore_desc_in_index is not set to TRUE') from dual;

-- *._optimizer_skip_scan_enabled=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='TRUE' and name='_optimizer_skip_scan_enabled' ),1, 'YES', '_optimizer_skip_scan_enabled is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='TRUE' and name='_optimizer_skip_scan_enabled' ),1, 'YES', '_optimizer_skip_scan_enabled is not set to TRUE') from dual;

-- *._unnest_subquery=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_unnest_subquery' ),1, 'YES', '_unnest_subquery is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_unnest_subquery' ),1, 'YES', '_unnest_subquery is not set to FALSE') from dual;

-- *.audit_trail='DB'
select decode(( select count(*) from gv$parameter where inst_id=1  and value='DB' and name='audit_trail' ),1, 'YES', 'audit_trail is not set to DB') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='DB' and name='audit_trail' ),1, 'YES', 'audit_trail is not set to DB') from dual;

-- *.compatible='12.2.0'
select decode(( select count(*) from gv$parameter where inst_id=1  and value='12.2.0' and name='compatible' ),1, 'YES', 'compatible is not set to 12.2.0') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='12.2.0' and name='compatible' ),1, 'YES', 'compatible is not set to 12.2.0') from dual;

-- *.db_files=4000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 4000 and name='db_files' ),1, 'YES', 'db_files is not set to 4000 or greater') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 4000 and name='db_files' ),1, 'YES', 'db_files is not set to 4000 or greater') from dual;

-- *.diagnostic_dest='/u02/app/oracle'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '/u02/app/oracle' and name='diagnostic_dest' ),1, 'YES', 'diagnostic_dest is not set to /u02/app/oracle') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '/u02/app/oracle' and name='diagnostic_dest' ),1, 'YES', 'diagnostic_dest is not set to /u02/app/oracle') from dual;

-- *.log_archive_format='%t_%s_%r.arc'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '%t_%s_%r.arc' and name='log_archive_format' ),1, 'YES', 'log_archive_format is not set to %t_%s_%r.arc ') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '%t_%s_%r.arc' and name='log_archive_format' ),1, 'YES', 'log_archive_format is not set to %t_%s_%r.arc ') from dual;


-- *.max_dump_file_size='50K'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '50K' and name='max_dump_file_size' ),1, 'YES', 'max_dump_file_size is not set to 50K') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '50K' and name='max_dump_file_size' ),1, 'YES', 'max_dump_file_size is not set to 50K') from dual;

-- *.nls_length_semantics='CHAR'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = 'CHAR' and name='nls_length_semantics' ),1, 'YES', 'nls_length_semantics is not set to CHAR') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = 'CHAR' and name='nls_length_semantics' ),1, 'YES', 'nls_length_semantics is not set to CHAR') from dual;

-- *.open_cursors=10000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 10000 and name='open_cursors' ),1, 'YES', 'open_cursors is not set to 10000 or higher') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 10000 and name='open_cursors' ),1, 'YES', 'open_cursors is not set to 10000 or higher') from dual;

-- *.processes=4000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 4000 and name='processes' ),1, 'YES', 'processes is not set to 4000 or higher') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 4000 and name='processes' ),1, 'YES', 'processes is not set to 4000 or higher') from dual;

-- *.parallel_max_servers should be at least 6 per PDB (the count excludes the seed)
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= (select (count(*) - 1) * 6 from v$pdbs) and name='parallel_max_servers' ),1, 'YES', 'parallel_max_servers is not set to at least 6 per pdb') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= (select (count(*) - 1) * 6 from v$pdbs) and name='parallel_max_servers' ),1, 'YES', 'parallel_max_servers is not set to at least 6 per pdb') from dual;

-- *.temp_undo_enabled=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1 and value = 'TRUE' and name='temp_undo_enabled' ),1, 'YES', 'temp_undo_enabled is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value = 'TRUE' and name='temp_undo_enabled' ),1, 'YES', 'temp_undo_enabled is not set to TRUE') from dual;

-- *.undo_retention=10800
select decode(( select count(*) from gv$parameter where inst_id=1 and to_number(value) >= 10800 and name='undo_retention' ),1, 'YES', 'undo_retention is not 10800 or greater') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and to_number(value) >= 10800 and name='undo_retention' ),1, 'YES', 'undo_retention is not 10800 or greater') from dual;

-- LOCAL_UNDO_ENABLED
select decode((SELECT count(*) FROM   database_properties WHERE  property_name = 'LOCAL_UNDO_ENABLED' and PROPERTY_VALUE='TRUE'), 1, 'YES','LOCAL_UNDO_ENABLED property is not TRUE') from dual;

exit

Example run verifying the output to see what is not set correctly. In this case, three parameters require attention.

$ grep -i not /tmp/verify_params.out|sort -u
max_dump_file_size is not set to 50K                                                                                                                             
NO _fix_control is not set to 14033181:0                                                                                                                         
undo_retention is not 10800 or greater                                                
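As a variation, the per-parameter DECODE checks could be collapsed into a single query that joins gv$parameter to an inline list of expected values and reports only the mismatches. A sketch (extend the inline list with the rest of your standards; note that, unlike the DECODE approach, it will not flag a parameter that is missing from the spfile altogether):

select p.inst_id, p.name, p.value, e.expected
  from gv$parameter p
  join ( select 'parallel_force_local' name, 'FALSE' expected from dual
         union all select '_gby_hash_aggregation_enabled', 'FALSE' from dual
         union all select 'audit_trail', 'DB' from dual ) e
    on p.name = e.name
 where p.value <> e.expected;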

Friday, February 23, 2018

Oracle Data Pump Presentation online now

Thank you to the Dallas Oracle Users Group for letting me present recently on my "Optimize your Database Import" topic.

I covered the following topics in my presentation:
  • Data Pump Overview
  • 12c (12.1 & 12.2) New Features
  • Data Guard & Data Pump working together
  • Customer case study - Optimizing a Data Pump import

You may view my presentation at the following link.

https://www.slideshare.net/NabilNawaz/optimizing-your-database-import



Wednesday, February 21, 2018

Presenting at the Dallas Oracle Users Group on Feb 22, 2018!

Tomorrow (Thursday, 2/22/18, from 5 to 7:15 pm) I will be presenting at the Dallas Oracle Users Group (OUG). My topic is "Optimize your Database Import". I am happy to be presenting at the Dallas OUG once again, thank you! I also want to thank BIAS Corporation for sponsoring the event.


Presentation Overview

“Optimize your Database Import” by Nabil Nawaz, BIAS Corporation.

Oracle Data Pump is an excellent tool for cloning databases and schemas, and it is widely used among DBAs and developers today to transfer data and structure between databases. Please come and learn about the new Data Pump features in Oracle 12.2. We will also share a case study of a large multi-terabyte database where a Data Pump import process that originally ran for more than a day was tuned to run in about 4-6 hours, a nearly 90% performance improvement. The tips that will be shared will help ensure you have a well-tuned import process with Data Pump.
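While the specific tuning steps are covered in the presentation, a typical 12.2 import invocation using several of the features discussed might look like the following sketch (the directory object, dumpfile pattern and parallel degree here are placeholders, not the case study's actual values):

$ impdp system directory=DPUMP_DIR dumpfile=export_%U.dmp logfile=import.log parallel=8 metrics=yes logtime=all transform=disable_archive_logging:Y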

Refreshments sponsored by BIAS Corporation.


To attend, RSVP here.

Contact dougadministrative@gmail.com if your plans change, so we can order the right amount of food.

Location/Directions
University of Dallas, Gorman Lecture Center, Room C
1845 East Northgate Drive
Irving, Texas

*Park near the bell tower.
Directions to the campus: http://udallas.edu/visitors/directions.php

Campus map: http://udallas.edu/visitors/documents/CampusMap_11-16-16.pdf

Presenter Bio

Nabil Nawaz started his career with Oracle in 1997 and is currently a Technical Manager at BIAS Corporation. He has 20 years of experience working as an Oracle DBA & Solution Architect, starting with version 7.1.6, and is Cloud IaaS, OCP and Exadata certified as well as an Oracle ACE Associate.

He is a contributing author of the recent book Oracle Exadata Expert's Handbook. His Oracle background is quite vast: he has had the opportunity to consult at many large Fortune 500 companies, focusing on architecting highly available solutions using RAC & Data Guard, and he currently works on architecting Oracle Cloud and Engineered Systems such as Exadata, SuperCluster, ODAs and virtualization technologies.

He can be followed at his blog, http://nnawaz.blogspot.com/ and on Twitter @Nabil_Nawaz
Visit DOUG at http://doug.org/ or join our LinkedIn Group.

Thursday, November 23, 2017

Copy files in Parallel to the Cloud and Beyond!

As an IT professional, you may often be tasked with copying multiple files from one server to another. A simple and straightforward way is to issue a single sftp or scp command as follows:

$ scp filename*.dmp remotehost:/Backups/

For the most part that will work fine, but if you have a limited network pipe between your source and target servers and multiple large files to move, this serial, single-threaded method may not be optimal, especially if time is of the essence: you end up waiting for each file to finish copying before the next one starts.

I have come up with a cool, innovative method that I have been using for many years to copy a list of multiple files in parallel from a source server to a target server. Please note this method can be used for just about any set of files, such as export dump files, RMAN backup files, log files and so on. I would highly recommend it when copying files to the Cloud or any remote host.

Hope these steps help you save time copying your files to a remote server, whether it sits in your own datacenter, a remote datacenter or the Cloud!
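As an aside, if your systems have GNU xargs, a similar parallel copy can be done in a single line; this one-liner (a sketch with 4 parallel streams, not part of the step-by-step method below) copies files 4 at a time:

$ ls /Backup/*.dmp | xargs -P 4 -I {} scp {} remotehost:/Backups/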

1. First, create a file containing a list of all of the files to be copied.


$ ls -1 /Backup/*dmp > copy_dump_files   

2. Review the contents of the file.

$ cat copy_dump_files


/Backup/export_01.dmp
/Backup/export_02.dmp
/Backup/export_03.dmp
/Backup/export_04.dmp
/Backup/export_05.dmp
/Backup/export_06.dmp
/Backup/export_07.dmp
/Backup/export_08.dmp
/Backup/export_09.dmp
/Backup/export_10.dmp
/Backup/export_11.dmp
/Backup/export_12.dmp
/Backup/export_13.dmp
/Backup/export_14.dmp
/Backup/export_15.dmp
/Backup/export_16.dmp
/Backup/export_17.dmp
/Backup/export_18.dmp
/Backup/export_19.dmp
/Backup/export_20.dmp
/Backup/export_21.dmp
/Backup/export_22.dmp


3. Edit the file copy_dump_files: put the string "scp" at the beginning of each line and the string "remotehost:/Backups/" at the end of each line, shown as follows (or use the sed one-liner after the example):

scp /Backup/export_01.dmp remotehost:/Backups/
...
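If you would rather not edit the file by hand, a sed one-liner can make the same change (a sketch assuming GNU sed for the in-place -i flag; adjust the remote host and path to yours):

$ sed -i 's|^|scp |; s|$| remotehost:/Backups/|' copy_dump_files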


4. Next we split the copy_dump_files file into smaller pieces as follows, with only 2 lines per resulting file. Note you can change the number 2 to anything; the higher the number, the more copy commands in each split file.

$ split -l 2 copy_dump_files copy_dump_files_1
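As an aside, GNU split also accepts -d for numeric suffixes (producing copy_dump_files_100, copy_dump_files_101 and so on), which some people find easier to read; this is purely optional:

$ split -d -l 2 copy_dump_files copy_dump_files_1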

5. Please note the resulting files will each contain 2 lines, as specified with the split command in the previous step.

$ wc -l copy_dump_files_1*
   2 copy_dump_files_1aa
   2 copy_dump_files_1ab
   2 copy_dump_files_1ac
   2 copy_dump_files_1ad
   2 copy_dump_files_1ae
   2 copy_dump_files_1af
   2 copy_dump_files_1ag
   2 copy_dump_files_1ah
   2 copy_dump_files_1ai
   2 copy_dump_files_1aj
   2 copy_dump_files_1ak
  22 total

6. Put the names of all of the copy files into a script, shown as follows.

$ ls -1 copy_dump_files_1* > copy_script.sh


7. Give all of the copy files executable permissions.

$ chmod 700 copy_*

8. Edit the file copy_script.sh and put the string "nohup ./" at the beginning of each line and the character "&" at the end of each line, which will allow all 11 copy scripts to run simultaneously. The result is shown as follows (or use the sed one-liner after the example):



   nohup ./copy_dump_files_1aa &
   nohup ./copy_dump_files_1ab &
   nohup ./copy_dump_files_1ac &
   ...
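Again, a sed one-liner can apply this edit for you (same GNU sed assumption; the backslash escapes the & so sed inserts it literally):

$ sed -i 's|^|nohup ./|; s|$| \&|' copy_script.sh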

9. Now invoke the copy script with nohup in the background to kick off multiple scp copies to your remote host; all 11 streams will run in the background! Please also monitor the target location to ensure the file sizes are increasing (a couple of monitoring commands follow the example).


$ nohup ./copy_script.sh &
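To confirm all of the streams are running, you can check for the scp processes on the source and watch the files grow on the target; for example (using the hostname and path from the example above):

$ ps -ef | grep '[s]cp'
$ ssh remotehost 'ls -lh /Backups/'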