This test was performed on Oracle Database version 11.2.0.4.

Target database (the source): UAT

Auxiliary database (the duplicate to be created): Prod

  1. Checking the password files for the TARGET and Auxiliary databases

SQL> select * from v$pwfile_users;

SQL> grant sysdba to sys;

SQL> alter user sys identified by <password>;

Create a password file on the remote server for the Aux DB (if one is not already present).

You must also have a password file on the original server for the Target DB.

IMPORTANT: ** Set the password to the same value as the target database. **

If there is no password file on the Target, create one and bounce the database.

On UNIX:

$cd $ORACLE_HOME/dbs

$ orapwd file=orapwSID password=xxxxxxxx force=y

(If a password file already exists, use the same file name. Replace xxxxxxxx with your actual password for the SYS user.)

  2. Checking connectivity between the TARGET and Auxiliary databases

Configure a TNSNAMES.ora entry for the TARGET DB on the Auxiliary side, and an entry for the Auxiliary DB on the TARGET side.

Check the connections with tnsping from both sides.
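As a sketch, the tnsnames.ora entry on the target server for the auxiliary database might look like the following (the host name, port, and service name are placeholders for your environment):

```
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = aux-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prod))
  )
```

Then verify it with: tnsping prod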

  3. Performing a full TARGET database backup (with control file) to disk with RMAN

Note:- The backup storage must be shared between the two servers, or you must NFS-mount this backup mount point on the remote server (the auxiliary database host) before running the RMAN DUPLICATE command.

Connect to the TARGET DB.

Connect RMAN and run the script below to take a full TARGET database backup.

RUN {
  ALLOCATE CHANNEL t1 TYPE DISK FORMAT '/dbrestore/UAT_rman/UATDB%d_DB_%u_%s_%p';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL t1;
}

  4. Set the following parameters in the Auxiliary DB init.ora (PFILE)

Create a PFILE for the Aux DB.

Copy the initTARGET.ora file to the Aux server and modify it.

Make sure all paths are set correctly for the new database.

Update these:

Set the CONTROL_FILES parameter based on the new database locations.

Set the DB_NAME parameter based on the new database.

These are new:

Set DB_FILE_NAME_CONVERT based on the new database.

Ex: db_file_name_convert='OLD_FILE_NAME','NEW_FILE_NAME'

Set LOG_FILE_NAME_CONVERT based on the new database.

Ex: log_file_name_convert='OLD_FILE_NAME','NEW_FILE_NAME'
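A minimal auxiliary PFILE sketch, assuming example paths (the DB name and every path below are placeholders for your environment):

```
db_name=prod
control_files='/u01/oradata/prod/control01.ctl','/u02/oradata/prod/control02.ctl'
db_file_name_convert='/u01/oradata/uat/','/u01/oradata/prod/'
log_file_name_convert='/u01/oradata/uat/','/u01/oradata/prod/'
```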

  5. Start the Auxiliary database (prod) in NOMOUNT state on the Aux server through SQL*Plus

sqlplus / as sysdba

SQL> STARTUP FORCE NOMOUNT

  6. Duplicating the TARGET database

RMAN> connect target

connected to target database: UAT (DBID=293507132)

RMAN> connect auxiliary sys/<password>@prod

connected to auxiliary database: PROD (DBID=534731388)

RMAN>

run {
  allocate auxiliary channel t1 type disk;
  allocate auxiliary channel t2 type disk;
  allocate auxiliary channel t3 type disk;
  duplicate target database to prod;
}

Note:- If the target (UAT) database backup was taken offline (a consistent backup), the duplicate completes automatically. If the target DB was open (an online backup), you need to specify an archived log sequence with SET UNTIL at the beginning of the duplicate run, so that the auxiliary (prod) DB is restored up to that point in time.
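A hedged sketch of such an until-sequence duplicate (the thread and sequence numbers are placeholders; use the last archived log sequence covered by the backup):

```sql
run {
  set until sequence 1234 thread 1;  -- placeholder: last archived log to apply
  allocate auxiliary channel t1 type disk;
  duplicate target database to prod;
}
```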

  7. Checking VALID/INVALID objects in the Auxiliary DB (after duplicating from TARGET)

Check the object counts (VALID and INVALID) in the TARGET DB:

select count(*) from dba_objects where status='VALID';

select count(*) from dba_objects where status='INVALID';

Then check the same counts in the Auxiliary DB and confirm all objects were duplicated properly:

select count(*) from dba_objects where status='VALID';

select count(*) from dba_objects where status='INVALID';
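For a more granular comparison than the two totals, a per-owner breakdown can be run on both databases and diffed (a sketch):

```sql
select owner, status, count(*)
from dba_objects
group by owner, status
order by owner, status;
```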

To list the materialized views that depend on a given table:

DECLARE
  v_output VARCHAR2 (2000);
  v_owner  VARCHAR2 (30);
  v_table  VARCHAR2 (30);
BEGIN
  v_owner := UPPER ('<Owner_name>');
  v_table := UPPER ('<Table_name>');
  DBMS_MVIEW.get_mv_dependencies (v_owner || '.' || v_table, v_output);
  DBMS_OUTPUT.put_line (CHR (13));
  DBMS_OUTPUT.put_line ('Materialized views dependent on the table:');
  DBMS_OUTPUT.put_line (CHR (13));
  DBMS_OUTPUT.put_line (v_output);
END;
/

select count(*) from dba_mview_logs where master = '<Table_Name>';

column table_owner format a15
column table_name format a20
column index_name format a20
column column_name format a20

select table_owner, table_name, index_name, column_name
from dba_ind_columns
where table_owner='OWNER_NAME'
and table_name='TABLE_NAME'
order by table_owner, table_name, column_name;

Drop and Recreate Online Redo Log Files
Method to drop and recreate the online redo log files, with two members in each group.

First, note that Oracle will never allow you to drop the CURRENT online redo log file.

Ex:

SQL> select GROUP#,THREAD#,STATUS from v$log;

    GROUP#    THREAD# STATUS
---------- ---------- ----------------
         1          1 CURRENT
         2          1 UNUSED
         3          1 INACTIVE
         4          1 INACTIVE

SQL> alter database drop logfile group 1;
alter database drop logfile group 1
*
ERROR at line 1:
ORA-01623: log 1 is current log for instance test (thread 1) - cannot drop
ORA-00312: online log 1 thread 1: '/u03/oradata/testdb/redo01.log'

Now, to drop and recreate the online redo log files in the same directories:

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u50/oradata/test/redo4a.log
/u51/oradata/test/redo4b.log
/u50/oradata/test/redo3a.log
/u51/oradata/test/redo3b.log
/u50/oradata/test/redo2a.log
/u51/oradata/test/redo2b.log
/u50/oradata/test/redo1a.log
/u51/oradata/test/redo1b.log

8 rows selected.

The intention is to drop and recreate these online logs with a different size.

SQL> select GROUP#,THREAD#,STATUS from v$log;

    GROUP#    THREAD# STATUS
---------- ---------- ----------------
         1          1 INACTIVE
         2          1 INACTIVE
         3          1 CURRENT
         4          1 INACTIVE

Begin by dropping the redo log groups with STATUS='INACTIVE'. As mentioned earlier, Oracle will not allow you to drop the current online redo log file.

SQL> alter database drop logfile group 1;

Then remove the old member files at the OS level:

cd /u50/oradata/test

rm redo1a.log

cd /u51/oradata/test

rm redo1b.log

Recreate the group with both members:

SQL> alter database add logfile group 1 ('/u50/oradata/test/log1a.ora','/u51/oradata/test/log1b.ora') size 2048m;

Do the same for the other three groups.

To change the status of a log group from CURRENT to INACTIVE, simply switch the logfile with this command:

SQL> alter system switch logfile;

Run the command two or three times if the group status does not change. (If a group stays in ACTIVE status, issue alter system checkpoint; before trying to drop it.)
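The whole cycle for one group can be sketched as follows (the file paths and the 2048m size are the examples used above):

```sql
-- Check that group 1 is not CURRENT
select group#, status from v$log;

-- If it is, switch and checkpoint until it goes INACTIVE
alter system switch logfile;
alter system checkpoint;

-- Drop the group, remove its member files at the OS level (rm), then recreate it
alter database drop logfile group 1;
alter database add logfile group 1
  ('/u50/oradata/test/log1a.ora','/u51/oradata/test/log1b.ora') size 2048m;
```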

To check the status of the groups, remember the command is

SQL> select GROUP#,THREAD#,STATUS from v$log;

Also make sure each dropped logfile is physically removed from disk with the rm command. If you do not, and then try to recreate a logfile with the same name, an error similar to this will appear:

Ex: You dropped log group 3 but did not remove the physical file redo03.log from /u03/oradata/test, then tried to recreate the group.

SQL> alter database add logfile group 3 '/u03/oradata/test/redo03.log' size 50m;
alter database add logfile '/u03/oradata/test/redo03.log' size 50m
*
ERROR at line 1:
ORA-01577: cannot add log file '/u03/oradata/test/redo03.log' - file already
part of database

What Is an Oracle Database?

There are two main components of an Oracle database: the instance and the database itself. An instance consists of memory structures and background processes, whereas the database refers to the disk resources. Figure 1 shows the relationship.

file1

Figure 1. Two main components of Oracle database

Instance

Database files by themselves are useless without the memory structures and processes that interact with the database. Oracle defines the term instance as the memory structures and background processes used to access data in a database; together, the memory structures and background processes constitute an instance. The memory structure consists of the System Global Area (SGA), the Program Global Area (PGA), and an optional area, the software code area. The mandatory background processes are Database Writer (DBWn), Log Writer (LGWR), Checkpoint (CKPT), System Monitor (SMON), and Process Monitor (PMON). Optional background processes include Archiver (ARCn), Recoverer (RECO), and others. Figure 2 illustrates the relationship between the components of an instance.

file2

Figure 2. The instance components

System Global Area

The SGA is the primary memory structure. When Oracle DBAs talk about memory, they usually mean the SGA. This area is broken into several parts: the Buffer Cache, the Shared Pool, the Redo Log Buffer, the Large Pool, and the Java Pool.
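As a quick way to see how the SGA is currently carved up, the component sizes can be queried from the data dictionary (a sketch; v$sgainfo is available from 10g onward):

```sql
select name, round(bytes/1024/1024) as size_mb
from v$sgainfo
order by bytes desc;
```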

Buffer Cache

The buffer cache stores copies of data blocks retrieved from the datafiles. That is, when a user retrieves data from the database, the data is cached in the buffer cache. Its size can be set via the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.

Shared Pool

The shared pool is broken into two smaller memory areas: the Library Cache and the Dictionary Cache. The library cache stores information about the most commonly used SQL and PL/SQL statements and is managed by a least recently used (LRU) algorithm; it also enables those statements to be shared among users. The dictionary cache, on the other hand, stores information about object definitions in the database, such as columns, tables, indexes, users, and privileges.

The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora initialization parameter file.

Redo Log Buffer

Each DML statement (insert, update, and delete) executed by a user generates a redo entry. What is a redo entry? It is a record of the data changes made by users. Redo entries are stored in the redo log buffer before they are written to the redo log files. To size the redo log buffer, use the LOG_BUFFER parameter in the init.ora initialization parameter file.

Large Pool

The large pool is an optional area of memory in the SGA. It relieves the burden placed on the shared pool and is also used for I/O processes. The large pool size can be set by the LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.

Java Pool

As its name suggests, the Java pool services the parsing of Java commands. Its size can be set by the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.

Program Global Area

Although the result of SQL statement parsing is stored in the library cache, the values of bind variables are stored in the PGA. Why? Because they must be private, not shared among users. The PGA is also used as the sort area.

Software Code Area

The software code area is a location in memory where the Oracle application software itself resides.

Oracle processes

There are two categories of processes that run with an Oracle database:

  • User processes
  • System processes

The following figure illustrates the relationship between user processes, server processes, PGA, and session:

file3

The first interaction with an Oracle-based application comes from the user's computer, which creates a user process. The user process then communicates with a server process on the host computer. Here, the PGA is used to store session-specific information.

Oracle Background Processes

The Oracle background processes are the processes behind the scenes that work together with the memory structures.

DBWn

The database writer (DBWn) process writes data from the buffer cache to the datafiles. Historically, the database writer was named DBWR, but since some Oracle versions allow more than one database writer, the name was changed to DBWn, where n is a number from 0 to 9.

LGWR

The log writer (LGWR) process is similar to DBWn: it writes the redo entries from the redo log buffer to the redo log files.

CKPT

The checkpoint (CKPT) process signals DBWn to write the data in the buffer cache to the datafiles. It also updates the datafile and control file headers when a log switch occurs.

SMON

The System Monitor (SMON) process recovers from a system crash or instance failure by applying the entries in the redo log files to the datafiles.

PMON

The Process Monitor (PMON) process cleans up after failed processes by rolling back their transactions and releasing the resources they held.

ARCH

The ARCH background process is invoked when your database is running in ARCHIVELOG mode. If you are archiving your redo logs, the redo logs are touched by several background processes. First, the LGWR process copies the log_buffer contents to the online redo log files, and then the ARCH process copies the online redo log files to the archived redo log filesystem on UNIX. The ARCH process commonly offloads the most recent online redo log file whenever a log switch operation occurs in Oracle.

file4

Figure 4 shows the various components of the SGA, the Oracle background processes, and their interactions with the control files, datafiles, redo log files, and archived redo logs.

Database

The database refers to the disk resources and is broken into two main structures: logical structures and physical structures.

Logical Structures

An Oracle database is divided into smaller logical units to manage, store, and retrieve data efficiently. The logical units are the tablespace, segment, extent, and data block. Figure 5 illustrates the relationships between these units.

file5

Figure 5. The relationships between the Oracle logical structures

Tablespace

A tablespace is a logical grouping of database objects. A database must have one or more tablespaces. In Figure 5, there are three tablespaces: the SYSTEM tablespace, Tablespace 1, and Tablespace 2. A tablespace is composed of one or more datafiles.

There are three types of tablespaces in Oracle:

  • Permanent tablespaces
  • Undo tablespaces
  • Temporary tablespaces

Segment

A tablespace is further broken into segments. A segment stores one type of object: every table in the database is stored in its own segment (called a data segment) and every index is likewise stored in its own segment (called an index segment). The other segment types are the temporary segment and the rollback segment.
A segment is a container for an object (such as a table, index, or cluster) and consists of extents.

There are 11 types of segments in Oracle 10g:

  1. Table
  2. Table Partition
  3. Index
  4. Index Partition
  5. Cluster
  6. Rollback
  7. Deferred Rollback
  8. Temporary
  9. Cache
  10. Lobsegment
  11. Lobindex

Extent
A segment is further broken into extents. An extent consists of one or more data blocks. When a database object grows, a new extent is allocated to it. Unlike a tablespace or a segment, an extent cannot be named. Disk space for an object is allocated in extents.

Data Block

A data block is the smallest unit of storage in an Oracle database. The data block size is a specific number of bytes, fixed for a tablespace, so every block within that tablespace has the same size.

Physical Structures

The physical structures are the parts of an Oracle database (the disk files) that are not directly manipulated by users. They consist of the datafiles, redo log files, and control files.

Datafiles

A datafile is a file that corresponds to a tablespace. One datafile can belong to only one tablespace, but one tablespace can have more than one datafile. An Oracle database consists of a number of these physical files, called datafiles.

Redo Log Files

A redo log is a file that is part of an Oracle database. When a transaction is committed, the transaction's details in the redo log buffer are written to a redo log file. These files contain information that helps in recovery in the event of a system failure.

file6
Figure 6 shows three redo log groups. Each group consists of two members: the first member of each group is stored in directory D1 and the second member in directory D2.

Control Files

Control files store information about the physical structure of the database. The control file is absolutely crucial to database operations. It contains the following types of information:

  1. Database information
  2. Archive log history
  3. Tablespace and datafile records
  4. Redo threads
  5. Database creation date
  6. Database name
  7. Current archive information
  8. Log records
  9. Database ID, which is unique to each database

 

Backup Fails Because of Control File Enqueue: Scenario

In this scenario, a backup job fails because RMAN cannot make a snapshot control file. The message stack is as follows:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 25/05/2014 22:48:44
ORA-00230: operation disallowed: snapshot control file enqueue unavailable

Backup Fails Because of Control File Enqueue: Diagnosis

When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file. If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:

waiting for snapshot control file enqueue

Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue. RMAN makes up to five attempts to get the enqueue and then fails the job. The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.

To determine which job is holding the conflicting enqueue:

After you see the first message stating "RMAN-08512: waiting for snapshot control file enqueue", start a new SQL*Plus session on the target database:

% sqlplus 'SYS/oracle@trgt AS SYSDBA'

Execute the following query to determine which job is causing the wait:

SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE,
       ACTION, LOGON_TIME "Logon", l.*
FROM V$SESSION s, V$ENQUEUE_LOCK l
WHERE l.SID = s.SID
AND l.TYPE = 'CF'
AND l.ID1 = 0
AND l.ID2 = 2;

You should see output similar to the following (the output in this example has been truncated):

SID User Program              Module                          Action  Logon
--- ---- -------------------- ------------------------------- ------- ---------
  9 SYS  rman@h13 (TNS V1-V3) backup full datafile: c10000210 STARTED 25-May-2014

Backup Fails Because of Control File Enqueue: Solution
Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for new tape to be inserted. If you start a new job in this situation, then you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.

After you have determined which job is creating the enqueue, you can do one of the following:

Wait until the job holding the enqueue completes

Cancel the current job and restart it after the job holding the enqueue completes

Cancel the job creating the enqueue

Below are scripts that we use in our environment before importing data with impdp, in order to speed up the process by disabling constraints and triggers.

CONSTRAINTS DISABLE
-------------------

SQL> set linesize 300

SQL> spool constraints_disable.sql

SQL> select 'alter table ' || owner || '.' || table_name || ' disable constraint ' || constraint_name || ';' from all_constraints where owner='TEST_SCHEMA';
CONSTRAINTS ENABLE
------------------

SQL> set linesize 300

SQL> spool constraints_enable.sql

SQL> select 'alter table ' || owner || '.' || table_name || ' enable constraint ' || constraint_name || ';' from all_constraints where owner='TEST_SCHEMA';

Trigger Disable
---------------

SQL> set linesize 300

SQL> spool trigger_disable.sql

SQL> select 'alter trigger ' || owner || '.' || trigger_name || ' disable;' from all_triggers where owner='TEST_SCHEMA';
Trigger Enable
--------------

SQL> set linesize 300

SQL> spool trigger_enable.sql

SQL> select 'alter trigger ' || owner || '.' || trigger_name || ' enable;' from all_triggers where owner='TEST_SCHEMA';
UNLOCK TABLE STATISTICS
-----------------------

SQL> set linesize 300

SQL> spool unlock_statistics.sql

SQL> select 'exec DBMS_STATS.UNLOCK_TABLE_STATS('''||owner||''','''||table_name||''');' from dba_tab_statistics where stattype_locked is not null and owner in ('TEST_SCHEMA');
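A sketch that combines the constraint and trigger scripts into one pre-import step (the schema name is a placeholder; review the spooled file before running it):

```sql
set linesize 300 pagesize 0 feedback off heading off
spool pre_import_disable.sql
select 'alter table ' || owner || '.' || table_name || ' disable constraint ' || constraint_name || ';'
  from all_constraints where owner = 'TEST_SCHEMA';
select 'alter trigger ' || owner || '.' || trigger_name || ' disable;'
  from all_triggers where owner = 'TEST_SCHEMA';
spool off
@pre_import_disable.sql
```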

Sometimes, even after we kill a session using the IMMEDIATE keyword, it stays in the "marked for kill" state:

SQL> alter system kill session '374,1005' immediate;

ORA-00031: session marked for kill

In this case, get the OS process ID (SPID) corresponding to the SID and kill it at the OS level using kill -9.

-- Use the query below to get the SPID --

select s.sid, p.spid
from v$process p, v$session s
where p.addr = s.paddr
and s.sid in (374)
order by s.sid;

Then kill the corresponding SPID at the OS level with kill -9 <spid>.

To review the (statistics-based) row counts of a schema's tables:

select table_name, num_rows counter from dba_tables where owner='SCHEMA_NAME' order by table_name;

Once I wrongly deleted the spfile in our environment. Thankfully the database was up and running, so the spfile was not needed at that moment, and I restored it from the control file autobackup.

If you have configured control file autobackups, the SPFILE is backed up with the control file whenever an autobackup is taken.

If you want to restore the SPFILE from the autobackup, you must first set the DBID for your database and then use the RESTORE SPFILE FROM AUTOBACKUP command. The procedure is similar to restoring the control file from autobackup with the RESTORE CONTROLFILE FROM AUTOBACKUP command.

RMAN uses the autobackup format and DBID to hunt for control file autobackups, and if a control file autobackup is found, restores the SPFILE from that backup to its default location.

RMAN> SET DBID 1066917090;

RMAN> run {
2> allocate channel c1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/emsp_tdpo.opt)';
3> RESTORE SPFILE to '/u01/app/oracle/product/spfiletestdb.ora' FROM AUTOBACKUP;  ## if you do not specify a location, it restores to the default ($ORACLE_HOME/dbs)
4> release channel c1;
5> }

allocated channel: c1
channel c1: sid=65 devtype=SBT_TAPE
channel c1: Data Protection for Oracle: version 5.5.1.0

Starting restore at 25-JAN-2012 12:46:31

channel c1: looking for autobackup on day: 20120125
channel c1: autobackup found: c-849745015-20120125-01
channel c1: SPFILE restore from autobackup complete
Finished restore at 25-JAN-2012 12:50:01

released channel: c1

RMAN> exit