
Backup Fails Because of Control File Enqueue: Scenario

In this scenario, a backup job fails because RMAN cannot make a snapshot control file. The message stack is as follows:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 25/05/2014 22:48:44
ORA-00230: operation disallowed: snapshot control file enqueue unavailable
Backup Fails Because of Control File Enqueue: Diagnosis
When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file. If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:

waiting for snapshot control file enqueue

Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue. RMAN makes up to five attempts to get the enqueue and then fails the job. The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.

To determine which job is holding the conflicting enqueue:

After you see the first message stating "RMAN-08512: waiting for snapshot control file enqueue", start a new SQL*Plus session on the target database:

% sqlplus 'SYS/oracle@trgt AS SYSDBA'

Execute the following query to determine which job is causing the wait:

SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE,
       ACTION, LOGON_TIME "Logon", l.*
FROM   V$SESSION s, V$ENQUEUE_LOCK l
WHERE  l.SID = s.SID
AND    l.TYPE = 'CF'
AND    l.ID1 = 0
AND    l.ID2 = 2;

You should see output similar to the following (the output in this example has been truncated):

SID User Program              Module                          Action  Logon
--- ---- -------------------- ------------------------------- ------- -----------
  9 SYS  rman@h13 (TNS V1-V3) backup full datafile: c10000210 STARTED 25-May-2014
Backup Fails Because of Control File Enqueue: Solution
Commonly, enqueue situations occur when a job is writing to a tape drive, but the tape drive is waiting for a new tape to be inserted. If you start a new job in this situation, then you will probably receive the enqueue message because the first job cannot complete until the new tape is loaded.

After you have determined which job is creating the enqueue, you can do one of the following:

Wait until the job holding the enqueue completes

Cancel the current job and restart it after the job holding the enqueue completes

Cancel the job creating the enqueue

Sometimes, even after you kill the session with the IMMEDIATE keyword, it remains in the "marked for kill" state:

SQL> ALTER SYSTEM KILL SESSION '374,1005' IMMEDIATE;

ORA-00031: session marked for kill

In this case, get the OS process ID (SPID) corresponding to the SID and kill it at the OS level using kill -9.

-- Use the query below to get the SPID --

select s.sid,
       p.spid
from   v$process p, v$session s
where  p.addr = s.paddr
and    s.sid = 374
order by s.sid;

Then kill the corresponding SPID at the OS level: kill -9 <spid>.
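The kill step can be wrapped in a small shell helper (a sketch; `kill_spid` is a hypothetical name, and the SPID value is whatever the query above returned). It checks that the process actually exists before sending kill -9:

```shell
#!/bin/sh
# kill_spid: kill a server process by its OS process ID (SPID),
# after checking that the process actually exists.
kill_spid() {
  spid=$1
  if kill -0 "$spid" 2>/dev/null; then
    # Show what we are about to kill, then kill it.
    ps -p "$spid" -o pid=,comm=
    kill -9 "$spid"
  else
    echo "no such process: $spid" >&2
    return 1
  fi
}
```

Run it as the oracle OS user, e.g. `kill_spid 12345`, substituting the SPID returned by the query.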

Once I accidentally deleted the SPFILE in our environment. Fortunately the database was up and running, so the SPFILE was not needed at that moment. I then restored the SPFILE from the control file autobackup.

If you have configured control file autobackups, the SPFILE is backed up with the control file whenever an autobackup is taken.

If you want to restore the SPFILE from the autobackup, you must first set the DBID for your database, and then use the RESTORE SPFILE FROM AUTOBACKUP command. The procedure is similar to restoring the control file from autobackup.

RMAN uses the autobackup format and DBID to search for control file autobackups, and if a control file autobackup is found, restores the SPFILE from that backup to the specified (or default) location.

RMAN> SET DBID 1066917090;

RMAN> run {
2> allocate channel c1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/emsp_tdpo.opt)';
3> RESTORE SPFILE TO '/u01/app/oracle/product/spfiletestdb.ora' FROM AUTOBACKUP;
4> release channel c1;
5> }

If you do not specify a location, the SPFILE is restored to its default location under $ORACLE_HOME.

allocated channel: c1
channel c1: sid=65 devtype=SBT_TAPE
channel c1: Data Protection for Oracle: version 5.5.1.0

Starting restore at 25-JAN-2012 12:46:31

channel c1: looking for autobackup on day: 20120125
channel c1: autobackup found: c-849745015-20120125-01
channel c1: SPFILE restore from autobackup complete
Finished restore at 25-JAN-2012 12:50:01

released channel: c1

RMAN> exit

The following query displays the database character set, language and territory:

select DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
                         'NLS_LANGUAGE', 'LANGUAGE',
                         'NLS_TERRITORY', 'TERRITORY') name,
       value
from   v$nls_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY')
/

Recently I faced a problem with the undo tablespace on a production database. The database size is 2 TB, the undo tablespace was growing very large, and there were space constraints at the mount-point level. So I came up with the idea below, implemented it first in the test environment, and it worked as I hoped. I then implemented the same in production.

How to switch to a new UNDO tablespace and drop the old one on an Oracle database (version 10g or later, since UNDO_MANAGEMENT must be AUTO).

STEP -1
=======

$ sqlplus / as sysdba

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

The current undo tablespace, as indicated by the initialization parameter undo_tablespace, is UNDOTBS1.

STEP -2

=======

-- Create a new undo tablespace
CREATE UNDO TABLESPACE undotbs2
DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
SIZE 50M AUTOEXTEND ON NEXT 5M;

Tablespace created.

STEP -3

=======

-- Switch the database to the new UNDO tablespace.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

System altered.

STEP -4

=======

-- Try to drop the old tablespace; it fails.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

With ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2, the database UNDO tablespace is changed and the undo data of any new transaction goes to the new tablespace, i.e. UNDOTBS2. But the undo data of transactions that were already pending (e.g. a transaction initiated before the UNDO tablespace switch) is still in the old tablespace, with a status of PENDING OFFLINE. As long as it is there, you cannot drop the old tablespace.

STEP -5

=======

-- The query below shows the name of each UNDO segment in the UNDOTBS1 tablespace and its status. After that, we will see which users/sessions are running the pending transactions.

set lines 10000
column name format a10

SELECT a.name, b.status
FROM   v$rollname a, v$rollstat b
WHERE  a.usn = b.usn
AND    a.name IN (
       SELECT segment_name
       FROM   dba_segments
       WHERE  tablespace_name = 'UNDOTBS1'
);

NAME       STATUS
---------- ---------------
_SYSSMU8$  PENDING OFFLINE

column username format a6

SELECT a.name, b.status, d.username, d.sid, d.serial#
FROM   v$rollname a, v$rollstat b, v$transaction c, v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
       SELECT segment_name
       FROM   dba_segments
       WHERE  tablespace_name = 'UNDOTBS1'
);

NAME       STATUS          USERNA        SID    SERIAL#
---------- --------------- ------ ---------- ----------
_SYSSMU8$  PENDING OFFLINE SCOTT         147          4

So it is SCOTT, with SID=147 and SERIAL#=4. Since we now know the user, we can ask him/her to end the transaction gracefully, i.e. issue a ROLLBACK or COMMIT. If that is not possible (say the user initiated the transaction and left for annual leave 🙂 and trust me, this happens), you may go ahead and kill the session to release the undo segments in the UNDOTBS1 tablespace. (Don't kill the session in a production environment; wait for the undo retention period instead.)

SQL> alter system kill session '147,4';

System altered.

SELECT a.name, b.status, d.username, d.sid, d.serial#
FROM   v$rollname a, v$rollstat b, v$transaction c, v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
       SELECT segment_name
       FROM   dba_segments
       WHERE  tablespace_name = 'UNDOTBS1'
);

no rows selected

As we can see, once the session is killed, no segments in the UNDOTBS1 tablespace are occupied any more.

Let's drop UNDOTBS1.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

If undo data is being retained, you still won't be able to drop the tablespace, because the undo is kept for the UNDO_RETENTION period. Let the UNDO_RETENTION time pass and then try to drop the tablespace again. In my case it is 900 seconds, i.e. 15 minutes.

-- After 15 minutes.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.
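The "wait, then try again" step above can be generalized as a small retry helper in shell (a sketch; `retry` is a hypothetical wrapper, and in practice the command it runs would be a sqlplus call issuing the DROP TABLESPACE):

```shell
#!/bin/sh
# retry: run a command until it succeeds or attempts run out.
# Usage: retry <max_attempts> <delay_seconds> <command> [args...]
retry() {
  max=$1; delay=$2; shift 2
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}

# Example: with UNDO_RETENTION=900, wait 5 minutes between tries:
#   retry 4 300 run_drop_tablespace_sql   # hypothetical wrapper script
```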

Oracle: How to kill Data Pump jobs (the example below is for import (impdp); the same applies to export (expdp))

When you import or export using the Data Pump impdp or expdp tools, the work is done by a job. You also have the option of naming the job with the JOB_NAME parameter.

The following SQL gives you the list of Data Pump jobs:

select * from dba_datapump_jobs;

If you want to kill your impdp or expdp job:

1) Make sure that your impdp/expdp command prompt window is active

2) Press Control-C; it will pause the job. Don't press Control-C again or close the command prompt. That would just close the window, and the job would still be running in the background

3) Type Kill_Job

ex: Import>kill_job

Are you sure you wish to stop this job (y/n): y

 

If, by mistake, you closed the window and your import/export job is still running:

1) Get the name of the job using

select * from dba_datapump_jobs;

2) Open a new command prompt window. If you want to kill your import job, type

impdp username/password@database attach=name_of_the_job   (get name_of_the_job from the query above)

3) Once you are attached to job, Type Kill_Job

ex: Import>kill_job

Are you sure you wish to stop this job (y/n): y

And your job is killed; it will no longer show up in dba_datapump_jobs.

Activating the Standby Database – Switchover Method

The main advantage of a graceful switchover is that it avoids the resetlogs operation. By avoiding the resetlogs operation, the source database can resume its role as the standby database almost immediately, with no data loss. Another benefit of a graceful switchover is that it does not invalidate previous backups.

1. Prerequisites :-

Ensure that no archived logs that have not yet been applied to the standby database have been lost.

2. Set the job_queue_processes value to 0 on both the PRIMARY and the STANDBY (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     10

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0 scope=both;

3. In the PRIMARY database, check the database role. (PROD server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS    SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- -------------------- ----------- ---------- --------------------
DBNAME    PRIMARY          NONE    SESSIONS ACTIVE       4106602309 READ WRITE MAXIMUM PERFORMANCE

4. In the STANDBY database, check the database role. (DR server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS    SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- -------------------- ----------- ---------- --------------------
DBNAME    PHYSICAL STANDBY NONE    SESSIONS ACTIVE       4106602309 MOUNTED    MAXIMUM PERFORMANCE

5. Shut down the PRIMARY database. (PROD server)

SQL> shutdown immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

6. Open the PRIMARY database in RESTRICTED mode. (PROD server)

SQL> startup restrict

ORACLE instance started.

Total System Global Area 252777660 bytes
Fixed Size 451772 bytes
Variable Size 218103808 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

7. Archive the current log on the PRIMARY database. (PROD server)

SQL> alter system archive log current;

System altered.

8. Make sure the primary database and standby database are in sync. On both the primary and standby instances, issue the following. (PROD & DR)

SQL> select thread#, max(sequence#) from v$archived_log where APPLIED='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 1934

Now compare the results and make sure the thread and sequence numbers are the same. If the values match, or the standby is behind by at most one sequence, you are in sync.

9. Initiate the switchover on the PRIMARY database. (PROD)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

10. Once the step above has completed, log on to the STANDBY database and issue the following command. (DR)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

11. Immediately return to the FORMER PRIMARY database and issue a shutdown and mount the NEW STANDBY database. (PROD server)

SQL> shutdown immediate
SQL> startup mount;

12. On the NEW PRIMARY/OLD STANDBY, you can now open the database as the PRIMARY database. (DR server)

SQL> alter database open;

Database opened.

13. Verify the new STANDBY’S status. (PROD server)

SQL> select name, database_role from v$database;

NAME      DATABASE_ROLE
--------- ----------------
DBNAME    PHYSICAL STANDBY

14. Set the job_queue_processes value back to 10 on both the PRIMARY and the STANDBY (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     0

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10 scope=both;

15. Put the NEW STANDBY/FORMER PRIMARY database into managed recovery mode. (PROD)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

16. Test the communications for archive by performing a log switch. (DR server)

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

Now check whether these logs are being applied on the NEW STANDBY, and also check the listener status.

17. On the NEW PRIMARY database instance, create the temporary tablespace. (DR server)

SQL> Create temporary tablespace temp tempfile '/u01/dbname/oradata/temp01.dbf' size 5120m;

Tablespace created.

18. On the NEW PRIMARY database instance, take a backup if possible. (DR server)

 

Switch Back to the Original Primary Database – Switchover Method

1. Prerequisites :-

Ensure that no archived logs that have not yet been applied to the standby database have been lost.

2. Set the job_queue_processes value to 0 on both the PRIMARY and the STANDBY (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     10

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0 scope=both;

3. In the new PRIMARY database, check the database role. (DR server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS    SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- -------------------- ----------- ---------- --------------------
DBNAME    PRIMARY          NONE    SESSIONS ACTIVE       4106602309 READ WRITE MAXIMUM PERFORMANCE

4. In the new STANDBY database, check the database role. (PROD server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS    SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- -------------------- ----------- ---------- --------------------
DBNAME    PHYSICAL STANDBY NONE    SESSIONS ACTIVE       4106602309 MOUNTED    MAXIMUM PERFORMANCE

5. Shut down the new PRIMARY database. (DR server)

SQL> shutdown immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

6. Open the New PRIMARY database in RESTRICTED mode. (DR server)

SQL> startup restrict

ORACLE instance started.

Total System Global Area 252777660 bytes
Fixed Size 451772 bytes
Variable Size 218103808 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

7. Archive the current log on the New PRIMARY database. (DR server)

SQL> alter system archive log current;

System altered.

8. Make sure the primary database and standby database are in sync. On both the primary and standby instances, issue the following. (PROD & DR)

SQL> select thread#, max(sequence#) from v$archived_log where APPLIED='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 1934

Now compare the results and make sure the thread and sequence numbers are the same. If the values match, or the standby is behind by at most one sequence, you are in sync.

9. Initiate the switchover on the New PRIMARY database. (DR)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

10. Once the step above has completed, log on to the new STANDBY database/original PRIMARY and issue the following command. (PROD)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

11. Immediately return to the FORMER PRIMARY database/original STANDBY, issue a shutdown, and mount the original STANDBY database. (DR server)

SQL> shutdown immediate
SQL> startup mount;

12. On the original PRIMARY/old STANDBY, you can now open the database as the PRIMARY database. (PROD server)

SQL> alter database open;

Database opened.

13. Verify the original STANDBY's status. (DR server)

SQL> select name, database_role from v$database;

NAME      DATABASE_ROLE
--------- ----------------
DBNAME    PHYSICAL STANDBY

14. Set the job_queue_processes value back to 10 on both the PRIMARY and the STANDBY (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     0

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10 scope=both;

15. Put the original STANDBY/FORMER PRIMARY database into managed recovery mode. (DR)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

16. Test the communications for archive by performing a log switch. (PROD server)

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

Now check whether these logs are being applied on the original STANDBY, and also check the listener status.

17. On the original PRIMARY database instance, take a backup if possible. (PROD server)

 

To list RMAN backup job history:

col STATUS format a9
col hrs format 999.99
select
  SESSION_KEY, INPUT_TYPE, STATUS,
  to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
  to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
  elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

To get the current status of a running RMAN job:

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
       ROUND(SOFAR/TOTALWORK*100, 2) "% COMPLETE"
FROM   V$SESSION_LONGOPS
WHERE  OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%'
AND    TOTALWORK != 0 AND SOFAR <> TOTALWORK;
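The "% COMPLETE" column is just SOFAR/TOTALWORK scaled to 100. As a quick sanity check, here is the same arithmetic in shell (the function name is made up for illustration):

```shell
#!/bin/sh
# pct_complete: compute percentage done, rounded to two decimals,
# mirroring ROUND(SOFAR/TOTALWORK*100, 2) in the query above.
pct_complete() {
  awk -v sofar="$1" -v total="$2" 'BEGIN { printf "%.2f\n", 100 * sofar / total }'
}

pct_complete 4500 18000   # prints 25.00
```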

1. Identify and copy the database files

With the source database started, identify all of the database's files. The following query will display all datafiles, tempfiles and redo logs:

 

set lines 100 pages 999
col name format a50

select name, bytes
from ( select name, bytes
       from   v$datafile
       union all
       select name, bytes
       from   v$tempfile
       union all
       select lf.member "name", l.bytes
       from   v$logfile lf, v$log l
       where  lf.group# = l.group# ) used,
     ( select sum(bytes) as free_bytes
       from   dba_free_space ) free
/

 

OR

SQL> select name from v$datafile;

SQL> select member from v$logfile;

Make sure that the clone database's file system is large enough and has all the necessary directories.

If the source database has a complex file structure, you might want to consider modifying the above SQL to produce a file-copy script.
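For example, if you spool just the file names from the query into a text file, a few lines of shell can turn it into a copy script (a sketch; the target host and directory are placeholders):

```shell
#!/bin/sh
# make_copy_script: read datafile paths (one per line) on stdin and
# emit one scp command per file. TARGET is a placeholder destination.
TARGET="clonehost:/u01/clonedb/oradata"

make_copy_script() {
  while IFS= read -r file; do
    if [ -n "$file" ]; then
      printf 'scp %s %s/\n' "$file" "$TARGET"
    fi
  done
}

# Example input; in practice this would be the spooled query output:
printf '%s\n' /u01/oradata/system01.dbf /u01/oradata/undotbs01.dbf |
  make_copy_script
```

Redirect the output to a file, review it, and run it once you are happy with it.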

Stop the source database with:

SQL>shutdown immediate

Copy, scp or ftp the files from the source database/machine to the target.

Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

Start the source database up again:

SQL> startup

 

2. Produce a pfile for the new database

This step assumes that you are using a spfile. If you are not, just copy the existing pfile.

 

From sqlplus:

SQL> create pfile='init<new database sid>.ora' from spfile;

This will create a new pfile in the $ORACLE_HOME/dbs directory.

Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary.

Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine, you might want to consider reducing some values.

Now open the parameter file for the clone database and change the following parameters to point to their new locations:

CONTROL_FILES

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

CORE_DUMP_DEST

LOG_ARCHIVE_DEST_1

Then place the clone DB's (new DB's) pfile in $ORACLE_HOME/dbs.

Note: pay particular attention to the control file locations.

  

3. On the original database, generate the CREATE CONTROLFILE statement by typing the following command.

SQL> alter database backup controlfile to trace;

This will create a trace file containing the “CREATE CONTROLFILE” command to recreate the controlfile in text form.

4. Now go to the USER_DUMP_DEST directory on the original database server and open the latest trace file. The trace file will have the form ora_NNNN.trc, with NNNN being a number. It contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a text editor.

Edit the file:

FROM: CREATE CONTROLFILE REUSE DATABASE "olddbname" RESETLOGS ...
TO:   CREATE CONTROLFILE SET DATABASE "newdbname" RESETLOGS ...

Change the word REUSE to SET and olddbname to newdbname. Also change the datafile locations to the clone database's locations.
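The two edits can also be scripted with sed (a sketch; olddbname and newdbname are placeholders for your actual database names):

```shell
#!/bin/sh
# rewrite_controlfile: change REUSE to SET and the old DB name to the
# new one on the CREATE CONTROLFILE line read from stdin.
OLD_DB="olddbname"
NEW_DB="newdbname"

rewrite_controlfile() {
  sed -e 's/CREATE CONTROLFILE REUSE DATABASE/CREATE CONTROLFILE SET DATABASE/' \
      -e "s/\"$OLD_DB\"/\"$NEW_DB\"/"
}

echo 'CREATE CONTROLFILE REUSE DATABASE "olddbname" RESETLOGS ARCHIVELOG' |
  rewrite_controlfile
```

The example prints the line with SET and "newdbname" substituted; run the whole trace-derived script through it, then fix the datafile paths by hand or with further sed expressions.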

5. Create the necessary directory on the clone database (destination database) server on your desired location.

 

Example :- mkdir udump adump cdump bdump arch

 

udump – user dump destination

bdump – background dump destination

adump – audit dump destination

cdump – core dump destination

arch – Archive log destination
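The five directories can be created in one loop (a sketch; the base path is a placeholder, so substitute your own admin directory):

```shell
#!/bin/sh
# Create the dump and archive directories for the clone database.
# BASE is a placeholder; point it at your admin/archive location.
BASE="${BASE:-./clonedb_admin}"

for d in udump bdump adump cdump arch; do
  mkdir -p "$BASE/$d"
done

ls "$BASE"
```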

6. Now copy the pfile from the original database server to the clone database server and place it under $ORACLE_HOME/dbs. Open the parameter file for the clone database and change the following parameters to point to their new locations:

CONTROL_FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1

7. On the clone database server, export the ORACLE_SID environment variable and start the instance:

$ export ORACLE_SID=<clone DB name>

$ sqlplus

Enter user-name: / as sysdba

SQL> startup nomount pfile='initNEWDB_NAME.ora';

8. Run the create controlfile script to create the controlfile.

SQL> @createcontrolfile.sql

9. Troubleshooting

It is quite common to run into problems at this stage. Here are a couple of common errors and solutions:

  • ORA-01113: file 1 needs media recovery

You probably forgot to stop the source database before copying the files. Go back to step 1 and recopy the files.

  • ORA-01503: CREATE CONTROLFILE failed
    ORA-00200: controlfile could not be created
    ORA-00202: controlfile: '/u03/oradata/dg9a/control01.ctl'
    ORA-27038: skgfrcre: file exists

Double-check the pfile created in step 2. Make sure the control_files setting points at the correct location. If the control_files setting is ok, make sure that the control files were not copied with the rest of the database files. If they were, delete or rename them.

 

10. Open the database

SQL>alter database open;

11. Perform a few checks

If the last step went smoothly, the database should be open.

It is advisable to perform a few checks at this point:

Check that the database has opened with:

SQL> select status from v$instance;

The status should be 'OPEN'.

Make sure that the datafiles are all ok:

SQL> select distinct status from v$datafile;

It should return only ONLINE and SYSTEM.

Take a quick look at the alert log too.

 

12. Set the database's global name

The new database will still have the source database's global name. Run the following to reset it:

SQL> alter database rename global_name to <new database sid>
/

13. Create a spfile

From sqlplus:

SQL> create spfile from pfile;

14. Change the database ID

If RMAN is going to be used to back up the database, the database ID must be changed.

If RMAN isn't going to be used, there is no harm in changing the ID anyway; it is good practice to do so.

From sqlplus:

SQL> shutdown immediate

SQL> startup mount

exit

From unix:

$ nid target=/

NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open resetlogs

/

15. Configure TNS

Add entries for the new database in listener.ora and tnsnames.ora as necessary.
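As a sketch, a minimal tnsnames.ora entry for the new database might look like this (NEWDB, clonehost and the port are placeholders for your own values):

```
NEWDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clonehost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = NEWDB)
    )
  )
```

A matching entry in listener.ora (or dynamic registration once the instance is up) makes the database reachable; reload the listener with lsnrctl reload after editing.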

16. Finished

That’s it!