Archive for the ‘Step by Step My Oracle Guide’ Category

              Recently I faced a problem with the undo tablespace on a production database. The database size is 2TB and the undo tablespace was growing very large; there were also space constraints at the mountpoint level. So I came up with the idea below, implemented it first in the test environment, and it worked out as I hoped. Then I implemented the same in production.

How to switch to a new UNDO tablespace and drop the old one on an Oracle database (10g and later, where undo_management is AUTO).

STEP -1

=======

$ sqlplus / as sysdba

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

SQL>

The current undo tablespace, as indicated by the initialization parameter undo_tablespace, is UNDOTBS1.

STEP -2

=======

— Create a new undo tablespace

CREATE UNDO TABLESPACE undotbs2
DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
SIZE 50M AUTOEXTEND ON NEXT 5M;

Tablespace created.

STEP -3

=======

— Switch the database to the new UNDO tablespace.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

System altered.

STEP -4

=======

— Try to drop the old tablespace; it fails.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES

*

ERROR at line 1:

ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

With ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2, the database undo tablespace is changed and the undo data of any new transaction goes to the new tablespace, UNDOTBS2. But the undo data of transactions that were already pending (e.g. a transaction someone initiated before the switch) is still in the old tablespace, in undo segments with a status of PENDING OFFLINE. As long as those segments exist, you cannot drop the old tablespace.

STEP -5

=======

— The query below shows the name of each undo segment in the UNDOTBS1 tablespace and its status.

Now let's see which users/sessions are running these pending transactions.

set lines 10000

column name format a10

SELECT a.name,b.status

FROM v$rollname a,v$rollstat b

WHERE a.usn = b.usn

AND a.name IN (

SELECT segment_name

FROM dba_segments

WHERE tablespace_name = 'UNDOTBS1'

);

NAME       STATUS
---------- ---------------
_SYSSMU8$  PENDING OFFLINE

column username format a6

SELECT a.name,b.status , d.username , d.sid , d.serial#

FROM v$rollname a,v$rollstat b, v$transaction c , v$session d

WHERE a.usn = b.usn

AND a.usn = c.xidusn

AND c.ses_addr = d.saddr

AND a.name IN (

SELECT segment_name

FROM dba_segments

WHERE tablespace_name = 'UNDOTBS1'

);

NAME       STATUS          USERNA SID        SERIAL#
---------- --------------- ------ ---------- ----------
_SYSSMU8$  PENDING OFFLINE SCOTT  147        4

So it is SCOTT, with SID=147 and SERIAL#=4. Now that we know the user, we can ask him/her to end the transaction gracefully, i.e. issue a ROLLBACK or COMMIT. If that is not possible (say the user initiated the transaction and then left for annual leave 🙂 and trust me, this happens) you may go ahead and kill the session to release the undo segments in the UNDOTBS1 tablespace. (Be careful about killing sessions in a production environment; prefer to wait out the undo retention period.)

SQL> alter system kill session '147,4';

System altered.

SELECT a.name,b.status , d.username , d.sid , d.serial#

FROM v$rollname a,v$rollstat b, v$transaction c , v$session d

WHERE a.usn = b.usn

AND a.usn = c.xidusn

AND c.ses_addr = d.saddr

AND a.name IN (

SELECT segment_name

FROM dba_segments

WHERE tablespace_name = 'UNDOTBS1'

);

no rows selected

As we can see, once the session is killed we no longer see any segments occupied in the UNDOTBS1 tablespace.

Let's drop UNDOTBS1.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES

*

ERROR at line 1:

ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

If undo data is being retained you still won't be able to drop the tablespace, because it is held in use by UNDO_RETENTION. Let the UNDO_RETENTION time pass and then try to drop the tablespace again. In my case it is 900 seconds, i.e. 15 minutes.
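Before retrying the drop, you can also check the undo extents directly. This is a supplementary check, not part of the original steps; DBA_UNDO_EXTENTS is a standard dictionary view:

```sql
-- Undo extents in the old tablespace, grouped by status.
-- The DROP should succeed once no ACTIVE (and ideally no UNEXPIRED)
-- extents remain in UNDOTBS1.
SELECT status, COUNT(*) AS extents
FROM   dba_undo_extents
WHERE  tablespace_name = 'UNDOTBS1'
GROUP  BY status;
```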

— After 15 minutes.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

Oracle: How to kill Data Pump jobs (the example below is for import (impdp); the same applies to export (expdp))

When you import or export using the Data Pump impdp or expdp tools, the import/export is done by a job. You also have the option to provide a job name using the JOB_NAME parameter.

The following SQL gives you the list of Data Pump jobs:

select * from dba_datapump_jobs;

If you want to kill your impdp or expdp job:

1) Make sure that your impdp/expdp command prompt window is active.

2) Press Control-C; it will pause the job. Don't press Control-C again or close the command prompt. That just closes the window, and the job keeps running in the background.

3) Type KILL_JOB

ex: Import> kill_job

Are you sure you wish to stop this job (y/n): y

 

If you closed the window by mistake and your import/export job is still running:

1) Get the name of the job using

select * from dba_datapump_jobs

2) Open a new command prompt window. If you want to kill your import job type

impdp username/password@database attach=name_of_the_job (get name_of_the_job from the query above)

3) Once you are attached to the job, type KILL_JOB

ex: Import> kill_job

Are you sure you wish to stop this job (y/n): y

And your job is killed; it will no longer show in dba_datapump_jobs.
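If a killed or abandoned job lingers, it can usually be spotted in the same view. A supplementary check, not in the original text; DBA_DATAPUMP_JOBS and its ATTACHED_SESSIONS column are standard:

```sql
-- Jobs stuck in NOT RUNNING state with no attached sessions are
-- candidates for cleanup; each keeps a master table (named after the
-- job, e.g. SYS_IMPORT_FULL_01) in the owning schema.
SELECT owner_name, job_name, state, attached_sessions
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';
```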

Activating the Standby Database – Switchover Method

The main advantage of a graceful switchover is that it avoids the resetlogs operation. By avoiding resetlogs, the source database can resume its role, now as the standby database, almost immediately and with no data loss. Another advantage of a graceful switchover is that it does not invalidate previous backups.

1. Prerequisites :-

Ensure there is no loss of archive logs that have not yet been applied to the standby database.

2. Set the job_queue_processes value to 0 in both databases (PRIMARY and STANDBY) (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     10

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0 SCOPE=BOTH;

3. In PRIMARY database check the database role. (PROD server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- ----------------- ----------- ---------- --------------------
DBNAME    PRIMARY          NONE    SESSIONS ACTIVE   4106602309  READ WRITE MAXIMUM PERFORMANCE

4. In STANDBY database check the database role. (DR server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- ----------------- ----------- ---------- --------------------
DBNAME    PHYSICAL STANDBY NONE    SESSIONS ACTIVE   4106602309  MOUNTED    MAXIMUM PERFORMANCE

5. Shutdown the PRIMARY database. (PROD server)

SQL> shutdown immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

6. Open the PRIMARY database in RESTRICTED mode. (PROD server)

SQL> startup restrict

ORACLE instance started.

Total System Global Area 252777660 bytes
Fixed Size 451772 bytes
Variable Size 218103808 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

7. Archive the current log on the PRIMARY database. (PROD server)

SQL> alter system archive log current;

System altered.

8. Make sure the primary database and standby database are in sync. On both the primary and standby instances, issue the following. (PROD & DR)

SQL> select thread#, max (sequence#) from v$archived_log where APPLIED='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 1934

Now compare the results and make sure the thread# and sequence# values match. If they differ by no more than one sequence, you are in sync.
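As an extra sanity check before the switchover (not part of the original steps; V$ARCHIVE_GAP is a standard view), you can ask the standby directly whether any sequence range is missing:

```sql
-- Run on the STANDBY. Any row returned indicates a gap of archive log
-- sequences that must be resolved before initiating the switchover.
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```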

9. Initiate the switchover on the PRIMARY database. (PROD)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

10. Once the step above has completed, log on to the STANDBY database and issue the following command. (DR)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

11. Immediately return to the former PRIMARY database, issue a shutdown, and mount it as the new STANDBY database. (PROD server)

SQL> shutdown immediate
SQL> startup mount;

12. On the NEW PRIMARY/OLD STANDBY, you can now open the database as the PRIMARY database. (DR server)

SQL> alter database open;

Database opened.

13. Verify the new STANDBY’S status. (PROD server)

SQL> select name, database_role from v$database;

NAME      DATABASE_ROLE
--------- ----------------
DBNAME    PHYSICAL STANDBY

14. Set the job_queue_processes value back to 10 in both databases (PRIMARY and STANDBY) (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     0

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10 SCOPE=BOTH;

15. Put the NEW STANDBY/FORMER PRIMARY database into managed recovery mode. (PROD)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

16. Test the communications for archive by performing a log switch. (DR server)

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

Now check whether these logs are being applied on the NEW STANDBY, and also check the listener status.

17. On the NEW PRIMARY database instance, create the temporary tablespace. (DR server)

SQL> create temporary tablespace temp tempfile '/u01/dbname/oradata/temp01.dbf' size 5120m;

Tablespace created.

18. On the NEW PRIMARY database instance, take a backup if possible. (DR server)

 

Switch Back to the Original Primary Database – Switchover Method

1. Prerequisites :-

Ensure there is no loss of archive logs that have not yet been applied to the standby database.

2. Set the job_queue_processes value to 0 in both databases (PRIMARY and STANDBY) (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     10

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0 SCOPE=BOTH;

3. In New PRIMARY database check the database role. (DR server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- ----------------- ----------- ---------- --------------------
DBNAME    PRIMARY          NONE    SESSIONS ACTIVE   4106602309  READ WRITE MAXIMUM PERFORMANCE

4. In New STANDBY database check the database role. (PROD server)

SQL> select NAME,DATABASE_ROLE,GUARD_STATUS,SWITCHOVER_STATUS, SWITCHOVER#,OPEN_MODE,PROTECTION_MODE from v$database;

NAME      DATABASE_ROLE    GUARD_S SWITCHOVER_STATUS SWITCHOVER# OPEN_MODE  PROTECTION_MODE
--------- ---------------- ------- ----------------- ----------- ---------- --------------------
DBNAME    PHYSICAL STANDBY NONE    SESSIONS ACTIVE   4106602309  MOUNTED    MAXIMUM PERFORMANCE

5. Shutdown the New PRIMARY database. (DR server)

SQL> shutdown immediate

Database closed.
Database dismounted.
ORACLE instance shut down.

6. Open the New PRIMARY database in RESTRICTED mode. (DR server)

SQL> startup restrict

ORACLE instance started.

Total System Global Area 252777660 bytes
Fixed Size 451772 bytes
Variable Size 218103808 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.

7. Archive the current log on the New PRIMARY database. (DR server)

SQL> alter system archive log current;

System altered.

8. Make sure the primary database and standby database are in sync. On both the primary and standby instances, issue the following. (PROD & DR)

SQL> select thread#, max (sequence#) from v$archived_log where APPLIED='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
———- ————–
1 1934

Now compare the results and make sure the thread# and sequence# values match. If they differ by no more than one sequence, you are in sync.

9. Initiate the switchover on the New PRIMARY database. (DR)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

10. Once the step above has completed, log on to the new STANDBY database (the original primary) and issue the following command. (PROD)

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

11. Immediately return to the former PRIMARY database (the original standby), issue a shutdown, and mount it as the standby again. (DR server)

SQL> shutdown immediate
SQL> startup mount;

12. On the original PRIMARY (the old standby), you can now open the database as the PRIMARY database. (PROD server)

SQL> alter database open;

Database opened.

13. Verify the original STANDBY's status. (DR server)

SQL> select name, database_role from v$database;

NAME      DATABASE_ROLE
--------- ----------------
DBNAME    PHYSICAL STANDBY

14. Set the job_queue_processes value back to 10 in both databases (PRIMARY and STANDBY) (PROD & DR).

SQL> SHOW PARAMETER JOB_QUEUE_PROCESSES

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
job_queue_processes                  integer     0

SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=10 SCOPE=BOTH;

15. Put the original STANDBY (the former primary) database into managed recovery mode. (DR)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

16. Test the communications for archive by performing a log switch. (PROD server)

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

Now check whether these logs are being applied on the original STANDBY, and also check the listener status.

17. On the original PRIMARY database instance, take a backup if possible. (PROD server)

 

Cloning an Oracle Database with Cold Backup

1. Identify and copy the database files

With the source database started, identify all of the database’s files. The following query will display all datafiles, tempfiles and redo logs:

 

set lines 100 pages 999

col name format a50

select name, bytes

from (select name, bytes

from v$datafile

union all

select name, bytes

from v$tempfile

union all

select lf.member "name", l.bytes

from v$logfile lf

, v$log l

where lf.group# = l.group#) used

, (select sum(bytes) as poo

from dba_free_space) free

/

 

OR

SQL>Select name from v$datafile;

SQL>Select member from v$logfile;

Make sure that the clone database's file system is large enough and has all the necessary directories.

If the source database has a complex file structure, you might want to consider modifying the above SQL to produce a file-copy script.
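For example, a small sketch that turns the file list into ready-to-run copy commands; the target directory /u03/clonedb/data here is hypothetical, so substitute your own:

```sql
-- Spool a copy script covering datafiles, tempfiles and redo logs.
set pages 0 lines 200 feedback off
spool copy_files.sh
select 'cp ' || name   || ' /u03/clonedb/data/' from v$datafile
union all
select 'cp ' || name   || ' /u03/clonedb/data/' from v$tempfile
union all
select 'cp ' || member || ' /u03/clonedb/data/' from v$logfile;
spool off
```

Review copy_files.sh before running it, and remember to exclude the control files as noted below.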

Stop the source database with:

SQL>shutdown immediate

Copy, scp or ftp the files from the source database/machine to the target.

Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

Start the source database up again

SQL> startup

 

2. Produce a pfile for the new database

This step assumes that you are using a spfile. If you are not, just copy the existing pfile.

 

From sqlplus:

SQL> create pfile='init<new database sid>.ora' from spfile;

This will create a new pfile in the $ORACLE_HOME/dbs directory.

Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary.

Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine you might want to consider reducing some values.

Now open the parameter file on the clone database and change the following parameters to the respective new locations.

CONTROL FILES

BACKGROUND_DUMP_DEST

USER_DUMP_DEST

CORE_DUMP_DEST

LOG_ARCHIVE_DEST_1

Place the clone DB (new DB) pfile in $ORACLE_HOME/dbs.

Note: pay particular attention to the control file locations.

  

3. In the original database, generate the CREATE CONTROLFILE statement by typing the following command.

SQL> alter database backup controlfile to trace;

This will create a trace file containing the "CREATE CONTROLFILE" command to recreate the controlfile in text form.

4. Now go to the USER_DUMP_DEST directory on the original database server and open the latest trace file. The trace file will have a name of the form ora_NNNN.trc, with NNNN being a number. This file contains instructions as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a text editor.

Edit the file:

FROM: CREATE CONTROLFILE REUSE DATABASE "olddbname" RESETLOGS …
TO:   CREATE CONTROLFILE SET DATABASE "newdbname" RESETLOGS …

Change the word REUSE to SET and olddbname to newdbname. Also change the datafile location parameters to the clone database locations.

5. Create the necessary directories on the clone database (destination database) server in your desired location.

 

Example :- mkdir udump adump cdump bdump arch

 

udump – user dump destination

bdump – background dump destination

adump – audit dump destination

cdump – core dump destination

arch – Archive log destination

6. Now copy the pfile from the original database server to the clone database server and place it under $ORACLE_HOME/dbs. Then open the parameter file on the clone database and change the following parameters to the respective new locations.

 

CONTROL FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1

7. On the clone database server, export the ORACLE_SID environment variable and start the instance.

$ export ORACLE_SID=<clone DB name>

$ sqlplus

Enter user-name: / as sysdba

SQL> startup nomount pfile='initNEWDB_NAME.ora';

8. Run the CREATE CONTROLFILE script to create the controlfile.

SQL> @createcontrolfile.sql

9. Troubleshooting:

It is quite common to run into problems at this stage. Here are a couple of common errors and solutions:

  • ORA-01113: file 1 needs media recovery

You probably forgot to stop the source database before copying the files. Go back to step 1 and recopy the files.

  • ORA-01503: CREATE CONTROLFILE failed

          ORA-00200: controlfile could not be created

          ORA-00202: controlfile: '/u03/oradata/dg9a/control01.ctl'

          ORA-27038: skgfrcre: file exists

Double check the pfile created in step 2. Make sure the control_files setting points at the correct location. If the control_files setting is OK, make sure that the control files were not copied with the rest of the database files. If they were, delete or rename them.

 

10. Open the database

SQL>alter database open;

11. Perform a few checks

If the last step went smoothly, the database should be open.

It is advisable to perform a few checks at this point:

Check that the database has opened with:

SQL> select status from v$instance;

The status should be ‘OPEN’

Make sure that the datafiles are all ok:

SQL> select distinct status from v$datafile;

It should return only ONLINE and SYSTEM.

Take a quick look at the alert log too.

 

12. Set the database's global name

The new database will still have the source database's global name. Run the following to reset it:

SQL> alter database rename global_name to <new database sid>

/

13. Create a spfile

From sqlplus:

SQL> create spfile from pfile;

14. Change the database ID

If RMAN is going to be used to back up the database, the database ID must be changed. Even if RMAN isn't going to be used, there is no harm in changing the ID, and it is good practice to do so.

From sqlplus:

SQL> shutdown immediate

SQL> startup mount

exit

From unix:

$ nid target=/

NID will ask if you want to change the ID. Respond with ‘Y’. Once it has finished, start the database up again in sqlplus:

SQL> shutdown immediate

SQL> startup mount

SQL> alter database open resetlogs

/
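To confirm the DBID actually changed, a quick verification that is not part of the original steps:

```sql
-- The DBID should now differ from the source database's value.
SELECT dbid, name FROM v$database;
```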

15. Configure TNS

Add entries for new database in the listener.ora and tnsnames.ora as necessary.

16. Finished

That’s it!

Oracle 10g Enable Read Write On Physical Standby Database.

On Standby

Enable Flashback Database.

SQL> show parameter db_recovery


NAME                        TYPE        VALUE
--------------------------- ----------- -----------------------------------
db_recovery_file_dest       string      /u01/app/oracle/flash_recovery_area
db_recovery_file_dest_size  big integer 2G

SQL> alter system set db_recovery_file_dest_size=6g;

System altered.

SQL> alter system set db_flashback_retention_target=1440;

System altered.

The retention period is 24 hours (1440 minutes).

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> select open_mode from v$database;

OPEN_MODE
———-
MOUNTED

Prepare standby for read write.

Cancel Redo apply.

SQL> alter database recover managed standby database cancel;

Database altered.

Create a restore point named before_open_standby

SQL> create restore point before_open_standby guarantee flashback database;

Restore point created.

SQL> select scn,storage_size,time, name from v$restore_point;

SCN     STORAGE_SIZE TIME                            NAME
------- ------------ ------------------------------- -------------------
486759  8192000      19-JAN-12 06.37.39.000000000 AM BEFORE_OPEN_STANDBY

SQL>

On Primary

SQL> alter system archive log current;

System altered.

Stop remote archive shipping, since we are not going to use it.

SQL> alter system set log_archive_dest_state_2=defer;

System altered.

SQL> alter system switch logfile;

System altered.

On Standby

SQL> alter database activate standby database;

Database altered.

Skip the next statement if the standby was not opened read-only since the instance was last started.

SQL> startup mount force;

Switch to maximum performance mode.

SQL> alter database set standby database to maximize performance;

Database altered.

SQL> alter database open;

Database altered.

Stop remote redo shipping if any on standby.

SQL> alter system set log_archive_dest_state_2 = defer;

System altered.

—————————————————————-

Taking the standby database back to its original state.

SQL> startup mount force;
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.

SQL> flashback database to restore point before_open_standby;

Flashback complete.

SQL> alter database convert to physical standby;

Database altered.

SQL> startup mount force;
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.

SQL> drop restore point before_open_standby;

Restore point dropped.

SQL> alter system set log_archive_dest_state_2=enable scope=both;

System altered.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

All the gaps in the archive logs should be applied automatically because of the FAL_SERVER and FAL_CLIENT parameters.

If that does not work and your standby is too far behind, take an incremental backup on the primary and apply it to the standby.
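A sketch of how that incremental approach starts, in case it is needed; the SCN query is standard, while the RMAN format string and path below are hypothetical:

```sql
-- On the STANDBY: find the SCN from which the incremental must start.
SELECT current_scn FROM v$database;

-- Then on the PRIMARY, in RMAN, back up all blocks changed since that SCN:
--   BACKUP INCREMENTAL FROM SCN <standby_scn> DATABASE
--     FORMAT '/tmp/standby_incr_%U';
-- Catalog that backup on the standby and recover the standby with it.
```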

On Primary

SQL> alter system set log_archive_dest_state_2 = enable scope=both;

System altered.

Cloning Oracle database with hot backup

1. First, get the details of the datafiles and the archivelog location present in the original database using the commands below.

SQL> Select name from v$datafile;
SQL> archive log list;

2.Get the latest SCN by using the below command.

SQL> select max(first_change#) chng
2 from v$archived_log
3 /

CHNG
———-
424485

3. Put the database into begin backup mode.

SQL> Alter database begin backup;

Database altered.

Note: make sure that once you start backup mode, no RMAN backup runs on the database until you end backup mode. Up to 10g, if begin-backup mode and an RMAN backup run at the same time there is a chance of undo segment corruption; this is a bug reported to Oracle Support, and I faced the same problem myself. Resolving it requires downtime, so be cautious.

4. Check whether the datafiles are in backup mode.

SQL> select * from v$backup;

FILE# STATUS CHANGE# TIME
———- —————— ———- ———
1 ACTIVE 424935 22-FEB-12
2 ACTIVE 424935 22-FEB-12

5. Now copy the datafiles physically from the original location to the destination.
Example:
SQL> select name from v$datafile;

NAME
——————————————————————————–
/u04/app/oracle/product/10.2.0/oradata/test/system01.dbf
/u04/app/oracle/product/10.2.0/oradata/test/undotbs01.dbf
SQL>exit
$ cd /u04/app/oracle/product/10.2.0/oradata/test/
$ ls -lrt
-rw-r----- 1 oracle staff 209723392 Feb 22 12:29 undotbs01.dbf
-rw-r----- 1 oracle staff 429924352 Feb 22 12:29 system01.dbf
$ cp *.dbf /u03/oradata/clonedb/data

Note: if you want to clone the database to a different server, refer to the end of this article for the SCP (server-to-server copy) command.

6. After copying all the datafiles to the destination location, end backup mode in the original database.

SQL> alter database end backup;

Database altered.

7. Check the status of the datafiles.

SQL> select * from v$backup;

FILE# STATUS CHANGE# TIME
———- —————— ———- ———
1 NOT ACTIVE 424935 22-FEB-12
2 NOT ACTIVE 424935 22-FEB-12

8. Archive the current logfile.

SQL> alter system archive log current;

System altered.

9. Get the details of the archivelogs that will be needed for recovery while bringing up the clone database.

SQL> select name
2 from v$archived_log
3 where first_change# >= &change_no
4 order by name
5 /

Enter value for change_no: 424485
old 3: where first_change# >= &change_no
new 3: where first_change# >= 424485 (enter the number we obtained in step 2)

NAME
——————————————————————————-
/u04/app/oracle/product/10.2.0/oradata/test/arch/1_26_775911007.dbf
/u04/app/oracle/product/10.2.0/oradata/test/arch/1_27_775911007.dbf

Copy the archivelog files listed above to the clone database's archivelog location. (These logs are needed for point-in-time recovery, since we are cloning the database from a hot backup.)

10. Create a pfile from the spfile.

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u04/app/oracle/product/10.2.0/dbs/spfileprim.ora
SQL> create pfile from spfile;

File created.

11. In the original database, generate the CREATE CONTROLFILE statement by typing the following command.

SQL> alter database backup controlfile to trace;

This will create a trace file containing the "CREATE CONTROLFILE" command to recreate the controlfile in text form.

12. Create the necessary directories on the clone database (destination database) server in your desired location.

Example :- mkdir udump adump cdump bdump arch

udump – user dump destination
bdump – background dump destination
adump – audit dump destination
cdump – core dump destination
arch – Archive log destination

13. Now go to the USER_DUMP_DEST directory on the original database server and open the latest trace file. The trace file will have a name of the form ora_NNNN.trc, with NNNN being a number. This file contains instructions as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a text editor.

14. Edit the file:

FROM: CREATE CONTROLFILE REUSE DATABASE "olddbname" RESETLOGS …
TO:   CREATE CONTROLFILE SET DATABASE "newdbname" RESETLOGS …

Change the word REUSE to SET and olddbname to newdbname. Also change the datafile location parameters to the clone database locations.

15. Now copy the pfile from the original database server to the clone database server and place it under $ORACLE_HOME/dbs. Then open the parameter file on the clone database and change the following parameters to the respective new locations.

CONTROL FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1

16. On the clone database server, export the ORACLE_SID environment variable and start the instance.

$ export ORACLE_SID=<clone database name>
$ sqlplus
Enter user-name: / as sysdba
SQL> startup nomount pfile='init.clonedb.ora';

17. Run the CREATE CONTROLFILE script to create the controlfile.

SQL> @createcontrolfile.sql

Control file created.

18. Check the status of the database.

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
——— ———-
CLONE MOUNTED

19. Also check that the clone database is pointing to its own datafiles and dump destinations (just for verification).

SQL> select name from v$datafile;

NAME
——————————————————————————–
/u03/oradata/clone/data/system01.dbf
/u03/oradata/clone/data/undotbs01.dbf
/u03/oradata/clone/data/sysaux01.dbf
/u03/oradata/clone/data/users01.dbf

SQL> show parameter dump

NAME                      TYPE        VALUE
------------------------- ----------- ------------------------------
background_core_dump      string      partial
background_dump_dest      string      /u03/oradata/clone/bdump
core_dump_dest            string      /u03/oradata/clone/cdump
max_dump_file_size        string      UNLIMITED
shadow_core_dump          string      partial
user_dump_dest            string      /u03/oradata/clone/udump

20. Now media recovery is needed because we are cloning from a hot backup. Follow the steps below.

SQL> recover database using BACKUP CONTROLFILE until cancel;

ORA-00279: change 424935 generated at 02/22/2012 12:29:27 needed for thread 1
ORA-00289: suggestion : /u03/oradata/clone/arch/1_27_775911007.dbf
ORA-00280: change 424935 for thread 1 is in sequence #27

21. Now we need to apply the necessary archive log files. (Refer to point 9 and apply all the archivelog files.)

Specify log: {=suggested | filename | AUTO | CANCEL}
/u03/oradata/clone/arch/1_27_775911007.dbf (Give the archive location and file name)
ORA-00279: change 425120 generated at 02/22/2012 12:37:58 needed for thread 1
ORA-00289: suggestion : /u03/oradata/clone/arch/1_28_775911007.dbf
ORA-00280: change 425120 for thread 1 is in sequence #28
ORA-00278: log file '/u03/oradata/clone/arch/1_27_775911007.dbf' no longer
needed for this recovery

22. Once you have applied all the necessary archive log files, enter CANCEL.

Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.

23. Now open the database with RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
——— ———-
CLONE READ WRITE

24. Get the latest SCN in the clone database.

SQL> select max(first_change#) chng
2 from v$archived_log
3 /

CHNG
———-
424488 (compare with the number we obtained in step 2)

============================================================================

Command for Server copy (SCP)

If you want to clone the database on a different server, use the SCP command to copy the files from the source to the destination.

Syntax:

$ scp /path/to/local/file your_username@remotehost:/some/remote/directory

where your_username is the clone DB server login user id, remotehost is the clone DB host name, /path/to/local/file is the file to copy, and /some/remote/directory is the destination location on the clone server.
Example :
Suppose you are on the source server (i.e. the original DB server, IP address 10.250.27.234):

$ pwd

/u04/app/oracle/product/10.2.0/oradata/test/

$ ls -lrt

-rw-r----- 1 oracle staff 209723392 Feb 22 12:29 undotbs01.dbf
-rw-r----- 1 oracle staff 429924352 Feb 22 12:29 system01.dbf

$ scp undotbs01.dbf oracle@10.251.55.123:/u03/oradata/clone/data/

Where oracle is the clone DB server login user id,
10.251.55.123 is the clone DB host,
undotbs01.dbf is the file being copied, and
/u03/oradata/clone/data is the destination directory on the clone server.