
Saturday, December 5, 2009

Oracle Database Hot Backup Practices


An open (hot) database backup is made with the database running in ARCHIVELOG mode. This kind of backup can be performed while the database is open and users are working with it, which is why it is also called an online backup. Online backups are done by placing a tablespace in backup mode, copying it somewhere else on disk or tape, and then taking the tablespace out of backup mode. After backing up the tablespaces, you can back up the control file. Upon completing this lab exercise, you will know how to take a hot backup of your database and how to restore and recover it after a simulated media failure.
IMPORTANT: Be sure you have done the Hands-On exercise before doing this lab, or you will not be able to complete it. The exercise sets up the environment by putting the database in ARCHIVELOG mode.
Creating a SPOOL session: Be sure that you start a spool session before you start executing your SQL code when working on the lab. Your SPOOL file name should be DATABASE_lab7.
L A B   S T E P S
Step 1: Taking a Hot Backup of the Database

A hot backup consists of datafiles, controlfiles, and archived redo logs. An online backup never contains the online redo logs. First you need to back up the datafiles by putting each tablespace in backup mode. To do this you will use the following command:
ALTER TABLESPACE tablespace_name BEGIN BACKUP;
You want to back up all of your available tablespaces with the exception of the temporary tablespace. When you are finished you should have issued five ALTER statements and have all five tablespaces in backup mode.
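As a sketch, assuming the five tablespaces are named SYSTEM, SYSAUX, UNDOTBS1, USERS, and EXAMPLE (hypothetical names; substitute your own), the statements would look like:

```sql
-- Put every non-temporary tablespace into backup mode (names are illustrative)
ALTER TABLESPACE system BEGIN BACKUP;
ALTER TABLESPACE sysaux BEGIN BACKUP;
ALTER TABLESPACE undotbs1 BEGIN BACKUP;
ALTER TABLESPACE users BEGIN BACKUP;
ALTER TABLESPACE example BEGIN BACKUP;
```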
To verify that each tablespace is in backup mode, query the V$BACKUP dictionary view to get the file number and status for each datafile. Your result set should look similar to the following.
FILE# STATUS
---------- ------------------
1 ACTIVE
2 ACTIVE
3 ACTIVE
4 ACTIVE
5 ACTIVE
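A listing like the one above can be produced with a simple query against the view:

```sql
-- STATUS shows ACTIVE for datafiles whose tablespace is in backup mode
SELECT file#, status FROM v$backup;
```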

Next you need to take care of backing up the actual datafiles. Since you are in a Windows environment you can simply copy/paste the datafiles from their current location in the directories under the ORADATA directory of your OFA structure to the directory named BACKUP. When finished you should have all five datafiles in the directory called BACKUP.
Now you need to take the tablespaces out of backup mode. To do this you will use an ALTER TABLESPACE statement again, only this time specifying END BACKUP after the tablespace name. When finished, verify that none of the tablespaces are still in backup mode by querying the V$BACKUP view again. Your result set output should look similar to the following.
FILE# STATUS
---------- ------------------
1 NOT ACTIVE
2 NOT ACTIVE
3 NOT ACTIVE
4 NOT ACTIVE
5 NOT ACTIVE
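A sketch of the corresponding statements, again with hypothetical tablespace names (substitute your own):

```sql
-- Take each tablespace back out of backup mode (names are illustrative)
ALTER TABLESPACE system END BACKUP;
ALTER TABLESPACE sysaux END BACKUP;
ALTER TABLESPACE undotbs1 END BACKUP;
ALTER TABLESPACE users END BACKUP;
ALTER TABLESPACE example END BACKUP;
```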

The final step in this process will be to issue an ALTER SYSTEM SWITCH LOGFILE command several times to create some archived log files for the actions you just took the database through.
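For example, three switches in a row would archive the recent redo and cycle through the log groups:

```sql
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
```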
Step 2: Backing up the Control files and Archiving the Redo logs

Now you need to get backups of the control file and another set of archived log files. To back up the control file you will need to issue an ALTER DATABASE command specifying BACKUP CONTROLFILE, followed by the complete directory path to the backup folder. Your complete command should look similar to the following (remember to use your own database instance name, file path, and a file name for the backup).
ALTER DATABASE BACKUP CONTROLFILE TO 'F:\DBM\FALLA\DB104\ORACLE\ADMIN\BACKUP\filename.bkp';
Go to your BACKUP folder in the OFA structure for your database instance and verify that the file is there.
Now issue a logfile switch command enough times to cycle through all three log groups.
Step 3: Simulating a Media Failure

Now you are going to simulate a media failure. This type of failure can happen when a hard drive crashes or becomes corrupt, or a file becomes corrupt and unusable.
a. First shut down the database. To save time you can issue a SHUTDOWN ABORT.
b. Now go to your OFA structure and rename the USERS01.dbf file to USERS01.dbf.old. If you get an error message stating that the file is in use by another application and access is denied then exit out of SQL*Plus to stop the session and try to rename the file again.
c. Next, attempt to start your database instance up again using your initdb###.ora file. You should get an error at this point so shutdown the instance again.
d. Go to the bdump directory of your OFA structure and open your alert log file. Find the place within the log listing that addresses the error caused by not being able to find the file you renamed. Copy that listing into Notepad and hold on to it for the end of the lab. Your listing should look similar to the following.
Errors in file f:\dbm\springb\db1000\oracle\admin\bdump\db1000_dbw0_5700.trc:
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: 'F:\DBM\SPRINGB\DB1000\ORACLE\
ADMIN\ORADATA\DISK1\USERS01DV001.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

Step 4: Restore and Recover the Database

Now we are going to do some magic and get the database back up and running using the backups that you created earlier.
a. Make sure that you have shut down the database instance as instructed in Step 3. Copy just the data files (the five files with the .dbf extension) from the backup directory to the correct disk folder in your OFA structure. Be sure that you get them back into the disk folders they belong in.
b. Now attempt to start your database instance back up using your initdb###.ora file. There still seems to be a problem so shutdown the instance again.
c. Now attempt to start your database instance in MOUNT mode using your initdb###.ora file. Query the V$RECOVER_FILE dictionary view to find out which file has a problem and what the error is. Your result set should look similar to the following.
FILE# ERROR
---------- -------
1
2
4
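The listing above would come from a query such as:

```sql
-- V$RECOVER_FILE lists datafiles needing media recovery and any access error
SELECT file#, error FROM v$recover_file;
```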

d. Now we need to recover the database and get back to where we want to be. Issue a RECOVER DATABASE command. When prompted for archived redo logs, press Return until Oracle cannot find any more archived redo logs to apply. At this point you should see the message "Media recovery complete".
e. Now open the database using an ALTER DATABASE OPEN command. Congratulations, you have just brought your database back from the dead.
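The mount-recover-open sequence of steps c through e can be sketched as follows (the pfile name is the placeholder used throughout this lab):

```sql
STARTUP PFILE='initdb###.ora' MOUNT
RECOVER DATABASE;       -- press Return to apply each suggested archived log
ALTER DATABASE OPEN;
```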
This concludes the Lab for Week 7. Be sure to enter a SPOOL OFF command to end your spool session and save your output file.

Monday, August 31, 2009

What happens during Oracle database hot backup

As the Oracle documentation describes, during an Oracle tablespace hot backup a script, program, or command puts a tablespace into backup mode, copies the datafiles to disk or tape, and then takes the tablespace out of backup mode. We can check backup mode in the V$BACKUP view. In a user-managed backup, the backup window starts with an ALTER TABLESPACE tbs_name BEGIN BACKUP; or ALTER DATABASE BEGIN BACKUP; command and ends with an ALTER TABLESPACE tbs_name END BACKUP; or ALTER DATABASE END BACKUP; command.

Hot backup is demonstrated inside the topic http://arjudba.blogspot.com/2008/05/user-managed-hot-backup-of-oracle.html.

Although the process itself is clear and well understood, there are many misconceptions around hot backup. They start with what is actually done during a hot backup: does the datafile remain open and writable during the backup, or are changes stored somewhere in the SGA, the redologs, the rollback/undo segments, or some combination thereof, and then written back into the datafile when the tablespace is taken out of backup mode?

Alongside the writability question there is another misconception: "During a hot backup a huge amount of redo data is generated, which dramatically slows down the database if it is in archivelog mode."

Now let's look at what actually happens during a hot backup. The hot backup steps are:

1)The corresponding tablespace is checkpointed.

2)The checkpoint SCN marker in the datafile header ceases to increment with checkpoints.

3)Full images of changed DB blocks are written to the redologs.

Whenever you issue,

ALTER TABLESPACE tbs_name BEGIN BACKUP;

command, a checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed on the header. This is how the database knows the last time the tablespace had a consistent image of its data.

But during the backup, the corresponding datafiles in the tablespace continue to allow normal read/write operations; that is, I/O activity is not frozen.

As for redo generation, each block is recorded into the redo log files in full the first time it is changed. So if a row is modified for the first time inside a data block since the hot backup started, the complete block image is recorded in the redo log files, but subsequent transactions on the block record only their change vectors, just as normal.

The above three steps are required to guarantee consistency while the file is restored and recovered. By freezing the checkpoint SCN in the file header, any subsequent recovery on that backup copy of the file knows that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, that it should look for the redolog file containing that SCN, and that it should apply recovery starting there. Note that checkpoints to datafiles in hot backup mode are not suppressed during the backup; only the incrementing of the main checkpoint SCN flag is. A "hot backup checkpoint" SCN marker in the file header continues to increment as periodic or incremental checkpoints progress normally.
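You can observe this yourself while a tablespace is in backup mode. A sketch comparing the controlfile's checkpoint SCN with the frozen datafile header SCN:

```sql
-- While a file is in hot backup mode its header SCN (V$DATAFILE_HEADER) stays
-- at the BEGIN BACKUP checkpoint, while the controlfile SCN (V$DATAFILE)
-- continues to advance with subsequent checkpoints.
SELECT d.file#,
       d.checkpoint_change# AS controlfile_scn,
       h.checkpoint_change# AS header_scn
FROM   v$datafile d, v$datafile_header h
WHERE  d.file# = h.file#;
```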

By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be available in the redologs in case they are ever used for a recovery.

Many people claim that the hot backup process generates excessive redo compared to normal operation. It actually depends on the number of blocks changed during the hot backup, because the first time a block is changed a full image of that block is recorded in the redo logs. Normally, Oracle logs an entry in the redologs for every change in the database, but it does not log the whole image of the database block. By logging full images of changed DB blocks to the redologs during hot backup, Oracle eliminates the possibility of the backup containing irresolvable split blocks. To understand this reasoning, you must first understand what a split block is.
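You can quantify the claim yourself with the cumulative 'redo size' statistic; a sketch:

```sql
-- Snapshot redo bytes generated so far; run an identical workload with the
-- tablespace in and then out of backup mode, re-query, and compare the deltas.
SELECT value AS redo_bytes FROM v$sysstat WHERE name = 'redo size';
```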

Typically, Oracle database blocks are a multiple of O/S blocks. For instance, many Windows filesystems have a default block size of 512 bytes and many Unix filesystems default to 2k, while Oracle's default block size is 8k. This means that the filesystem stores data in 512-byte chunks, while Oracle performs reads and writes in DB-block-sized chunks (8k by default), or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, it is reading in O/S-block-sized increments. If the database writer happens to be writing a DB block into the datafile at the same time that your script is reading that block's constituent O/S blocks, your backup copy of the DB block could contain some O/S blocks from before the database performed the write and some from after. This would be a split block.

By logging the full image of a changed block to the redologs, Oracle guarantees that in the event of a recovery, any split blocks in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the redologs. Upon completion of a recovery, any blocks that were copied into the backup in a split state will have been resolved through application of those full block images.


Thursday, October 16, 2008

RMAN Incremental database backup in Oracle

I will demonstrate RMAN incremental database backup in this post.
My requirement is as follows.

1)The database is running in NOARCHIVELOG mode. Since I wish to take backups online, the database needs to run in ARCHIVELOG mode.

2)I will use the flash recovery area for the backups, for better management of backup data.

3)Take a full (level 0 incremental) database backup once a week, and a cumulative level 1 incremental backup each day.

4)As a repository I will use a recovery catalog to store backup information.

5)I need to set up the RMAN configuration:
-autobackup on (in order to back up the control file and spfile whenever the database structure metadata in the control file changes and whenever a backup record is added),

-backup optimization on (in order to skip files that have already been backed up to the specified device), and

-recovery window of 7 days (to ensure that you can recover the database to any point within the last week).

6)My recovery catalog database name is neptune, the listener port is 1522, and the service name is ARJU.NEPTUNE.ARJUBD.COM.

Below is the step by step solution.

Step 01: Connect to source database as sysdba.

oracle@neptune ~$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Oct 15 23:22:04 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Step 02: See the log mode of database.
SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

Step 03: Shutdown the source database.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Step 04: Start the database in mount state.
SQL> startup mount
ORACLE instance started.

Total System Global Area 234881024 bytes
Fixed Size 2019800 bytes
Variable Size 109055528 bytes
Database Buffers 117440512 bytes
Redo Buffers 6365184 bytes
Database mounted.

Step 05: Enable archive log.
SQL> alter database archivelog;
Database altered.

Step 06: Open the database.
SQL> alter database open;
Database altered.

Step 07: Check the archived destination and flash recovery area size and location.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 83
Next log sequence to archive 85
Current log sequence 85

SQL> show parameter db_recovery_file_dest


NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /oracle/app/oracle/product/10.
2.0/db_1/flash_recovery_area
db_recovery_file_dest_size big integer 50G
If, after running archive log list, the archive destination is not shown as USE_DB_RECOVERY_FILE_DEST, set it with:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

Step 08: Connect to rman of source database.
SQL> host rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Oct 15 23:24:16 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ARJU (DBID=259530685)

Step 09: Change some configuration settings of source database.
RMAN> configure controlfile autobackup on;

using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

RMAN> configure backup optimization on;

new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
new RMAN configuration parameters are successfully stored

Step 10: Connect to the recovery catalog database.
oracle@neptune ~$ sqlplus sys/sistem@neptune:1522/ARJU.NEPTUNE.ARJUBD.COM as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Oct 15 23:35:21 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Step 11: Create recovery catalog user and tablespace and assign permission.
SQL> create user rman identified by rman;

User created.

SQL> create tablespace catalog_spc datafile '/oradata1/catalog.dbf' size 10M autoextend on maxsize unlimited;

Tablespace created.

SQL> alter user rman default tablespace catalog_spc temporary tablespace temp quota unlimited on catalog_spc;

User altered.

SQL> grant recovery_catalog_owner to rman;

Grant succeeded.

Step 12: Connect to recovery catalog database.
bash-3.00$ rman catalog rman/rman@neptune:1522/ARJU.NEPTUNE.ARJUBD.COM

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Oct 15 23:39:43 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to recovery catalog database

Step 13: Create recovery catalog schema objects.
RMAN> create catalog tablespace catalog_spc;

recovery catalog created

Step 14: Verify that the recovery catalog objects were created successfully.
SQL> select table_name from tabs;

Step 15: Connect to rman on source and recovery catalog database.
bash-3.00$ rman target / catalog rman/rman@neptune:1522/ARJU.NEPTUNE.ARJUBD.COM

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Oct 15 23:41:53 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ARJU (DBID=259530685)
connected to recovery catalog database

Step 16: Register database with recovery catalog.
RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Step 17: Check whether registration was successful.
RMAN> report schema;

If this returns the database structure, registration was successful.

Step 18: Create scripts for incremental database backups.
This script is for a full database backup (incremental level 0). This one will be run on a weekly basis.
CREATE GLOBAL SCRIPT global_full_backup
{
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}
This script is for a cumulative incremental level 1 database backup. This one will be run on a daily basis.
CREATE GLOBAL SCRIPT global_incr_backup
{
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

This one is for RMAN incrementally updated backups. I created it but will not use it here; for a huge database it may be worth considering.
CREATE GLOBAL SCRIPT global_incr_updated_backup
{
RECOVER COPY OF DATABASE WITH TAG 'inc_update';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'inc_update'
DATABASE;
}

A sample output after creating global_full_backup,
RMAN> CREATE GLOBAL SCRIPT global_full_backup
{
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
DELETE OBSOLETE;
}

starting full resync of recovery catalog
full resync complete
created global script global_full_backup

Step 19: Run these scripts and take backups.
Weekly,
RMAN>RUN{
EXECUTE GLOBAL SCRIPT global_full_backup;
}

Daily,
RMAN>RUN{
EXECUTE GLOBAL SCRIPT global_incr_backup;
}

Inside the scripts you may also consider a command like,
RMAN> backup database plus archivelog delete all input;
in order to delete archived logs, from all locations, that have already been backed up.

Or, you may issue,
RMAN> delete archivelog all backed up 1 times to disk;
in order to delete archived logs that have been backed up to disk at least once.
Related Documents
What will be the Archived Redo Log Destination in Oracle
Archive Destination Settings fails with ORA-32017 and ORA-16179
Backup Database in Noarchivelog mode

Monday, October 13, 2008

Backup to remote location fails with ORA-19504 and ORA-27054

Error Description
I mounted the remote location jupiter:/export/home/oracle on my local server as /export/home/oracle/remote and then wanted to take an RMAN backup on the mapped drive, but it fails with ORA-19504 and ORA-27054.
RMAN> backup format '/export/home/oracle/remote/%U' datafile 4;

Starting backup at 13-OCT-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=136 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00004 name=/oradata1/arju/ARJU/users01.dbf
channel ORA_DISK_1: starting piece 1 at 13-OCT-08
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 10/13/2008 04:02:40
ORA-19504: failed to create file "/export/home/oracle/remote/03jt0tav_1_1"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 2
Cause of the Problem
From Oracle 10.2 onwards, Oracle checks the options with which an NFS filesystem is mounted on the local machine. This check ensures that the database cannot be corrupted, since incorrectly mounted NFS volumes can result in data corruption.

Generally the following checks are done on Solaris and AIX systems.

1) Whether the mount table can be read, in order to check the mount options. Issue,
# cat /etc/mnttab

2) Whether the NFS mount uses the "hard" option.
3) Whether the mount options include rsize>=32768 and wsize>=32768
4) In RAC environments, where NFS disks are supported, whether the "noac" mount option is used.

If the above requirements are met and the error still appears, it is possibly an Oracle bug.
Solution of the Problem
Solution 1: Setting the mount option explicitly.
# mount -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 jupiter:/export/home/oracle /export/home/oracle/remote


The default rsize and wsize for NFS are 4096 bytes, so with rsize=32768 and wsize=32768 NFS can read and write much larger datagrams than the default.

The tcp option uses TCP as the transport, which provides reliable delivery of data to the client.

hard & intr - A program accessing a file on an NFS-mounted file system will hang when the server crashes, and the process cannot be interrupted or killed unless you also specify intr. When the NFS server is back online the program continues undisturbed from where it was.

actimeo sets the attribute cache timeout, and for Oracle files it should be 0.

After mounting with the above options you will be able to take the backup:
RMAN> backup format '/export/home/oracle/remote/%U' datafile 4;
This time it should succeed.

Solution 2: Set the Event 10298
SQL> alter system set event="10298 trace name context forever, level 32" scope=spfile;

System altered.

Now restart server and take backup.
SQL> startup force
ORACLE instance started.

Total System Global Area 234881024 bytes
Fixed Size 2019800 bytes
Variable Size 125832744 bytes
Database Buffers 100663296 bytes
Redo Buffers 6365184 bytes
Database mounted.
Database opened.

SQL> show parameter event

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
event string 10298 trace name context forev
er, level 32

RMAN> backup format '/export/home/oracle/remote/%U' datafile 4;

Starting backup at 13-OCT-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=135 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00004 name=/oradata2/arjudba/arjudba/arju/users01.dbf
channel ORA_DISK_1: starting piece 1 at 13-OCT-08
channel ORA_DISK_1: finished piece 1 at 13-OCT-08
piece handle=/export/home/oracle/remote/08jt0voi_1_1 tag=TAG20081013T044402 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-OCT-08

Solution 3: If it is an Oracle bug
If the backup still fails, it is possibly an Oracle bug in 10.2.0.1. To resolve it, apply the one-off patch 5146667 for 10.2.0.1.

Related Documents
How to take RMAN backup on a remote disk

Sunday, October 12, 2008

How to take RMAN backup on a remote disk/ location

It is common to hear that there is not sufficient space for backups on the server, or that the server disk is nearly full. So now you want to take your RMAN backup directly to a remote location. How can you achieve that?

Before taking the backup, think about which type of storage at the remote location you want to back up to: a disk, or a tape. Below is the procedure for both types.

Taking Remote RMAN backup to Tape
1)This is no different from how you normally access remote tape devices.

2)First, install and configure the drivers for the tape device on the local machine.

3)Make sure that you can access the tape by copying files to it.

4)If step 3 is successful, RMAN will be able to back up to the tape directly.

Taking Remote RMAN backup to disk
1)Remember that a direct backup to a remote disk location using RMAN is not possible the way it is on a local disk, where the FORMAT parameter simply names a local path.

2)RMAN backups to DISK can only be written to storage that is part of the target database server's filesystem, so a plain remote location will not work.

3)The way to achieve the goal is to share the remote location and then mount it on the target database server where you want to take the backup. An example of how to share a folder and access it on another machine is discussed in How to share and Access file on Solaris Machine

4)Take the RMAN backup to this mapped location using the FORMAT clause. Based on this, a remote backup is taken in http://arjudba.blogspot.com/2008/10/backup-to-remote-location-fails-with.html after sharing the remote location and mapping it to /export/home/oracle/remote/.
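Putting steps 3 and 4 together, a minimal sketch (the host name and paths come from the linked post; adjust for your environment):

```
# on the target database server: mount the shared remote directory
mount jupiter:/export/home/oracle /export/home/oracle/remote

# then point the FORMAT clause at the mounted path
rman target /
RMAN> backup format '/export/home/oracle/remote/%U' database;
```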
Related Documents
RMAN Backup format specification
Set up a Database for RMAN Backup
Create Multiple Duplex backup by one Backup Command

Monday, October 6, 2008

How to clone database on the same host with different name

In this post I will show how we can clone a database on a Windows system without using RMAN duplicate. You can clone a database with RMAN duplicate, which is discussed in Creating a Duplicate Database on a Remote Host -Part1. You can also clone a database by taking a backup of the source database and restoring it, which is discussed in Restore and Recover database to a new host.
Note that cloning in the way shown in the next part of this post will not change the DBID; it remains the same as the source database's. In order to change the DBID after the clone operation you have to use the nid tool, which is discussed in How to Change Database Name and DBID?
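For reference, a hedged sketch of the nid invocations (run against a mounted, not open, database; verify the details in the linked post):

```
REM change only the DBID (a new DBID is generated)
nid TARGET=SYS/password

REM change the database name too; add SETNAME=YES to keep the existing DBID
nid TARGET=SYS/password DBNAME=newname
```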
The step-by-step cloning operation is described below. The scenario is:
-Source database arju will be cloned as arjucl

-Datafile named E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\*.dbf will be cloned as F:\ORACLE\ARJUCL\*.dbf

-Controlfile named E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\*.ctl will be cloned as
F:\ORACLE\*.CTL

-In cloned database adump,bdump,cdump,udump will be located in F:\ORACLE\*dump

Step 01:
In source database identify the datafile location and redo logfile location.
On windows machine,
C:\Documents and Settings\Queen>set ORACLE_SID=arju

On linux based machine,
$export ORACLE_SID=arju

C:\Documents and Settings\Queen>sqlplus / as sysdba


SQL*Plus: Release 10.2.0.1.0 - Production on Mon Oct 6 23:19:45 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> col file_name format a70
SQL> set linesize 160
SQL> select file_name, file_id from dba_data_files;

FILE_NAME FILE_ID
---------------------------------------------------------------------- ----------
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\USERS01.DBF 4
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\SYSAUX01.DBF 3
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\UNDOTBS01.DBF 2
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\SYSTEM01.DBF 1
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\EXAMPLE01.DBF 5
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\TEST_TBS01.DBF 6

6 rows selected.

SQL> select member from v$logfile;

MEMBER
----------------------------------------------------------------
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\REDO02.LOG
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\REDO01.LOG

Step 02:
Create a pfile from source spfile and controfile trace from source controlfile.
SQL> create pfile='f:\pfile.ora' from spfile;

File created.

SQL> alter database backup controlfile to trace as 'f:\controlfile.ctl';


Database altered.

On a Linux-based system, use a Linux-style path when saving the controlfile trace; for example, instead of f:\controlfile.ctl use /oracle/controlfile.ctl

Step 03:
Shutdown the source database.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Step 04:
Create the required directories for the cloned database. (Ideally, do this step in advance.)
SQL> host
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Queen>mkdir f:\oracle

C:\Documents and Settings\Queen>mkdir f:\oracle\arjucl


On unix based machine adjust the location as you wish to create clone database.
Step 05:
Copy datafiles and redo log files to the location of f:\oracle\arjucl
C:\Documents and Settings\Queen>copy E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\*.dbf f:\oracle\arjucl
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\EXAMPLE01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\SYSAUX01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\SYSTEM01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\TEMP01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\TEST_TBS01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\UNDOTBS01.DBF
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\USERS01.DBF
7 file(s) copied.

C:\Documents and Settings\Queen>copy E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\*.log f:\oracle\arjucl
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\REDO01.LOG
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\REDO02.LOG
E:\ORACLE\PRODUCT\10.2.0\ORADATA\ARJU\REDO03.LOG
3 file(s) copied.

Step 06:
At this stage the work with the source database is finished. Edit the pfile f:\pfile.ora and change the control_files, db_name, and dump directory parameters to reflect the cloned database's name and locations.
On windows,
C:\Documents and Settings\Queen>notepad f:\pfile.ora
arjucl.__db_cache_size=25165824
arjucl.__java_pool_size=4194304
arjucl.__large_pool_size=4194304
arjucl.__shared_pool_size=62914560
arjucl.__streams_pool_size=0

*.audit_file_dest='F:\oracle\adump'
*.audit_trail='DB','EXTENDED'
*.background_dump_dest='F:\oracle\bdump'
*.compatible='10.2.0.1.0'
*.control_files='F:\oracle\control01.ctl',
'F:\oracle\control02.ctl','F:\oracle\control03.ctl'

*.core_dump_dest='F:\oracle\cdump'
*.db_block_size=8192
*.db_create_file_dest='C:\'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='arjucl'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='c:\test'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=arjuclXDB)'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=c:\test'
*.log_archive_dest_2='LOCATION=g:\'
*.log_archive_dest_3='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.open_cursors=300
*.pga_aggregate_target=202375168
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=104857600
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='F:\oracle\udump'

On linux based system use as, $vi pfile.ora
Step 07:
Edit the controlfile trace you created: change the database name along with the logfile and datafile locations, and use SET DATABASE database_name in the controlfile creation script.

On windows,

C:\Documents and Settings\Queen>notepad f:\CONTROLFILE.CTL

STARTUP NOMOUNT PFILE='F:\PFILE.ORA'
CREATE CONTROLFILE REUSE SET DATABASE "arjucl" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'F:\ORACLE\ARJUCL\REDO01.LOG' SIZE 50M,
GROUP 2 'F:\ORACLE\ARJUCL\REDO02.LOG' SIZE 50M,
GROUP 3 'F:\ORACLE\ARJUCL\REDO03.LOG' SIZE 50M

DATAFILE
'F:\ORACLE\ARJUCL\SYSTEM01.DBF',
'F:\ORACLE\ARJUCL\UNDOTBS01.DBF',
'F:\ORACLE\ARJUCL\SYSAUX01.DBF',
'F:\ORACLE\ARJUCL\USERS01.DBF',
'F:\ORACLE\ARJUCL\EXAMPLE01.DBF',
'F:\ORACLE\ARJUCL\TEST_TBS01.DBF'

CHARACTER SET WE8MSWIN1252
;
On a Unix-based system, open the controlfile script with any editor and make the same edits.
Step 08: Skip this step if you are on a Unix-based system. On Windows, create a new Oracle service arjucl with oradim.
C:\Documents and Settings\Queen>oradim -new -sid arjucl
Instance created.

Step 09:
Set the ORACLE_SID environment variable to arjucl and connect to the instance as SYSDBA.
C:\Documents and Settings\Queen>set ORACLE_SID=arjucl
On unix based system, export ORACLE_SID=arjucl
C:\Documents and Settings\Queen>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Oct 7 00:04:32 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.
Step 10:
Run the controlfile creation script.
On windows,
SQL> @f:\controlfile.ctl
ORACLE instance started.

Total System Global Area 104857600 bytes
Fixed Size 1247516 bytes
Variable Size 71304932 bytes
Database Buffers 25165824 bytes
Redo Buffers 7139328 bytes

Control file created.

On a Unix-based system, run the modified controlfile script from wherever you saved it.

Step 11:
Open the database with resetlogs option.
SQL> alter database open resetlogs;
Database altered.

Check the cloned database name.
SQL> show parameter db_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string arjucl

You can check the cloned database datafile location.
SQL> select file_name from dba_data_files;
FILE_NAME
--------------------------------------------------------------------------------
F:\ORACLE\ARJUCL\TEST_TBS01.DBF
F:\ORACLE\ARJUCL\EXAMPLE01.DBF
F:\ORACLE\ARJUCL\USERS01.DBF
F:\ORACLE\ARJUCL\SYSAUX01.DBF
F:\ORACLE\ARJUCL\UNDOTBS01.DBF
F:\ORACLE\ARJUCL\SYSTEM01.DBF

6 rows selected.
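As an additional sanity check, you can confirm the clone's name and open mode in a single query against V$DATABASE (columns NAME and OPEN_MODE):

```sql
-- After OPEN RESETLOGS the clone should report its new name
-- and an open mode of READ WRITE.
SELECT NAME, OPEN_MODE FROM V$DATABASE;
```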

Wednesday, August 27, 2008

How to Restore an SPFILE from an Autobackup Older than 7 Days

When we issue RMAN> restore spfile from autobackup; the search for an autobackup stops after seven days and RMAN raises the message channel ORA_DISK_1: no autobackup in 7 days found. For example,

RMAN> restore spfile from autobackup;
Starting restore at 31-AUG-08
using channel ORA_DISK_1

channel ORA_DISK_1: looking for autobackup on day: 20080831
channel ORA_DISK_1: looking for autobackup on day: 20080830
channel ORA_DISK_1: looking for autobackup on day: 20080829
channel ORA_DISK_1: looking for autobackup on day: 20080828
channel ORA_DISK_1: looking for autobackup on day: 20080827
channel ORA_DISK_1: looking for autobackup on day: 20080826
channel ORA_DISK_1: looking for autobackup on day: 20080825
channel ORA_DISK_1: no autobackup in 7 days found
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/31/2008 01:31:12
RMAN-06172: no autobackup found or specified handle is not a valid copy or piece

Whenever you issue RESTORE SPFILE FROM AUTOBACKUP or RESTORE CONTROLFILE FROM AUTOBACKUP, RMAN begins the search on the current day, or on the day specified with the SET UNTIL clause. If no autobackup is found on that day, RMAN checks the previous day. In this way RMAN by default searches at most 7 days back from the current or SET UNTIL day.

However, if you want to extend the autobackup search beyond 7 days, use the MAXDAYS option with the RESTORE command.

For example,
RMAN>restore spfile from autobackup maxdays 30;
or
RMAN>restore controlfile from autobackup maxdays 30;

In these cases autobackup searches will be performed up to 30 days from the current or SET UNTIL day.
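When restoring to a point in time, MAXDAYS combines with SET UNTIL inside a run block; a hedged sketch (the date shown is purely illustrative):

```sql
RMAN> run {
  set until time "to_date('2008-07-15 00:00:00','yyyy-mm-dd hh24:mi:ss')";
  restore controlfile from autobackup maxdays 30;
}
```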

Monday, August 11, 2008

How to Schedule or Automate Backup through Crontab

Scheduled or automated backup is needed in almost every environment. We can automate or schedule tasks in two ways: one is the DBMS_SCHEDULER package, which will be discussed in another topic, and the other is the OS scheduler. For the OS scheduler, use crontab on a Unix box and Scheduled Tasks on a Windows box.

In this topic I show how to automate a backup with the crontab tool.

To avoid an error like the one below,
Message file RMAN.msb not found
Verify that ORACLE_HOME is set properly

set the proper ORACLE_HOME and ORACLE_SID in your backup script. So I created my backup_job.sh as below.

bash-3.00$ vi backup_job.sh
#!/bin/bash
export ORACLE_HOME=/oracle/app/oracle/product/10.2.0/db_1
export ORACLE_SID=dbase
/oracle/app/oracle/product/10.2.0/db_1/bin/rman target sys/a@neptune:1522/dbase cmdfile=/oradata1/backup/backup_job2.sh


If you don't set $ORACLE_HOME and don't give the full path of rman, you will likely get an rman: not found error in the output file:
/oradata1/backup/backup_job.sh: rman: not found

That is why the script gives the full path of rman.

The actual backup commands are in backup_job2.sh; backup_job.sh passes that file to RMAN as a command file.

bash-3.00$ vi backup_job2.sh
backup database format '/oradata1/backup/%U';


If you don't make these scripts executable (here by changing their permissions to 777), you will likely get an error in the output file such as,
sh: /oradata1/backup/backup_job.sh: cannot execute
So change the permissions:

bash-3.00$ chmod 777 backup_job.sh
bash-3.00$ chmod 777 backup_job2.sh

To write contents within crontab in vi editor issue,
bash-3.00$export EDITOR=vi
bash-3.00$ crontab -e
35 06 * * * /oradata1/backup/backup_job.sh > /oradata1/backup/backupdata.log


By default, cron mails the job's output to the user account executing it. If this is not wanted, redirect stderr as well by putting 2>&1 at the end of the cron job line:

48 06 * * * /oradata1/backup/backup_job.sh > /oradata1/backup/backupdata.log 2>&1

Now I will discuss some aspects of cron jobs.

Crontab Restrictions
-You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow.

-If that file does not exist, you can use crontab if your name does not appear in the file /usr/lib/cron/cron.deny.

-If only cron.deny exists and is empty, all users can use crontab.

-If neither file exists, only the root user can use crontab.

-The allow/deny files consist of one user name per line.

Crontab Commands
Before invoking crontab, use export EDITOR=vi to specify an editor for opening the crontab file.
Now open crontab as below
-crontab -e :Edit your crontab file, or create one if it doesn't already exist.
-crontab -l :Display your crontab file.
-crontab -r :Remove your crontab file.
-crontab -v :Display the last time you edited your crontab file. (This option is only available on a few systems.)

Crontab syntax option
MINUTE HOUR DAY_OF_MONTH MONTH DAY_OF_WEEK USER COMMAND
Each field can take multiple values, separated by commas.

MINUTE accepts values from 00 to 59
HOUR accepts values from 00 to 23
DAY_OF_MONTH accepts values from 01 to 31
MONTH accepts values from 01 to 12
DAY_OF_WEEK accepts values from 0 to 6 [0 for Sunday, 1 for Monday, ...]
USER: who will execute the command (optional/not common)
COMMAND: the command to execute

Monday, June 2, 2008

How to Take Backup of Oracle Database from Remote System


In order to take an Oracle database backup from a remote system, you have to ensure the following things.

1)The database you will back up is called the target database. The target database must be in mount or open state.

2)The target database listener must be running.

3)The user who connects to the target database and takes the backup must have the SYSDBA privilege, so that user must have an entry in the password file.

In the following section I will back up the database dbase, which runs on the machine neptune, from another machine named saturn.


Step 1:
Log on to the neptune machine, find the listener service name and port number, and ensure that the listener is running. If you already know these and the listener is running, you can skip this step.

In machine neptune invoke lsnrctl status.

SQL> !lsnrctl status

LSNRCTL for Solaris: Version 10.2.0.1.0 - Production on 03-JUN-2008 02:24:26

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC2)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Solaris: Version 10.2.0.1.0 - Production
Start Date 03-JUN-2008 02:24:09
Uptime 0 days 0 hr. 0 min. 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File /oracle/app/oracle/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC2)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=neptune)(PORT=1522)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "dbase" has 1 instance(s).
Instance "dbase", status READY, has 1 handler(s) for this service...
The command completed successfully.

Here the items of interest are HOST=neptune, PORT=1522, and Service dbase.

If you got these properties, go to step 2.
If you got TNS-00511, start the listener and register the instance with it:

SQL> host lsnrctl status

LSNRCTL for Solaris: Version 10.2.0.1.0 - Production on 03-JUN-2008 02:23:54

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC2)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Solaris Error: 146: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=neptune)(PORT=1522)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Solaris Error: 146: Connection refused

SQL> host lsnrctl start


SQL> alter system register;
System altered.

Step 2:
Log on to the target database and add the user to the password file. This is done by granting the SYSDBA privilege to the user. If the user already has an entry in the password file, skip this step.

In target database I granted SYSDBA privilege to user ARJU which will take remote backup from other machine.

SQL> CONN / AS SYSDBA

Connected.
SQL> select username, sysdba from v$pwfile_users;

USERNAME SYSDB
------------------------------ -----
SYS TRUE

SQL> GRANT SYSDBA to ARJU;
SQL> select username, sysdba from v$pwfile_users;


USERNAME SYSDB
------------------------------ -----
SYS TRUE
ARJU TRUE

Step 3:

Connect from the machine you want to take the backup from and invoke RMAN. The connect format is,
rman target user_name/password_of_target_database@target_database_host_name:port_number/service_name
We already got this information by invoking lsnrctl status, so enter it.
-bash-3.00$ rman target arju/a@neptune:1522/dbase

Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jun 3 02:50:14 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: DBASE (DBID=1509380669)

Step 4:
Perform the backup operation, for example:
RMAN> backup database;

Starting backup at 03-JUN-08
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
.
.
.

Related Documentation:
Database Authentication
What is Remote Login Passwordfile

Monday, May 19, 2008

User Managed Hot Backup and Recovery of lost of data file

Example of User Managed Hot Backup:
-----------------------------------------


1)Determine which data file you need to backup.
SQL> select file_name ,tablespace_name from dba_data_files

2 /

FILE_NAME TABLESPACE_NAME
---------------------------------------- ------------------------------
/oradata2/7.dbf USERS
/oradata2/data1/dbase/users01.dbf USERS
/oradata2/data1/dbase/sysaux01.dbf SYSAUX
/oradata2/data1/dbase/undotbs01.dbf UNDOTBS1
/oradata2/data1/dbase/system01.dbf SYSTEM
/oradata2/6.dbf USERS
6 rows selected.

2)Make the tablespace in backup mode one by one and copy the associated datafile to backup location.
SQL> ALTER TABLESPACE USERS BEGIN BACKUP;


Tablespace altered.

SQL> !cp /oradata2/data1/dbase/users01.dbf /oradata2/arju

SQL> !cp /oradata2/7.dbf /oradata2/arju

SQL> ALTER TABLESPACE USERS END BACKUP;


Tablespace altered.

SQL> ALTER TABLESPACE SYSAUX BEGIN BACKUP;


Tablespace altered.

SQL> !cp /oradata2/data1/dbase/sysaux01.dbf /oradata2/arju

SQL> ALTER TABLESPACE SYSAUX END BACKUP;


Tablespace altered.

SQL> ALTER TABLESPACE UNDOTBS1 BEGIN BACKUP;

Tablespace altered.

SQL> !cp /oradata2/data1/dbase/undotbs01.dbf /oradata2/arju

SQL> ALTER TABLESPACE UNDOTBS1 END BACKUP;


Tablespace altered.

SQL> ALTER TABLESPACE SYSTEM BEGIN BACKUP;


Tablespace altered.

SQL> !cp /oradata2/data1/dbase/system01.dbf /oradata2/arju

SQL> ALTER TABLESPACE SYSTEM END BACKUP;


Tablespace altered.

SQL> ALTER TABLESPACE USERS BEGIN BACKUP;

Tablespace altered.

SQL> !cp /oradata2/6.dbf /oradata2/arju

SQL> ALTER TABLESPACE USERS END BACKUP;


Tablespace altered.

3)Take a backup of your control file also.
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/oradata2/arju/control.ctl';
Database altered.


Now I have taken the backup. I will use it for the recovery scenarios that follow. In this example I will show the procedure to follow when someone has accidentally dropped datafiles.

Suppose someone accidentally removes two files.

SQL> !rm /oradata2/data1/dbase/users01.dbf
SQL> !rm /oradata2/6.dbf


Any operation that touches those files will now raise errors like the following,

SQL> alter tablespace users offline;
alter tablespace users offline
*
ERROR at line 1:
ORA-01116: error in opening database file 4
ORA-01110: data file 4: '/oradata2/data1/dbase/users01.dbf'
ORA-27041: unable to open file
Solaris-AMD64 Error: 2: No such file or directory
Additional information: 3


User Managed Recovery of Datafile when your database is online (Archive Log Mode)
---------------------------------------------------------------------------

1)Make the affected tablespace offline.
SQL> ALTER TABLESPACE USERS OFFLINE IMMEDIATE;
Tablespace altered.

2)Copy the lost datafile from backup into the location of the datafile.

SQL> !cp /oradata2/arju/users01.dbf /oradata2/data1/dbase/
SQL> !cp /oradata2/arju/6.dbf /oradata2


3)Recover the tablespace.
SQL> RECOVER TABLESPACE USERS;
Media recovery complete.

4)Make the Tablespace online.
SQL> ALTER TABLESPACE USERS ONLINE;
Tablespace altered.
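Optionally, you can confirm the recovered file is back online by checking V$DATAFILE (STATUS shows ONLINE for a normal datafile):

```sql
SELECT FILE#, STATUS, NAME
FROM V$DATAFILE
WHERE NAME = '/oradata2/data1/dbase/users01.dbf';
```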

If a datafile is lost while the database is down, startup stops after mount with the following message.

SQL> !rm /oradata2/data1/dbase/users01.dbf


SQL> startup
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 2019288 bytes
Variable Size 100663336 bytes
Database Buffers 58720256 bytes
Redo Buffers 6369280 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/oradata2/data1/dbase/users01.dbf'

1)In order to see which files need media recovery issue,

SQL> SELECT r.FILE# AS df#, d.NAME AS df_name, t.NAME AS tbsp_name,
d.STATUS, r.ERROR, r.CHANGE#, r.TIME
FROM V$RECOVER_FILE r, V$DATAFILE d, V$TABLESPACE t
WHERE t.TS# = d.TS#
AND d.FILE# = r.FILE#
;
2 3 4 5 6

DF# DF_NAME TBSP_NA STATUS ERROR CHANGE#
---- ----------------------------------- ------- ------- ---------- ---------
4 /oradata2/data1/dbase/users01.dbf USERS ONLINE FILE NOT 0
FOUND

2)Copy the lost file from backup to destination.

SQL> !cp /oradata2/arju/users01.dbf /oradata2/data1/dbase/users01.dbf

3)Recover the file and open the database.

SQL> RECOVER DATAFILE 4;

Media recovery complete.
SQL> ALTER DATABASE OPEN;

Database altered.


Note: If the original location is permanently damaged, restore the datafile to another location and rename it in the control file with,
SQL>ALTER DATABASE RENAME FILE '/oradata2/data1/dbase/users01.dbf' TO
'/disk2/users01.dbf';


If you do not have a backup of a specific datafile but you do have every archived log since the datafile was created, you can recover it by creating an empty replacement file with ALTER DATABASE CREATE DATAFILE and then simply performing recovery of the datafile.
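A hedged sketch of that technique, reusing the datafile and file number from this example (it assumes all archived redo since the file's creation is still available):

```sql
-- Recreate an empty datafile in place of the lost one.
ALTER DATABASE CREATE DATAFILE '/oradata2/data1/dbase/users01.dbf';

-- Rebuild its contents by applying archived redo, then bring it online.
RECOVER DATAFILE 4;
ALTER TABLESPACE USERS ONLINE;
```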

Related Documents:
---------------------------

User Managed Restore Operation
What you will do if spfile lost Look at Solution of The problem Section
How to solve problem of inconsistent control file.
How to re-create Control file

Saturday, May 17, 2008

Non-Critical Files in terms of Recovery

Non-critical losses are failure events that can be resolved without significantly impacting the operation of the database if handled properly. A good DBA should be able to resolve non-critical losses quickly and keep the database operational throughout the recovery process. There may be some performance degradation during recovery, but the database should remain operational.

Non-critical files are essentially database files that do not have a critical impact on the operations of the database when they have been compromised. If recovery is performed properly, these files can be recovered or rebuilt in some cases with minimal impact to database operations.

Non-Critical Files in Oracle Database.
-------------------------------------------------

1)Temporary tablespaces: If a temporary tablespace or its tempfiles are deleted accidentally, they can be recreated easily without any database shutdown.

To create a new one, issue,
SQL>create temporary tablespace temp2 tempfile '/oradata/temp01.dbf' size 100M;

Then explicitly assign users to a temporary tablespace, like

SQL>ALTER USER ARJU TEMPORARY TABLESPACE TEMP;

To make temp2 the database's default temporary tablespace, which affects all users whether or not they were explicitly assigned one, use the statement shown in the example below.
An example:
i)create a Temporary Tablespace.

SQL> CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/oradata2/temp02.dbf' SIZE 10m;

Tablespace created.

SQL> ALTER USER ARJU TEMPORARY TABLESPACE TEMP;
User altered.

ii)Look at database default tablespace and temporary tablespace of user ARJU.

SQL> SELECT PROPERTY_VALUE,PROPERTY_NAME FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE '%TABLESPACE%';
PROPERTY_VALUE PROPERTY_NAME
------------------------------ ------------------------------
TEMP DEFAULT_TEMP_TABLESPACE
USERS DEFAULT_PERMANENT_TABLESPACE

SQL> SELECT TEMPORARY_TABLESPACE FROM DBA_USERS WHERE USERNAME='ARJU';
TEMPORARY_TABLESPACE
------------------------------
TEMP

iii)Now change the database default temporary tablespace.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
Database altered.

iv) You will see that every user's temporary tablespace now points to the new default.

SQL> SELECT TEMPORARY_TABLESPACE FROM DBA_USERS WHERE USERNAME='ARJU';
TEMPORARY_TABLESPACE
------------------------------
TEMP2

v)However, you can still point an individual user at another temporary tablespace.

SQL> ALTER USER ARJU TEMPORARY TABLESPACE TEMP;
User altered.

SQL> SELECT TEMPORARY_TABLESPACE FROM DBA_USERS WHERE USERNAME='ARJU';
TEMPORARY_TABLESPACE
------------------------------
TEMP


2)Non-current redo log files: If the status of a redo log group is not CURRENT, it can be dropped and work in the database will continue.
To find the status of each redo log group, query V$LOG.
If the status is CURRENT then you can't drop it. Drop a non-current redo log group by,
SQL> ALTER DATABASE DROP LOGFILE GROUP 4;

To add a new group with one member,
SQL> ALTER DATABASE ADD LOGFILE '/oradata2/data1/dbase/redo04.log' SIZE 5m;

To add a member in a group issue,

alter database add logfile member
'/oradata2/data1/dbase/redo01_01.log' to group 1;

To drop a member from a non-current log group, issue,

alter database drop logfile member
'/oradata2/data1/dbase/redo01_01.log';
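To see which groups are safe to drop, a query like this against V$LOG helps (a group should typically be INACTIVE and archived before you drop it):

```sql
-- Only groups whose status is not CURRENT can be dropped.
SELECT GROUP#, SEQUENCE#, STATUS, ARCHIVED FROM V$LOG ORDER BY GROUP#;
```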




3)Index tablespaces:
If a tablespace that contains only indexes, or one of its datafiles, is lost, database performance might suffer but users can continue working. To recover, recreate the tablespace and rebuild the indexes.


4)Indexes:
If an index is dropped, recover it by simply re-running its creation script.

5)Read-only tablespaces: Read-only tablespaces are by nature non-critical database files. These tablespaces are static and do not receive data modifications like normal tablespaces, which makes recovery fairly straightforward under most circumstances. No redo log information needs to be applied to recover a read-only tablespace that has stayed in read-only mode since the backup; simply restore it from backup.

Like in RMAN,
RMAN>RESTORE TABLESPACE TBS_READ_ONLY;
or from OS,
$scp source_location location_in_control_file

Thursday, May 15, 2008

Backup Database control file -User Managed

Backup of the control file is necessary whenever you modify your database structure. For example, if you add a new tablespace or add a new datafile to an existing tablespace, back up the control file so that the backup reflects the up-to-date database structure; this will help you perform database recovery in case of any problem.

If you set the RMAN configuration parameter CONFIGURE CONTROLFILE AUTOBACKUP ON, the control file is automatically backed up after each database structural change.

There are two user-managed methods of taking a control file backup.

1)Take control file Backup as a Binary File:
-----------------------------------------------

A binary backup is preferable to a trace file backup because it contains additional information such as
-the archived log history,
-offline range for read-only and offline tablespaces,
-and backup sets and copies (if you use RMAN).

But binary control file backups do not include tempfile entries.

To take control file backup as a binary file just issue,
SQL>ALTER DATABASE BACKUP CONTROLFILE TO '/oradata2/control.bak';

2)Backup control file as a Text file.
-------------------------------------------

To take backup control file as a text file simply issue,
SQL>ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/oradata2/control.bak';

If you specify neither the RESETLOGS nor NORESETLOGS option in the SQL statement, then the resulting trace file contains versions of the control file for both RESETLOGS and NORESETLOGS options.
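For example, to generate only the RESETLOGS version of the script (path illustrative):

```sql
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/oradata2/control_resetlogs.sql' RESETLOGS;
```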

Unlike a binary control file backup, a backup to trace does include tempfile entries in the output, in the form of ALTER TABLESPACE ... ADD TEMPFILE statements.

Wednesday, May 14, 2008

User Managed, Consistent Backup in Noarchivelog Mode

-You must take a consistent backup if your database is in NOARCHIVELOG mode.

-Remember that a database backup taken while the database is open, or after an instance failure or SHUTDOWN ABORT, is inconsistent.

-So in NOARCHIVELOG mode you can't take your backup while the database is open.

-In NOARCHIVELOG mode you can take your backup after the database has been shut down with the NORMAL, IMMEDIATE, or TRANSACTIONAL options. A backup taken this way is called a consistent backup.

-A consistent backup can be taken whether you are in ARCHIVELOG or NOARCHIVELOG mode.

-A consistent backup is also called a cold backup.

In the following sections I walk through an example of taking a consistent user-managed backup.

1)Determine the files that you need to take backup.

Whenever you take a user-managed consistent backup of your database, back up the data files, control files, spfile, and network files.

In order to decide which files you need to back up, issue the following query while the database is in the open state.

SQL>SELECT NAME "File Need Backup" FROM V$DATAFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE
UNION ALL
SELECT VALUE FROM V$PARAMETER WHERE NAME='spfile';

File Need Backup
---------------------------------------------------------------------------------
/oradata2/data1/dbase/system01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/users01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
/oradata2/6.dbf
/oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf
/oradata2/data_test.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/cntrldupbase.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora

11 rows selected.

2)Make a clean shutdown of the database.
That is, any one of the following,

SQL>SHUTDOWN
SQL>SHUTDOWN IMMEDIATE
SQL>SHUTDOWN TRANSACTIONAL


3)Use an operating system utility to back up all the datafiles, control files, and spfile identified in step 1. For example,

$scp /oradata2/data1/dbase/system01.dbf /backup
.
.

scp /oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora /backup


Also backup network files. Do a recursive search for *.ora starting in your Oracle home directory and under it.

4)Startup the database.

$sqlplus / as sysdba
SQL>STARTUP

Tuesday, May 13, 2008

User Managed hot backup of oracle database

User-managed backup means you take the backup without Oracle's backup tooling; for example, a backup taken with operating system utilities is a user-managed backup. The term hot backup means taking your backup while your database is in the open state.

To take full database backup follow the following steps.

1)Before proceeding, remember that you can take an online/hot backup only when your database is in ARCHIVELOG mode. If your database runs in NOARCHIVELOG mode, you must take a consistent backup, that is, one made after a clean shutdown. To determine the archival mode, issue the query,

SQL> SELECT LOG_MODE FROM V$DATABASE;

LOG_MODE
------------
ARCHIVELOG

If you see ARCHIVELOG then you can proceed further. To take a backup while in NOARCHIVELOG mode, follow the other post on my blog.

2)Determine the files that you need to back up.

Whenever you decide to take your database backup, back up the data files, online redo log files, control files, and spfile.

In order to decide which files you need to back up, issue the following query.

SQL>SELECT NAME "File Need Backup" FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE
UNION ALL
SELECT VALUE FROM V$PARAMETER WHERE NAME='spfile';


File Need Backup
--------------------------------------------------------------------------------
/oradata2/data1/dbase/system01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/users01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
/oradata2/6.dbf
/oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf
/oradata2/data_test.dbf
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo02.log
/oradata2/data1/dbase/redo01.log
/oracle/app/oracle/product/10.2.0/db_1/dbs/cntrldupbase.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora
13 rows selected.

So after running the above query I can say I need to backup 13 files.

3)Place each tablespace in backup mode, except offline and read-only tablespaces. Offline and read-only tablespaces do not need to be placed in backup mode because the database is not permitting changes to their datafiles.

You can check the status, tablespace name, and associated datafile name with the following query,

SELECT t.STATUS, t.TABLESPACE_NAME "Tablespace", f.FILE_NAME "Datafile"
FROM DBA_TABLESPACES t, DBA_DATA_FILES f
WHERE t.TABLESPACE_NAME = f.TABLESPACE_NAME
ORDER BY t.TABLESPACE_NAME;



You can easily generate a script that places the online tablespaces in backup mode with the following query.

SQL>SELECT 'ALTER TABLESPACE ' ||TABLESPACE_NAME ||' BEGIN BACKUP;' "Script" FROM DBA_TABLESPACES WHERE STATUS NOT IN ('READ ONLY','OFFLINE');

Script
-------------------------------------------------------------
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
ALTER TABLESPACE UNDOTBS1 BEGIN BACKUP;
ALTER TABLESPACE SYSAUX BEGIN BACKUP;
ALTER TABLESPACE TEMP BEGIN BACKUP;
ALTER TABLESPACE USERS BEGIN BACKUP;
ALTER TABLESPACE TEMP_T BEGIN BACKUP;

6 rows selected.
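Note that the output above includes the temporary tablespaces TEMP and TEMP_T, which cannot actually be placed in backup mode; filtering on the CONTENTS column avoids generating those lines (a sketch):

```sql
SELECT 'ALTER TABLESPACE ' || TABLESPACE_NAME || ' BEGIN BACKUP;' "Script"
FROM DBA_TABLESPACES
WHERE STATUS NOT IN ('READ ONLY','OFFLINE')
  AND CONTENTS <> 'TEMPORARY';
```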

Alternatively, you can issue

SQL>ALTER DATABASE BEGIN BACKUP;


4)Copy the datafiles to the backup location.

After placing a tablespace in backup mode, copy its associated datafiles. Here too you can generate a script to copy the datafiles to another location.

An online tablespace must be in backup mode before you copy it. You can check whether backup mode is active with the following query,

SQL>SELECT t.name AS "TB_NAME", d.file# as "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';

SQL> SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES;
Enter value for backup_location: /backup

old 1: SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES
new 1: SELECT 'host scp '|| FILE_NAME || ' /backup ' "Backup Command" FROM DBA_DATA_FILES

Backup Command
------------------------------------------------------------------------------------------
host scp /oradata2/data1/dbase/system01.dbf /backup
host scp /oradata2/data1/dbase/undotbs01.dbf /backup
host scp /oradata2/data1/dbase/sysaux01.dbf /backup
host scp /oradata2/data1/dbase/users01.dbf /backup
host scp /oradata2/data.dbf /backup
host scp /oradata2/data1/data02.dbf /backup
host scp /oradata2/6.dbf /backup
host scp /oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf /backup
host scp /oradata2/data_test.dbf /backup

9 rows selected.

You can also back up network files. Do a recursive search for *.ora starting in your Oracle home directory and under it.

In order to generate the copy script only for those tablespaces that are currently in backup mode, issue,

SQL>SELECT 'host scp '|| d.name ||' &backup_location' FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';


Run the script that you generated.

On Windows or another operating system you can use a graphical file browser or the operating system's own copy command.

5)Whenever the copy is finished, take the tablespace out of backup mode. You can do this serially (place one tablespace in backup mode, copy its datafiles, take it out of backup mode, then move to the next) or in parallel (place all tablespaces in backup mode, copy the datafiles of all the tablespaces, then take them all out of backup mode).

Here you can also make a script, like,

SQL>SELECT 'ALTER TABLESPACE ' ||t.name ||' END BACKUP;' "End Backup Script"
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';

End Backup Script
----------------------------------------------------------
ALTER TABLESPACE SYSTEM END BACKUP;

If you placed the whole database in backup mode instead, then issue

SQL>ALTER DATABASE END BACKUP;
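After END BACKUP it is worth verifying that no datafile was left in backup mode, since a forgotten ACTIVE file keeps generating extra redo:

```sql
-- Any row returned means that file's tablespace is still in backup mode.
SELECT FILE#, STATUS FROM V$BACKUP WHERE STATUS = 'ACTIVE';
```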


Related Documents:
-----------------------

Types and Methods of Taking database backup