Error Description
Whenever I try to rebuild an index online, it fails with ORA-00604: error occurred at recursive SQL level 1, along with ORA-01450: maximum key length (3215) exceeded. Below is the scenario.
SQL> create table tab1(a varchar2(3000),b varchar2(2000));
Table created.
SQL> create index tab1_I on tab1(a,b);
Index created.
SQL> alter index tab1_I rebuild online;
alter index tab1_I rebuild online
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded
Now let's create a table where the combined column length is 3000+199=3199 bytes and see what happens.
SQL> create table tab3(a varchar2(3000),b varchar2(199));
Table created.
SQL> create index tab3_I on tab3(a,b);
Index created.
Try to rebuild it online and it works.
SQL> alter index tab3_I rebuild online;
Index altered.
Now extend column b by just one byte. After that, rebuilding the index online fails.
SQL> alter table tab3 modify b varchar2(200);
Table altered.
SQL> alter index tab3_I rebuild online;
alter index tab3_I rebuild online
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded
Cause of the Problem
When creating an index, the total length of the index key cannot exceed a certain value. Primarily, this value depends on DB_BLOCK_SIZE.
If 2K block size then maximum index key length=758
If 4K block size then maximum index key length=1578
If 8K block size then maximum index key length=3218
If 16K block size then maximum index key length=6498
How is the maximum index key length measured?
Maximum index key length = total index length (the sum of the widths of all indexed columns + the number of indexed columns) + length of the key (2 bytes) + ROWID (6 bytes) + the length of the ROWID (1 byte)
The index key size is limited by the value of DB_BLOCK_SIZE, because a key value may not span multiple blocks; the limit therefore depends on the block size. In fact, any index block must be able to contain at least TWO index entries. So we can say that the maximum key length for an index will be less than half of the DB_BLOCK_SIZE. But a block also needs space for PCTFREE, INITRANS, and block overhead (block header, row directory, table directory, etc.). After accounting for these bytes, the space that can actually be used for the index key is just over one third of the DB_BLOCK_SIZE.
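To see how close an existing index key is to the limit, you can sum the indexed column lengths from the data dictionary. This is only a rough estimate; the per-column and per-key overhead bytes from the formula above still have to be added on top.
select index_name, count(*) as indexed_columns, sum(column_length) as total_column_length
from user_ind_columns
where index_name in ('TAB1_I','TAB3_I')
group by index_name;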
Solution of the Problem
The cause already suggests the possible solutions:
1)Increase your database block size. Create a tablespace with a bigger block size and rebuild the index in that tablespace.
2)If the index covers multiple columns, split it into indexes on one or two columns so that the key size does not exceed what the block size can handle.
3)Rebuild the index without the ONLINE clause. That is,
ALTER INDEX index_name REBUILD;
This is because an online rebuild creates an internal journal table (an index-organized table, IOT) to record changes made during the rebuild, and the index of this journal IOT contains additional columns; this is simply how online rebuild works. The error arises because the current value of the DB_BLOCK_SIZE initialization parameter is not large enough to create the internal journal IOT.
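For solution 1, a rough sketch might look like the following. The cache size, datafile name, and tablespace name are only illustrative; in a database with an 8K default block size, a DB_16K_CACHE_SIZE buffer cache must be set before a 16K tablespace can be created.
SQL> alter system set db_16k_cache_size=32M;
SQL> create tablespace ts_16k datafile '/oradata2/ts_16k01.dbf' size 100M blocksize 16K;
SQL> alter index tab1_I rebuild online tablespace ts_16k;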
Tuesday, July 22, 2008
Process and Runtime Limits
1)Instances per database
Maximum number of cluster database instances per database: Operating system-dependent
2)Locks
Row-level: Unlimited
Distributed Lock Manager: Operating system dependent
3)SGA size
Maximum value: Operating system-dependent;
For 32 bit OS, 2GB - 4GB.
For 64 bit OS, >4GB.
4)Advanced Queuing Processes
Maximum per instance: 10
5)Job Queue Processes
Maximum per instance: 1000
6)I/O Slave Processes
Maximum per background process (DBWR, LGWR, ....): 15
Maximum per Backup session: 15
7)Sessions
Maximum per instance: 32 KB; Restricted by the PROCESSES and SESSIONS initialization parameters.
8)Global Cache Service Processes
Maximum per instance: 10
9)Shared Servers
Maximum per instance: Unlimited. Restricted by the PROCESSES and SESSIONS initialization parameters.
10)Dispatchers
Maximum per instance: Unlimited. Restricted by the PROCESSES and SESSIONS initialization parameters.
11)Parallel Execution Slaves
Maximum per instance: Unlimited. Restricted by the PROCESSES and SESSIONS initialization parameters.
12)Backup Sessions
Maximum per instance: Unlimited. Restricted by the PROCESSES and SESSIONS initialization parameters.
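Several of the per-instance limits above are restricted by initialization parameters. The values currently in effect can be checked with a quick query, for example:
select name, value
from v$parameter
where name in ('processes','sessions','job_queue_processes');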
Related Document
Physical Database Limits in oracle
Datatype Limits in Oracle
Logical Database Limits in Oracle
Logical Database Limits in Oracle
1)CREATE MATERIALIZED VIEW definition size Limit
Maximum size: 64K Bytes
2)GROUP BY clause size Limit
Maximum length: Must fit within a single database block.
3)Indexes Limit
Maximum per table: Unlimited
Total size of indexed column: 75% of the database block size minus some overhead
4)Columns Limit
Maximum Per table: 1000
Maximum Per index (or clustered index): 32
Maximum Per bitmapped index: 30
5)Constraints
Maximum per column: Unlimited
6)Subqueries
Maximum levels of subqueries in a SQL statement in the FROM clause of the top-level query: Unlimited
Maximum levels of subqueries in a SQL statement in the WHERE clause: 255
7)Partitions
Maximum length of linear partitioning key: 4 KB - overhead
Maximum number of columns in partition key: 16 columns
Maximum number of partitions allowed per table or index: 1024K - 1
8)Rows
Maximum number per table: Unlimited
9)Stored Packages
Maximum size: Limits typically range from 2000 to 3000 lines of code.
10)Trigger Cascade Limit
Maximum value: Operating system-dependent, typically 32
11)Users and Roles
Maximum: 2,147,483,638
12)Tables
Maximum per clustered table: 32 tables
Maximum per database: Unlimited
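Most of these limits are rarely reached in practice, but the per-table column limit is worth keeping an eye on for wide schemas. For example, the following query lists the column count of each table in your own schema:
select table_name, count(*) as column_count
from user_tab_columns
group by table_name
order by column_count desc;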
Related Documents
Physical Database Limits in oracle
Datatype Limits in Oracle
Physical Database Limits in oracle
1)Data Block Size Limit
Minimum: 2048 bytes; must be a multiple of operating system physical block size
Maximum: Operating system dependent; Not more than 32 KB
2)Data Blocks Limit
Minimum in initial extent of a segment: 2 data blocks
Maximum per datafile: Platform dependent; typically power(2,22) - 1 blocks
3)Controlfiles Limit
Number of control files: 1 minimum; 2 or more (on separate devices) strongly recommended.
Size of a control file: Maximum of 20,000 * (database block size)
4)Datafiles Limit
Maximum per tablespace: 1022
Maximum per database: 65533
Also, limited by the DB_FILES initialization parameter for a particular instance.
5)Database extents Limit
Maximum per dictionary managed tablespace: 4 GB * physical block size.
Maximum per locally managed (uniform) tablespace: 2 GB * physical block size.
6)Database file size Limit
Limited by maximum operating system file size.
7)MAXEXTENTS Limit
Default value: Derived from tablespace default storage or DB_BLOCK_SIZE initialization parameter
Maximum: Unlimited
8)Redo Log Files Limit
Maximum number of logfiles: Limited by value of MAXLOGFILES parameter in the CREATE DATABASE statement.
Maximum number of logfiles per group: Unlimited
9)Redo Log File Size Limit
Minimum size: 50 KB
Maximum size: Operating system limit.
10)Tablespaces:
Maximum number per database: 65533
11)Bigfile Tablespaces
Number of blocks: power(2,32)
12)Smallfile Tablespaces:
Number of blocks: power(2,22)
13)External Tables file
Maximum size: OS dependent.
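As an example for limit 4 above, you can compare the number of datafiles currently in use against the DB_FILES setting of the instance:
select (select value from v$parameter where name='db_files') as db_files_limit,
       (select count(*) from v$datafile) as datafiles_in_use
from dual;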
Monday, June 30, 2008
LONG Datatype and its restriction in Oracle
•Before going into detail: Oracle strongly recommends not using the LONG datatype; it is retained only for backward compatibility. If you have LONG columns in your database, convert them to LOB datatypes using the TO_LOB function, which is discussed in How to Convert LOB.
•The LONG datatype stores variable-length character strings of up to 2 gigabytes - 1, that is power(2,31)-1, bytes.
The use of the LONG datatype is subject to the following restrictions.
•A table can contain only one LONG column.
•You cannot create an object type with a LONG attribute.
•LONG columns cannot appear in WHERE clauses or in integrity constraints (except that they can appear in NULL and NOT NULL constraints).
•Indexes cannot be created on LONG columns.
•LONG columns cannot be used in regular expressions.
•A stored function cannot return a LONG value.
•You can declare a variable or argument of a PL/SQL program unit using the LONG datatype. However, you cannot then call the program unit from SQL.
•LONG and LONG RAW columns can't be replicated.
•Within a single SQL statement, all LONG columns, updated tables, and locked tables must be located on the same database.
•LONG columns cannot appear in GROUP BY or ORDER BY clauses, with the UNIQUE / DISTINCT operator, or in the CONNECT BY clause of SELECT statements.
•LONG columns cannot appear in the following parts of SQL statements:
ALTER TABLE ... MOVE statement.
SELECT lists in subqueries in INSERT statements
SELECT lists of subqueries or queries combined by the UNION, INTERSECT, or MINUS set operators
SQL built-in functions, expressions, or conditions
Example:
----------------------
SQL> create table with_long (long_dt LONG);
Table created.
SQL> insert into with_long values('This is a long datatype');
1 row created.
SQL> select * from with_long;
LONG_DT
--------------------------------------------------------------------------------
This is a long datatype
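As mentioned above, the recommended path is to convert LONG columns to LOBs. A simple sketch for the table above (the target table name with_clob is only illustrative) is:
SQL> create table with_clob(clob_dt CLOB);
SQL> insert into with_clob select to_lob(long_dt) from with_long;
Alternatively, the column can be converted in place:
SQL> alter table with_long modify (long_dt CLOB);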
Monday, June 9, 2008
Feature and Restriction of Temporary Table
Feature of Temporary Table
-----------------------------------
•Indexes can be created on temporary tables. They are also temporary and the data in the index has the same session or transaction scope as the data in the underlying table.
•Unlike permanent tables, temporary tables and their indexes do not automatically allocate a segment when they are created. Instead, segments are allocated when the first INSERT (or CREATE TABLE AS SELECT) is performed. This means that if a SELECT, UPDATE, or DELETE is performed before the first INSERT, the table appears to be empty.
•DDL operations (except TRUNCATE) are allowed on an existing temporary table only if no session is currently bound to that temporary table.
•If you roll back a transaction, the data you entered is lost, although the table definition persists.
•A transaction-specific temporary table allows only one transaction at a time. If there are several autonomous transactions in a single transaction scope, each autonomous transaction can use the table only after the previous one commits.
•Because the data in a temporary table is, by definition, temporary, backup and recovery of temporary table data is not available in the event of a system failure.
•It is good to know that the temporary table itself is not temporary; only the data within it is.
Restriction of Temporary Table
-----------------------------------------------
•Temporary tables cannot be partitioned, clustered, or index organized.
•You cannot specify any foreign key constraints on temporary tables.
•Temporary tables cannot contain columns of nested table.
•You cannot specify the following clauses of the LOB_storage_clause: TABLESPACE, storage_clause, or logging_clause.
•Parallel DML and parallel queries are not supported for temporary tables. Parallel hints are ignored. Specification of the parallel_clause returns an error.
•You cannot specify the segment_attributes_clause, nested_table_col_properties, or parallel_clause.
•Distributed transactions are not supported for temporary tables.
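For reference, creating a transaction-specific global temporary table together with an index on it (both names are just examples) looks like this:
SQL> create global temporary table gtt_demo(id number, note varchar2(100)) on commit delete rows;
SQL> create index gtt_demo_I on gtt_demo(id);
Because ON COMMIT DELETE ROWS is specified, the rows (and the index entries) disappear at the end of each transaction.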
Related Documents
-----------------------------
Create Temporary Table in Oracle
Monday, May 12, 2008
Purpose and Restriction of Recover Command in Oracle
Purpose of Recover Command:
----------------------------------------
1)Performing complete recovery of one or more restored datafiles or entire database.
2)Performing incomplete recovery of a database (DBPITR) or tablespace (TSPITR).
3)Applying incremental backups to a datafile image copy (not a restored datafile) to roll it forward in time.
Restriction of Recover Command:
-------------------------------------
•You cannot arbitrarily recover individual datafiles to different points in time. You can recover the whole database to a single point in time (DBPITR, in which case you should use SET UNTIL, followed by RESTORE DATABASE and RECOVER DATABASE) or recover wholly contained tablespaces to a point in time different from the rest of the database (TSPITR, in which case you must use RECOVER TABLESPACE... UNTIL...).
•For whole-database recovery the database must be mounted but not open. For recovery of an individual datafile or tablespace the database can be either mounted or open; if it is open, the datafile or tablespace to be recovered must be offline.
•The RECOVER DATABASE command does not recover any files that are offline normal or read-only at the point in time to which the files are being recovered. However if CHECK READONLY is specified, then RMAN checks each read-only file on disk to ensure that it is already current at the desired point in time.
•You must open the database with the RESETLOGS option after incomplete recovery or recovery with a backup control file.
•RMAN can recover through RESETLOGS operations transparently if the datafiles to be recovered are from a parent incarnation. If required, the RECOVER command can also restore and apply archived logs and incremental backups from prior incarnations.
•If, during recovery of a tablespace or database, the database encounters redo for adding a datafile, RMAN automatically creates a new datafile.
•You cannot manually allocate channels and then run RECOVER with the DEVICE TYPE option.
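As an illustration of the DBPITR case mentioned above, a typical sequence with the database mounted (the time value is only an example) is:
RMAN> RUN{
2> SET UNTIL TIME "to_date('2008-05-12 10:00:00','yyyy-mm-dd hh24:mi:ss')";
3> RESTORE DATABASE;
4> RECOVER DATABASE;
5> }
RMAN> ALTER DATABASE OPEN RESETLOGS;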
Thursday, May 8, 2008
Limitations of Tablespace point-in-time Recovery
•If you have dropped a tablespace then TSPITR can't be used to recover that tablespace.
•You cannot recover a renamed tablespace to a point in time before it was renamed.
•You cannot recover tables without their associated constraints, or constraints without their associated tables.
•Tablespaces containing undo or rollback segments cannot be recovered.
•Tablespaces that contain objects owned by SYS can't be recovered.
•For a partitioned table that is spread over multiple tablespaces, a single tablespace (that is, only a partition or subset of the partitions) cannot be recovered on its own. You must recover all of the tablespaces in which the partitioned table resides.
•If a tablespace is recovered by TSPITR, statistics are not gathered for the recovered objects; you must gather statistics on them manually.
•Tablespaces containing tables with VARRAY columns, nested tables, or external files cannot be recovered with TSPITR.
•Tablespaces containing replicated master tables, snapshot logs, or snapshot tables cannot be recovered with TSPITR.
•If a datafile was added after the point to which RMAN is recovering, an empty datafile by the same name will be included in the tablespace after RMAN TSPITR.
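For completeness, an automated RMAN TSPITR of a single tablespace (the tablespace name, target time, and auxiliary destination are only illustrative) is run as:
RMAN> RECOVER TABLESPACE users
2> UNTIL TIME "to_date('2008-05-08 09:00:00','yyyy-mm-dd hh24:mi:ss')"
3> AUXILIARY DESTINATION '/oradata2/aux';
After the recovery completes, the tablespace is left offline; back it up and then bring it online with ALTER TABLESPACE users ONLINE.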
Sunday, May 4, 2008
ORA-19566: exceeded limit of 0 corrupt blocks for file
Error Description:
---------------------
While I was taking a database backup, it halted with the following errors:
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 05/04/2008 14:21:41
ORA-19566: exceeded limit of 0 corrupt blocks for file /oradata2/data1/dbase1/system01.dbf
Error Explanation:
---------------------------
Whenever the DB_BLOCK_CHECKSUM parameter is set to TYPICAL or FULL (TYPICAL is the default), the DBWn processes and the direct loader calculate a checksum and store it in the cache header of every data block when writing it to disk. DBWn always calculates checksums for the SYSTEM tablespace regardless of the DB_BLOCK_CHECKSUM setting. When the contents of a block are read, the checksum is verified if the block contains one. If the validation fails, the block is marked corrupt, and during an RMAN backup the failed block verification raises this error.
Solution:
-------------
1)Run the DBVerify utility. Have a look at Verify Physical Data Structure.
For example, $ dbv file=/oradata2/data1/dbase1/system01.dbf
Also run this against all the datafiles you suspect contain corrupt blocks.
2)When taking the backup, specify SET MAXCORRUPT within a RUN block to declare how many corruptions can be tolerated. I ran the DBVerify utility and found one corrupt block each in the SYSTEM (file 1) and SYSAUX (file 3) datafiles.
3)Now back up the database with the SET MAXCORRUPT option.
RMAN> RUN{
2> SET MAXCORRUPT FOR DATAFILE 1,3 to 1;
3> BACKUP DATABASE;
4> }
For datafiles 1 and 3, a maximum of one corrupt block each can be tolerated.
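If you prefer to let RMAN itself locate the corrupt blocks instead of dbv, you can validate the database and then query the dictionary view that the validation run populates:
RMAN> BACKUP VALIDATE DATABASE;
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;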
Sunday, April 27, 2008
Limitations of Flashback Database
Flashback Database allows you to take the entire database back to a specific point in time.
It is best used as a replacement for incomplete recovery of a complete database. The main benefit of the Oracle Flashback Database over incomplete database recovery is that the Flashback Database is much quicker and more efficient. The Flashback Database is based on flashback logs.
It has some limitations such as,
•Flashback Database can only undo changes made to a datafile by an Oracle database; it cannot be used after a media failure. That means if a datafile is dropped, the flashback feature cannot bring it back.
•If you have shrunk a datafile, Flashback Database cannot be used to undo the shrink.
•If the database control file is restored from backup or re-created, then you cannot use FLASHBACK DATABASE to return to a point in time before the restore or re-creation of a control file.
•If you flash the database back to a time when a NOLOGGING operation was in progress, block corruption in the affected objects is likely.
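For reference, the feature is used roughly as follows. Flashback logging must already have been enabled beforehand (database in ARCHIVELOG mode with a flash recovery area configured), and the flashback logs must cover the target time; the one-hour interval here is only an example.
Enable flashback logging in advance:
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;
Later, to take the whole database back:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to timestamp (systimestamp - interval '1' hour);
SQL> alter database open resetlogs;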