Saturday, December 5, 2009

Logical Database Architecture Exercises


Oftentimes, the job of a DBA involves making assessments and decisions about the database using data accumulated by testing and monitoring database activity. All sorts of things can affect the performance of a database, and most of them are not restricted to the number of people logged on and using it.
Obviously, the rate at which the database grows has a direct effect on how it responds to use and traffic. The things that cause a database to start consuming excessive amounts of storage space are not always so obvious, but many of them are logical. Oracle's dictionary-managed tablespaces carry the most administrative overhead, and being aware of some of the more common issues helps eliminate obvious problems more easily. The first part of this exercise addresses some of these problem areas.
The second part of the exercise deals with the storage makeup of the Oracle database. Oracle's architecture allows storage to be managed and configured at multiple levels. Understanding how the different parts of the storage architecture work together is an integral part of knowing how to configure a database at creation. In this part of the exercise, you will need to explain Oracle's storage structures and how they relate to and work with one another.
Now let's get started.

I N D I V I D U A L     E X E R C I S E S

HANDS-ON #1: Storage space not adequate


Your current database uses dictionary-managed tablespaces. While running various performance tuning scripts, you have discovered that one of these tablespaces seems to have run out of space long before you calculated that it would. In tracking activity in the tablespace, you discover that tables have been created, dropped, and re-created with different storage settings to an excessive degree. Give an explanation for why your tablespace might have a problem, and list two actions you can take to lessen the problem.
Place and save your answers in a Word document named week4_exercise.doc.
HANDS-ON #2: Data blocks, extents, segments and tablespaces


Understanding the various parts of data storage and allocation in Oracle can help identify problems more easily.

  • In your own words, provide an explanation of the difference between a data block, an extent, a segment, and a tablespace.
  • Give an example of how the four are related.
  • What is the relationship between a segment and an object created in a tablespace?

Thursday, September 25, 2008

ORA-01450: maximum key length (3215) exceeded

Error Description
Whenever I try to rebuild an index online, it fails with ORA-00604: error occurred at recursive SQL level 1 along with ORA-01450: maximum key length (3215) exceeded. Below is the scenario.
SQL> create table tab1(a varchar2(3000),b varchar2(2000));
Table created.

SQL> create index tab1_I on tab1(a,b);
Index created.

SQL> alter index tab1_I rebuild online;
alter index tab1_I rebuild online
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded

Let's now create a table whose combined column lengths are 3000+199=3199 bytes and see what happens.
SQL> create table tab3(a varchar2(3000),b varchar2(199));
Table created.

SQL> create index tab3_I on tab3(a,b);
Index created.

Try to rebuild it online and it works.
SQL> alter index tab3_I rebuild online;
Index altered.

Now add just one extra byte to column b. When we try to rebuild the index online, it fails.

SQL> alter table tab3 modify b varchar2(200);
Table altered.

SQL> alter index tab3_I rebuild online;
alter index tab3_I rebuild online
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded

Cause of the Problem
When creating an index, the total length of the index key cannot exceed a certain value. Primarily this value depends on DB_BLOCK_SIZE.
•If the block size is 2K, the maximum index key length is 758.
•If the block size is 4K, the maximum index key length is 1578.
•If the block size is 8K, the maximum index key length is 3218.
•If the block size is 16K, the maximum index key length is 6498.
How is the maximum index key length measured?
Maximum index key length = total index length (sum of the widths of all indexed columns + the number of indexed columns) + length of the key (2 bytes) + ROWID (6 bytes) + length of the ROWID (1 byte)

The index key size is limited by the value of DB_BLOCK_SIZE, because a key value may not span multiple blocks; the maximum key size therefore depends on the block size. In fact, any index block is required to hold at least TWO index entries.

So we can say that the maximum key length for an index will be less than half of DB_BLOCK_SIZE. But a block also needs space for PCTFREE, INITRANS, and block overhead (block header, row directory, table directory, etc.). After accounting for these bytes, the space that can actually be used for the index key is just over 1/3 of DB_BLOCK_SIZE.

Solution of the Problem
The cause already suggests the possible solutions:
1)Increase your database block size: create a tablespace with a bigger block size and create the index in that tablespace (see the sketch after this list).

2)If you have an index on multiple columns, you can split it into indexes on one or two columns so that the key size does not exceed what the block size can handle.

3)Rebuild the index without the ONLINE clause, that is:
ALTER INDEX index_name REBUILD;
An online rebuild creates an internal journal table, which is an index-organized table (IOT) whose index key contains additional columns. This is a feature of the online rebuild. The error arises because the current value of the DB_BLOCK_SIZE initialization parameter is not large enough to create this internal journal IOT.
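For solution 1, a minimal sketch is shown below. The tablespace name, cache size, and datafile size are illustrative assumptions, and it assumes OMF (db_create_file_dest) is set so no datafile name is needed:

SQL> alter system set db_16k_cache_size=20M;
SQL> create tablespace tbs_idx_16k datafile size 100M blocksize 16k;
SQL> alter index tab1_I rebuild tablespace tbs_idx_16k online;

With a 16K block size the maximum index key length limit rises to 6498 bytes, so the online rebuild's journal IOT key now fits.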

Wednesday, July 30, 2008

How to determine OS block size for windows or unix

Many times you have probably heard that the Oracle DB block size should be set as a multiple of the OS block size. But when you ask how to determine the OS block size on Windows or Unix, many people get stuck. In fact this question is OS related. In this post I will show how to get the OS block size.

1)UNIX or Linux System.
On my Solaris Machine,
SQL> !perl -e '$a=(stat ".")[11]; print $a'
8192
On my debian Linux,
$ perl -e '$a=(stat ".")[11]; print $a'
4096

On my Solaris Machine another way,
SQL> !df -g /oracle
/oracle (/dev/dsk/c1d0s3 ): 8192 block size 1024 frag size
Here /oracle is the mount point of the disk partition.

Another example,
SQL> !df -g | grep 'block size'
/ (/dev/dsk/c1d0s0 ): 8192 block size 1024 frag size
/devices (/devices ): 512 block size 512 frag size
/system/contract (ctfs ): 512 block size 512 frag size
/proc (proc ): 512 block size 512 frag size
/etc/mnttab (mnttab ): 512 block size 512 frag size
/etc/svc/volatile (swap ): 4096 block size 4096 frag size
/system/object (objfs ): 512 block size 512 frag size
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap1.so.1): 8192 block size 1024 frag size
/dev/fd (fd ): 1024 block size 1024 frag size
/tmp (swap ): 4096 block size 4096 frag size
/var/run (swap ): 4096 block size 4096 frag size
/oradata1 (/dev/dsk/c1d0s4 ): 8192 block size 1024 frag size
/oracle (/dev/dsk/c1d0s3 ): 8192 block size 1024 frag size
/oradata2 (/dev/dsk/c1d0s5 ): 8192 block size 1024 frag size
/export/home (/dev/dsk/c1d0s7 ): 8192 block size 1024 frag size

2)Windows Machine.
If you use the NTFS file system, you can use
fsutil fsinfo ntfsinfo drivename: to get block size information.

c:\>fsutil fsinfo ntfsinfo f:
NTFS Volume Serial Number : 0x1e345b64345b3e49
Version : 3.1
Number Sectors : 0x0000000004a852c0
Total Clusters : 0x0000000000950a58
Free Clusters : 0x00000000003ae433
Total Reserved : 0x0000000000000000
Bytes Per Sector : 512
Bytes Per Cluster : 4096 -- This is OS block size.
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x000000000d72a000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x00000000004a812c
Mft Zone Start : 0x0000000000800720
Mft Zone End : 0x000000000083b920

To see all possible allocation unit sizes on a Windows machine, use:
c:\> format /?
Under the /A:size option you will see all possible sizes for both FAT and NTFS file systems. Whenever you format a drive, the default allocation unit you select is the block size.

3)On HP-UX, issue a command like "vgdisplay -v /dev/vg01/lvol".


Related Documents

http://arjudba.blogspot.com/2008/06/choose-optimal-data-block-size-in.html
http://arjudba.blogspot.com/2008/06/advantages-and-disadvantages-of-using.html

Monday, July 21, 2008

Overview of Extents and when extents are allocated

An extent is made up of a number of contiguous data blocks. One or more extents make up a segment. When there is no space left in a segment, i.e. the segment is full, Oracle allocates a new extent for that segment to provide more space.

When Extents Are Allocated
When you create a table or index, Oracle automatically creates an initial extent with a specified number of data blocks (how many data blocks will be discussed later on) under the table's or index's segment.

In this case, although no rows have been inserted into the table or index yet, the extent is allocated and reserved for the table's or index's rows.

Now, if the data blocks of a segment's initial extent become full and more space is required to hold new data, then oracle automatically allocates an incremental extent for that segment.

In this way, subsequent incremental extents are allocated whenever the data blocks of a segment's existing extents become full.
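A minimal sketch of this behavior is shown below. The table name is illustrative, and on newer releases with deferred segment creation the initial extent may not appear until the first row is inserted:

SQL> create table ext_demo (n number);

Table created.

SQL> select extent_id, blocks, bytes
from user_extents
where segment_name = 'EXT_DEMO';

Even with no rows inserted, the query lists the initial extent allocated for the table's data segment.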

When Extents Are Deallocated
With the Segment Advisor, Oracle automatically advises which schema objects or tablespaces have space available for reclamation, based on the level of space fragmentation within the object.

To access the Segment Advisor from the Enterprise Manager home page, go to Related Links > Advisor Central > Segment Advisor.

You can manually reclaim space of the LB table with:

alter table "LB" enable row movement;
alter table "LB" shrink space;


In general, the extents of a segment are not returned to the tablespace until you drop the schema object using a DROP TABLE or DROP CLUSTER statement.

The exceptions are the following (a small sketch follows this list):
•A TRUNCATE...DROP STORAGE statement.
•ALTER TABLE table_name DEALLOCATE UNUSED;
•Periodically, Oracle deallocates one or more extents of a rollback segment if it has the OPTIMAL size specified.
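A minimal sketch of the first two exceptions, using the LB table from the shrink example above:

SQL> truncate table lb drop storage;

Table truncated.

SQL> alter table lb deallocate unused;

Table altered.

TRUNCATE...DROP STORAGE removes the rows and returns the freed extents to the tablespace, while DEALLOCATE UNUSED frees unused space above the high water mark without touching the rows.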

Overview of Oracle Data Blocks

•An Oracle data block is the smallest unit of storage that Oracle uses to store data.

•Oracle stores data in the datafiles in units of Oracle data blocks.

•As we know, each operating system has a block size. The Oracle data block size is not the same as the operating system block size; in fact, the Oracle block size should be a multiple of the OS block size.

•The standard Oracle block size is defined by the DB_BLOCK_SIZE initialization parameter.

•We can also define up to four non-standard block sizes (chosen from 2K, 4K, 8K, 16K, and 32K, excluding the standard size); see the quick check after this list.

•The data block format is discussed in Oracle data block format.

•How to choose a non-standard block size is discussed in Choose of a non standard Blocksize.
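A quick way to check the standard block size and the block size of each tablespace (this assumes access to DBA_TABLESPACES; USER_TABLESPACES works similarly):

SQL> show parameter db_block_size

SQL> select tablespace_name, block_size
from dba_tablespaces;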

Sunday, June 22, 2008

Playing with Oracle data block size

In this example I experiment with Oracle data block sizes of 8K and 16K. I performed DML operations against both 8K and 16K block sizes: I created two tables, table_8k under the 8K tablespace tbs_8k and table_16k under the 16K tablespace tbs_16k. The summary of the experiment is that the bigger data block took more time for the update but less time for the insert and select operations.

A)I used Oracle Managed Files (OMF), so db_create_file_dest is set.
SQL> show parameter db_create_file_dest
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string /oradata2

B)Create 8k and 16K blocksize tablespace
SQL> create tablespace tbs_8k blocksize 8k;
Tablespace created.

SQL> alter system set db_16k_cache_size=20M;
System altered.

SQL> create tablespace tbs_16k blocksize 16k;
Tablespace created.

C)Create table_8k under the 8K block size tablespace and table_16k under the 16K block size tablespace, and insert random data into them.
SQL> create table table_8k (n number ,k varchar2(15)) tablespace tbs_8K;

Table created.

SQL> begin
for i in 1 .. 100000
loop
insert into table_8k values(i,'pc-'||round(dbms_random.value(1,20000),0));
end loop;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:12.51

SQL> create table table_16K (n number ,k varchar2(15)) tablespace tbs_16k;

Table created.

SQL> begin
for i in 1 .. 100000
loop
insert into table_16k values(i,'pc-'||round(dbms_random.value(1,20000),0));
end loop;
end;
/


PL/SQL procedure successfully completed.
Elapsed: 00:00:09.42

D)I created a script named flush.sql which flushes the buffer cache and the shared pool, so that we get realistic timings for each operation.
SQL> !vi flush.sql
alter system flush buffer_cache;
alter system flush shared_pool;


SQL> @flush

System altered.
Elapsed: 00:00:00.03

System altered.
Elapsed: 00:00:00.03

SQL> select * from table_8k where k like '%888%';

176 rows selected.
Elapsed: 00:00:00.14

SQL> select * from table_16k where k like '%888%';

195 rows selected.
Elapsed: 00:00:00.04


So we see that the select is faster with the 16K block size.
E)Now test the update operation.
SQL> @flush
SQL> update table_16k set k='Testing';

100000 rows updated.

Elapsed: 00:00:06.88

SQL> @flush
System altered.
Elapsed: 00:00:00.24
System altered.
Elapsed: 00:00:00.02

SQL> update table_8k set k='Testing';
100000 rows updated.
Elapsed: 00:00:01.91

We see that the update is considerably faster in the 8K tablespace.
SQL> delete from table_8k;
100000 rows deleted.
Elapsed: 00:00:09.53

SQL> delete from table_16K;
100000 rows deleted.
Elapsed: 00:00:09.27

Related Documents:
-------------------------
General Idea of Database Block Size and BLOCKSIZE
Choose an optimal Data block size in Oracle
Advantage and Disadvantage of small and bigger data block
Data Block Format in Oracle

Saturday, June 21, 2008

Row Chaining, Migrating, PCTFREE, PCTUSED

In this post I would like to discuss several aspects of the Oracle data block. We all know that the data block is the smallest unit of storage space managed by the Oracle database. Oracle requests data in multiples of Oracle data blocks.

Row Chaining
---------------------------

There may be a scenario in which a row to be inserted is too large to fit into a single data block. Suppose the standard data block is 8K and the row size is larger than that. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG or LONG RAW. Row chaining in these cases can't be avoided.

Row Migrating
-----------------------------

The second case in which a row cannot fit into a single block is when the row is updated and the free space in its data block is already used up. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row as a pointer to the new block containing the migrated row, so the rowid of a migrated row does not change.

Whenever row chaining or row migration happens, the I/O performance associated with the row decreases, because multiple data blocks must be read to retrieve a single row.
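One way to check whether a table suffers from this is sketched below. The table name mytab is a placeholder, and ANALYZE populates CHAIN_CNT, which counts both chained and migrated rows:

SQL> analyze table mytab compute statistics;

Table analyzed.

SQL> select chain_cnt, num_rows
from user_tables
where table_name = 'MYTAB';

A high CHAIN_CNT relative to NUM_ROWS is a sign that rows are being chained or migrated.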

PCTFREE parameter
-------------------------------------

The PCTFREE parameter specifies the minimum percentage of a data block to be reserved as free space. If you use PCTFREE 20, at least 20% of each data block is reserved as free space. For example, with a data block size of 8K (DB_BLOCK_SIZE=8192), PCTFREE 20 reserves about 1638 bytes of free space in each data block.

This reserved space is used for updates to the rows that already exist within each block.

You can specify this parameter when issuing the CREATE TABLE statement.
PCTUSED Parameter
----------------------------------

This parameter specifies the minimum percentage of a block that can be used for row data plus overhead (data block header, table directory, and row directory) before new rows are added to the block.

After a data block is filled to the limit determined by PCTFREE, Oracle considers the block unavailable for the insertion of new rows until the percentage of used space in that block falls below the parameter PCTUSED.

If we set PCTUSED 40 in the CREATE TABLE statement, then a data block in this table's data segment is considered unavailable for the insertion of any new rows until the amount of used space in the block falls to 39% or less (assuming the block's used space has previously reached the PCTFREE limit).

Thus, with the help of PCTFREE and PCTUSED, Oracle optimizes the use of space in the data blocks of the extents within a data segment; for example, they determine whether or not Oracle will try to use a data block when inserting a new row.
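A minimal sketch of specifying both parameters at table creation follows. The table name and values are illustrative; note that in locally managed tablespaces with automatic segment space management, PCTUSED is ignored and only PCTFREE applies:

SQL> create table pct_demo (
id  number,
txt varchar2(100)
) pctfree 20 pctused 40;

Table created.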

Data Block Format in Oracle

You might already know that Oracle manages the storage space in the datafiles of the database in units called data blocks. You might be curious about what a data block actually holds, in other words what the data block format is.

A data block has the following format.
1)Header Information (Common and Variable Header)
2)Table Directory
3)Row Directory
4)Row Data
5)Free Space

1)Header Information (Common and Variable Header)
---------------------------------------------------------------------

• The header contains general information about the data block, such as the data block address and the type of segment (for example, data or index).
2)Table Directory
----------------------------

• This part of the data block contains information about the table(s) having rows in this block.
3)Row Directory
--------------------------------

• This part of the data block contains information about the actual rows in the block.
• Once space has been allocated in the row directory of a data block's overhead, it is not reclaimed when the row is deleted.

The data block header, table directory, and row directory are collectively known as overhead.

4)Row Data
---------------------

• This part of the data block contains the row data. A row can span multiple blocks (a small sketch after this list shows how to find which block holds a given row).

5)Free Space
-----------------------

• The free space is available for the insertion of new rows and for updates to rows that require additional space.

• The free space can also hold transaction entries. A transaction entry is required whenever an INSERT, UPDATE, DELETE, or SELECT...FOR UPDATE statement accesses one or more rows in the block.

• The space taken by a transaction entry is approximately 23 bytes.
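As a small illustration of how rows map to data blocks, the DBMS_ROWID package can show which file and block a given row is stored in. The table name mytab is a placeholder for any table you own:

SQL> select dbms_rowid.rowid_relative_fno(rowid) as file_no,
dbms_rowid.rowid_block_number(rowid) as block_no,
count(*) as rows_in_block
from mytab
group by dbms_rowid.rowid_relative_fno(rowid),
dbms_rowid.rowid_block_number(rowid);

Each distinct (file_no, block_no) pair is one data block, and rows_in_block shows how many of the table's rows it currently holds.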

Thursday, June 19, 2008

Advantages and Disadvantages of using smaller and bigger data blocks

Whether I use bigger or smaller data blocks in my database is specified by the DB_BLOCK_SIZE parameter (and, for non-standard sizes, DB_nK_CACHE_SIZE together with the BLOCKSIZE clause). With these settings I can differentiate three sizes of data blocks in Oracle.

1)Small Block(2KB-4KB)
2)Medium Block(8KB)
3)Large Block(16KB-32KB)

Advantages of Bigger Blocks
•Using bigger blocks means more data is transferred per I/O call, so data moves faster from disk to memory.

•Using bigger blocks means more space for key storage in the branch nodes of B*-tree indexes, which reduces index height, which improves the performance of indexed queries.

•When using large blocks there is less probability of chained and migrated rows, which in turn reduces the number of reads required to get the information.

Disadvantages of Bigger Blocks

•If row access is predominantly random, you increase the possibility of contention in the buffer cache: with the same amount of buffer cache memory as with small blocks, fewer of the larger buffers fit, so more memory is needed to keep the same number of blocks cached.

•If you have high transactional concurrency on a segment, using bigger blocks is just going to make the block contention even higher.


Advantages and disadvantages of these blocks
1)Small Block (2KB-4KB):
The advantage of small blocks is that they reduce block contention, and they work really well where rows are small or row access is highly random.

The disadvantage of small blocks is that they carry relatively larger overhead (a higher ratio of block overhead to usable space).

2)Medium Block (8KB): The advantage of medium blocks is that if the rows are of medium size, you can bring in a number of rows with a single I/O.

The disadvantage is that space in the buffer cache is wasted if you are doing random access to small rows with this larger block size. For example, with an 8KB block size and a 60-byte row, you are wasting roughly 8000-60=7940 bytes in the buffer cache when doing random access.

3)Large Block (16KB-32KB): Larger blocks have relatively less overhead, and you can fetch more data per I/O. This is very good for sequential access or for very large rows.

A large block size is not good for index blocks used in an OLTP (Online Transaction Processing) environment, because it increases block contention on the index leaf blocks.

Related Documents:

General Idea of Database Block Size and BLOCKSIZE
Choose an optimal Data block size in Oracle

DB_BLOCK_SIZE, DB_nK_CACHE_SIZE and BLOCKSIZE

First I want to talk about the BLOCKSIZE clause in Oracle. The BLOCKSIZE clause is used to specify the block size for a tablespace. If you don't specify this clause while creating a tablespace, the standard (default) block size is used, which is specified by the DB_BLOCK_SIZE parameter. For example, if you have DB_BLOCK_SIZE set to 8K=8192 and you issue a CREATE TABLESPACE command without any BLOCKSIZE clause, the database uses an 8K block size for that tablespace.

The DB_BLOCK_SIZE parameter specifies the size of Oracle database blocks in bytes. The default value of this parameter is 8192, and the value ranges from 2048 to 32768; it must be a multiple of the physical block size at the device level.

Now let's turn our attention to the DB_nK_CACHE_SIZE parameters. Here n is a variable and can be 2, 4, 8, 16, or 32; that is, DB_2K_CACHE_SIZE, DB_4K_CACHE_SIZE, DB_16K_CACHE_SIZE, etc. are available. The question may come up: why would we set DB_nK_CACHE_SIZE, and what advantage do we get? Before looking at the advantages (which are discussed in another topic), let me explain why we need to set this parameter: it needs to be set whenever I want to create a tablespace with a non-standard data block size.

Suppose your standard block size (the block size set by DB_BLOCK_SIZE) is 8K and you want to make a tablespace with a 16K block size. Then you first need to set DB_16K_CACHE_SIZE, and then create the tablespace with the BLOCKSIZE clause specifying 16K.

Here is an example where the database standard block size is 8K and a tablespace is created with a 16K block size.

Example:
------------------------------
SQL> show parameter db_block_size
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_size integer 8192

ALTER SYSTEM SET DB_16K_CACHE_SIZE=200M SCOPE=BOTH;
ALTER SYSTEM SET DB_CREATE_FILE_DEST='/oradata2';
CREATE TABLESPACE TEST BLOCKSIZE 16K;


Remember, you can set DB_nK_CACHE_SIZE only when DB_BLOCK_SIZE has a value other than nK. For example, if DB_BLOCK_SIZE=8192, then it is illegal to specify the parameter DB_8K_CACHE_SIZE, because the size of the 8 KB block cache is already specified by DB_CACHE_SIZE.
Related Documents

Choose an Optimal Block Size in Oracle

Choose an optimal Data block size in Oracle

Before proceeding, it is better to have an idea about DB_BLOCK_SIZE, BLOCKSIZE, and DB_nK_CACHE_SIZE. In my post DB_BLOCK_SIZE, DB_nK_CACHE_SIZE and BLOCKSIZE I tried to give an idea of these.

The default database data block size is specified by the DB_BLOCK_SIZE parameter. It is common to ask: what should the data block size of my Oracle database be? Should I increase or decrease the data block size whenever there is a performance issue in my database? It is a common scenario that with one block size a query takes 30 minutes while with another block size the same query takes 2 or 3 minutes, so choosing an optimal data block size is a very necessary task.

The general rule while choosing an optimal data block size is,

•For OLTP (Online Transaction Processing) systems, use smaller block sizes. I have seen the default 8K block size fit well; however, if you have enough memory for the SGA, you might consider a 16K block size.

•For DSS (Decision Support Systems) systems use larger block sizes.

This is because a larger block size in a heavy-duty OLTP system, with lots and lots of scattered reads all over the place, might be bad, since no one is going after the same blocks: I fetch a block into memory and no one else needs that block, so the data is cached but no one is using it.

A larger block size in a query-intensive system, that is, in a DSS system, may be very good, as I was going to read the blocks right around my data anyway, and people are going to need that data again and again.

Whatever the size of the data, the goal is to minimize the number of reads required to retrieve the desired data.

•If the rows are small and access to the rows are random , then choose a smaller block size.
•If the rows are small and access is sequential, then choose a larger block size.
•If the rows are small and access is both random and sequential, then it might be effective to choose a larger block size.
•If the rows are large, such as rows containing large object (LOB) data, then choose a larger block size.


In my next topics I demonstrate the advantages and disadvantages of setting different block sizes.