Saturday, January 31, 2009

Copy files between Unix and Windows with rcp

There are many third-party tools for copying files between Windows and UNIX machines; common ones are WinSCP and SSH Secure Shell Client. Instead of using a third-party tool, you can use the built-in UNIX and Windows tools to do the task.

RCP (Remote CoPy) is a standard UNIX tool, also shipped with Microsoft Windows 2000, that copies files and folders to and from a computer running the RCP service.

Before you can use the RCP tool from a Windows-based computer, you must prepare your UNIX machine: turn on the RCP service and update the security files to allow the Windows-based computer to connect over this service. The following steps show how.
Step 01: Turn on RCP service on UNIX Host

Turn on the RCP service on the UNIX machine by using inetd:

1. Log on as root.
2. Edit the file /etc/inetd.conf.
3. Uncomment the lines that start with shell and with exec.
4. Save the file.
5. Use ps -ef | grep inetd to determine the inetd process ID.
6. Send the hangup (HUP) signal to that process by using kill -HUP or pkill -HUP inetd; inetd then re-reads its configuration, so it does not need to be restarted.
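The inetd steps above can be sketched as shell commands. The demo below works on a throwaway copy of the file, so it is safe to run anywhere; on a real host you would edit /etc/inetd.conf itself as root, and the daemon paths shown are illustrative placeholders.

```shell
# Create a sample inetd.conf fragment with the rsh/rexec lines commented out
# (the exact daemon paths vary by system; these are placeholders).
cat > /tmp/inetd.conf.demo <<'EOF'
#shell stream tcp nowait root /usr/sbin/in.rshd in.rshd
#exec stream tcp nowait root /usr/sbin/in.rexecd in.rexecd
EOF

# Step 3: uncomment the lines that start with "shell" and "exec".
sed -i 's/^#shell/shell/; s/^#exec/exec/' /tmp/inetd.conf.demo
cat /tmp/inetd.conf.demo

# Steps 5-6: make inetd re-read its configuration (run as root on a real host):
#   pkill -HUP inetd
```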

To turn on the RCP service by using xinetd:

1. Log on as root.
2. Edit the file /etc/xinetd.d/rsh. The file name may differ between Linux distributions.
3. Change the disable line to disable = no.
4. Save the file.
5. Use ps -ef | grep xinetd to determine the xinetd process ID.
6. Send the HUP signal to that process by using kill -HUP or pkill -HUP xinetd so it re-reads its configuration.

Step 02: Set the security permissions
After turning on the RCP service you must set the security permissions that allow the Windows host to connect to your computer.

1. Edit the file /etc/hosts.equiv. Depending on your Linux distribution and TCP-wrapper setup, you may also need to edit a file such as /etc/hosts.allow.
2. In the file add a line that contains the name of your Windows host.
3. Add a second line that contains the name of your Windows host and the name of a user who can access the directory that you want to transfer. Separate the two elements with a tab character.
4. Save the file.
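The resulting /etc/hosts.equiv might look like the sketch below. The host name winhost and user name oracle are placeholders; the demo writes to /tmp so it can be run without root.

```shell
# Line 1 trusts the Windows host itself; line 2 trusts a specific user on it.
# The host name and user name must be separated by a tab character.
printf 'winhost\nwinhost\toracle\n' > /tmp/hosts.equiv.demo
cat /tmp/hosts.equiv.demo
```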

Step 03: Transfer the files or directory

From a Windows computer you can transfer files or folders with rcp using the following syntax.
RCP [-a | -b] [-h] [-r] [host][.user:]source [host][.user:] path\destination

For example, to copy the file index.html from the path /home/oracle on the computer unixhost as the user oracle, run from the Windows computer,
rcp unixhost.oracle:/home/oracle/index.html index.html

To copy a whole directory you need to use -r (recursive) with the rcp command.
To copy the test directory under /home/oracle on UNIX, issue from the Windows computer,
rcp -r unixhost.oracle:/home/oracle/test test
Related Documents
How to change/configure IP Address on Linux/ Fedora /Ubuntu
Screen -A very useful unix tool to work with remote system
Different ways to take Screenshot on Linux
How to change the hostname in Linux
Memory Usage of Solaris Operating System

h323-disconnect-cause for CISCO of Radius Server

With a Cisco router, the RADIUS server returns an h323-disconnect-cause code, such as h323-disconnect-cause=4 or h323-disconnect-cause=7. The following mini dictionary lists each disconnect cause alongside its code, which helps to identify the disconnect reason instantly when you see a code.

          
Local-Clear 0
Local-No-Accept 1
Local-Decline 2
Remote-Clear 3
Remote-Refuse 4
Remote-No-Answer 5
Remote-Caller-Abort 6
Transport-Error 7
Transport-Connect-Fail 8
Gatekeeper-Clear 9
Fail-No-User 10
Fail-No-Bandwidth 11
No-Common-Capabilities 12
FACILITY-Forward 13
Fail-Security-Check 14
Local-Busy 15
Local-Congestion 16
Remote-Busy 17
Remote-Congestion 18
Remote-Unreachable 19
Remote-No-Endpoint 20
Remote-Off-Line 21
Remote-Temporary-Error 22
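A tiny shell helper makes the table usable from scripts; this is just a sketch, with the code/name pairs taken verbatim from the table above.

```shell
# Map an h323-disconnect-cause code to its name, per the table above.
h323_cause() {
  case "$1" in
    0)  echo Local-Clear ;;            1)  echo Local-No-Accept ;;
    2)  echo Local-Decline ;;          3)  echo Remote-Clear ;;
    4)  echo Remote-Refuse ;;          5)  echo Remote-No-Answer ;;
    6)  echo Remote-Caller-Abort ;;    7)  echo Transport-Error ;;
    8)  echo Transport-Connect-Fail ;; 9)  echo Gatekeeper-Clear ;;
    10) echo Fail-No-User ;;           11) echo Fail-No-Bandwidth ;;
    12) echo No-Common-Capabilities ;; 13) echo FACILITY-Forward ;;
    14) echo Fail-Security-Check ;;    15) echo Local-Busy ;;
    16) echo Local-Congestion ;;       17) echo Remote-Busy ;;
    18) echo Remote-Congestion ;;      19) echo Remote-Unreachable ;;
    20) echo Remote-No-Endpoint ;;     21) echo Remote-Off-Line ;;
    22) echo Remote-Temporary-Error ;; *)  echo Unknown ;;
  esac
}

# e.g. for a log entry "h323-disconnect-cause=7":
h323_cause 7
```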

Friday, January 30, 2009

ORA-00600 arguments: [keltnfy-ldmInit], [46], [1]

Problem Description
In Oracle 10.2.0.1, creating a database with DBCA fails with the message below.
ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []

If you try to create the database manually, the command startup nomount also fails with the same error message.

Cause of the Problem
In the error, ldmInit indicates that the problem occurred while Oracle was getting host information during startup. The first argument, 46, is the exception LDMERR_HOST_NOT_FOUND, meaning "gethostbyname system call fails": Oracle was unable to get host information from the OS, and the bug fires.

Solution of the Problem
Step 01: Check permission on /etc/hosts
$ ls -l /etc/hosts
-rw-r--r-- 1 root root 153 Nov 24 2007 /etc/hosts

Note that all users need read permission on this file.

Step 02: Check the contents of /etc/hosts
Open /etc/hosts and check its contents.
$ less /etc/hosts
Note that the contents of this file follow the format below.
IP Address      fully qualified hostname    simple or short hostname     Alias

A simple example,
$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 database localhost.localdomain localhost

Step 03: Check the hostname and make sure you can ping your hostname.
$ hostname
database

$ ping database

PING database (127.0.0.1) 56(84) bytes of data.
64 bytes from database (127.0.0.1): icmp_seq=0 ttl=64 time=0.057 ms
64 bytes from database (127.0.0.1): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from database (127.0.0.1): icmp_seq=2 ttl=64 time=0.041 ms

If you get the following message,
$ ping database
ping: unknown host database

then you will probably hit the above bug and need to modify the /etc/hosts file. In the alias section you can put your machine name. If your machine name is "database", the /etc/hosts entry can be,
127.0.0.1               database localhost.localdomain localhost

And then ping database again. Make sure you are able to ping your host.

Step 04: Diagnose DNS problems if you have DNS set up
If you have a DNS setup, ping is not the tool to diagnose DNS problems. Better tools are nslookup, dnsquery, or dig.

$nslookup www.google.com
The forward and reverse lookup should succeed.

Step 05: Check nsswitch.conf
$ cat /etc/nsswitch.conf
hosts: files dns

Make sure host lookup is also done through the /etc/hosts file and not just dns. The keyword files should come before dns.

Step 06: Check resolv.conf
$ cat /etc/resolv.conf
nameserver 4.2.2.2

Make sure your nameserver is listed there with its address.
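The checks from the steps above can be bundled into a quick sanity script. This is a sketch; it assumes getent is available, as it is on most Linux systems.

```shell
# Check that /etc/hosts is world-readable and the local hostname resolves.
HOSTS=/etc/hosts
H=$(hostname)

if [ -r "$HOSTS" ]; then echo "readable: yes"; else echo "readable: NO"; fi

# getent consults the sources listed in nsswitch.conf (files, dns, ...),
# so it exercises the same lookup path Oracle's gethostbyname call uses.
if getent hosts "$H" > /dev/null 2>&1; then
  echo "resolves: yes"
else
  echo "resolves: NO - add $H to $HOSTS"
fi
```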


Thursday, January 29, 2009

Capitalize every word using JavaScript

Save the text below in a file with an .html extension, then open it with any browser where JavaScript is enabled.


<html>
<head>
<script type="text/javascript">
function ConvertToLetterCase(frmObj) {
  var i;
  var tmpStr;
  var tmpChar;
  var preString;
  var postString;
  var strLen;
  tmpStr = frmObj.value.toLowerCase();
  strLen = tmpStr.length;
  if (strLen > 0) {
    for (i = 0; i < strLen; i++) {
      if (i == 0) {
        // Capitalize the very first character.
        tmpChar = tmpStr.substring(0, 1).toUpperCase();
        postString = tmpStr.substring(1, strLen);
        tmpStr = tmpChar + postString;
      } else {
        tmpChar = tmpStr.substring(i, i + 1);
        // A space means the next character starts a new word.
        if (tmpChar == " " && i < (strLen - 1)) {
          tmpChar = tmpStr.substring(i + 1, i + 2).toUpperCase();
          preString = tmpStr.substring(0, i + 1);
          postString = tmpStr.substring(i + 2, strLen);
          tmpStr = preString + tmpChar + postString;
        }
      }
    }
  }
  frmObj.value = tmpStr;
}
</script>
</head>
<body>
<center>
Convert First Letter In Each Word To Capitalize Using Javascript<br/>
<form name="caseConvert">
<textarea name="txt_content" cols="40" rows="5">any LeTTer that you WILL wRiTe heRe will BE CapitaLiZed.
</textarea><br/><br/>
<input type="button" value="Initialize Capitalized" onClick="ConvertToLetterCase(this.form.txt_content);">
</form>
</center>
</body>
</html>

ORA-31655: no data or metadata objects selected for job

Problem Description
You are performing a Data Pump export in order to export objects filtered via the EXCLUDE or INCLUDE parameters of expdp.
In this example you want the tables 'ACCOUNT_GROUP', 'ADDRESS', 'AREA_GROUP' and all tables whose names start with CV (like 'CV', 'CV_EXPERIENCE', 'CV_EDUCATION').

Your parameter file is like below.
userid=smilebd/a
directory=d
dumpfile=b.dmp
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"
include =TABLE:"LIKE 'CV%' "

And you invoke expdp from the command line as
expdp parfile=d:\parfile.txt
But it fails with the message below on a Windows PC.
C:\>expdp parfile=d:\parfile.txt

Export: Release 10.1.0.4.2 - Production on Thursday, 29 January, 2009 14:53

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Starting "SMILEBD"."SYS_EXPORT_SCHEMA_01": parfile=d:\parfile.txt
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
ORA-31655: no data or metadata objects selected for job
Job "SMILEBD"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 14:53:50

You expected 'ACCOUNT_GROUP', 'ADDRESS', 'AREA_GROUP', 'CV', 'CV_EXPERIENCE' and 'CV_EDUCATION' to be exported, but not a single table was exported, and the error ORA-31655: no data or metadata objects selected for job was returned.

Cause of the Problem
This problem happens because multiple INCLUDE options were set in expdp. Multiple INCLUDE parameters can be given to expdp, but I recommend against it, because when multiple filters (EXCLUDE/INCLUDE) are specified, an implicit AND operation is applied to them.

In this example data pump interprets
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"  
include =TABLE:"LIKE 'CV%' "

as,
TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')" AND TABLE:"LIKE 'CV%'"

Because an AND operation is applied, every table is filtered out (no table both starts with CV and is among 'ACCOUNT_GROUP', 'ADDRESS', 'AREA_GROUP'), so no tables are exported.

Let me show a clearer example.

Your parameter file is like below.
userid=smilebd/a
directory=d
dumpfile=b.dmp
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"
include =TABLE:"LIKE 'ACC%' "

And you invoke datapump
C:\>expdp parfile=d:\parfile.txt

Export: Release 10.1.0.4.2 - Production on Thursday, 29 January, 2009 14:54

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Starting "SMILEBD"."SYS_EXPORT_SCHEMA_01": parfile=d:\parfile.txt
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SMILEBD"."ACCOUNT_GROUP" 11.75 KB 132 rows
Master table "SMILEBD"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SMILEBD.SYS_EXPORT_SCHEMA_01 is:
G:\B.DMP
Job "SMILEBD"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:54:38

Note that here only one table is exported.
The first INCLUDE parameter,
include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP')"

selects three tables, and the second INCLUDE parameter,
include =TABLE:"LIKE 'ACC%' "
is ANDed with it, so only one table is selected: ACCOUNT_GROUP (the only one that starts with ACC).

Solution of the Problem
It is recommended not to use multiple INCLUDE or EXCLUDE parameters in Data Pump jobs; see http://arjudba.blogspot.com/2008/04/datapump-parameters-include-and-exclude.html for details. In the scenario above, use only one INCLUDE parameter and list all the tables within the IN clause, like below,

include =TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP','CV', 'CV_EXPERIENCE', 'CV_EDUCATION')"

There is no valid reason to use multiple INCLUDE or EXCLUDE parameters in a Data Pump operation; a single INCLUDE or EXCLUDE parameter can do the job.
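The fix can be sketched end to end as below. The parameter file is written to /tmp here, and the user, directory, dump file and table names are this article's example values.

```shell
# Build a parameter file with a single INCLUDE listing every wanted table.
cat > /tmp/parfile.txt <<'EOF'
userid=smilebd/a
directory=d
dumpfile=b.dmp
include=TABLE:"IN('ACCOUNT_GROUP','ADDRESS','AREA_GROUP','CV','CV_EXPERIENCE','CV_EDUCATION')"
EOF

# Exactly one INCLUDE line, so no implicit AND can filter everything out:
grep -c '^include' /tmp/parfile.txt

# Then run:  expdp parfile=/tmp/parfile.txt
```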

Monday, January 26, 2009

Listener hangs, child listener process remains persistent

Problem Description
Some days ago we hit a listener problem on our database server: the TNS listener hung. Below are the symptoms.

•The lsnrctl status, lsnrctl stop and lsnrctl reload commands do not respond; they hang after displaying the message Connecting to ..... .

•No one from outside can connect to the database.

•Local connections that bypass the listener work fine.

•The listener process takes more CPU than normal.

•The listener process forks. Fork is a UNIX term: a process creates a copy of itself; the copy is called the child process and the original the parent process. Under load the listener creates a child listener process, and here it remains persistent. Whenever we run ps -ef, two tnslsnr processes are shown, as below.

$ ps -ef | grep tnslsnr
oracle 3102 1 0 Jan 01 ? 12:28 /var/opt/oracle/bin/tnslsnr LISTENER -inherit
oracle 5012 3102 0 Jan 25 ? 10:15 /var/opt/oracle/bin/tnslsnr LISTENER -inherit


In the output, the first line is the parent listener process and the second is the child listener process; the child's parent PID is 3102.

Just killing the child process allows new connections to work until the problem reoccurs. So after seeing the above, if the listener hangs, kill the child process (and the parent too, if you intend to restart the listener):
$kill -9 5012 3102
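Spotting the persistent child can be automated: a tnslsnr process whose parent PID is itself a tnslsnr PID is the child. The sketch below runs against the article's ps output (stored in a variable for demonstration); on a live host you would feed it `ps -ef | grep tnslsnr` instead.

```shell
# The article's ps output (PID is field 2, PPID is field 3).
ps_out='oracle 3102 1 0 Jan01 ? 12:28 /var/opt/oracle/bin/tnslsnr LISTENER -inherit
oracle 5012 3102 0 Jan25 ? 10:15 /var/opt/oracle/bin/tnslsnr LISTENER -inherit'

# A process whose PPID is another tnslsnr PID is the child listener.
child=$(echo "$ps_out" | awk '{ pid[$2] = 1; ppid[$2] = $3 }
  END { for (p in pid) if (ppid[p] in pid) print p }')
echo "child listener pid: $child"

# To clear the hang:  kill -9 $child   (and the parent too, if restarting)
```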

Cause of the Problem
This problem exists in Oracle 10.1.0.3, 10.1.0.4, 10.1.0.4.2, 10.1.0.5, 10.2.0.1 and 10.2.0.2. The listener hangs when the child listener process is not closed, i.e. it persists after being created. Note that child listener processes are not unusual in themselves, depending on traffic and on when the OS ps snapshot is taken; however, a secondary process that persists longer than about 5 seconds is not normal and may be a result of the problem referenced here.

This listener hanging event can happen on a standalone server or on a RAC server.

Solution of the Problem
1)The issue is fixed in patchsets 10.2.0.3 and 10.2.0.4, so apply a patchset.

2)Apply Patch 4518443, which is available on Metalink. Download it from Metalink and apply it on your database server.

3)As a workaround, follow the two steps below if you don't want to apply a patch now.

Step 01: Add the following entry in your listener.ora file.
SUBSCRIBE_FOR_NODE_DOWN_EVENT_{listener_name}=OFF

Where {listener_name} should be replaced with the actual listener name configured in the LISTENER.ORA file.

Suppose you have the default listener name, LISTENER. Then in the listener.ora file (by default located in $ORACLE_HOME/network/admin on UNIX) add the following entry on a new line,

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF


Step 02: Go to the directory $ORACLE_HOME/opmn/conf, find ons.config and move it to another location. Like,

cd $ORACLE_HOME/opmn/conf
mv ons.config ons.config.bak


After completing above two steps bounce the listener.

lsnrctl stop
lsnrctl start


Alternatively, if database availability is important, you can simply issue,
$lsnrctl reload
Note that adding SUBSCRIBE_FOR_NODE_DOWN_EVENT_{listener_name} to the listener.ora file on RAC and disabling the ONS file means that FAN (Fast Application Notification) will not be possible. So if you have a RAC configuration, apply the patch and do not disable ONS or FAN.

In 10g listener log WARNING: Subscription for node down event still pending

Symptoms of the Problem
In the listener log file you constantly get the following warning message.
WARNING: Subscription for node down event still pending

You may get this warning message in the listener log on Oracle Database 10g and newer versions, including 11g.

Cause of the Problem
The warning messages are related to the Oracle TNS listener's default subscription to the Oracle Notification Service (ONS). This subscription was introduced in Oracle 10g for RAC environments. The listener's subscription to ONS is used by advanced RAC features such as Fast Application Notification (FAN) events and Fast Connection Failover (FCF). So in a non-RAC environment the subscription to ONS is not needed, and on a standalone system we can disable it and thus avoid the warning message.

Solution of the Problem
Disable subscription for listener to ONS. This can be done by setting the following parameter in the listener.ora.

SUBSCRIBE_FOR_NODE_DOWN_EVENT_{listener_name}=OFF

Where {listener_name} should be replaced with the actual listener name configured in the LISTENER.ORA file.

Suppose you have the default listener name, LISTENER. Then in the listener.ora file (by default located in $ORACLE_HOME/network/admin on UNIX) add the following entry on a new line,

SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF

After that you need to stop and start the listener by,
lsnrctl stop
lsnrctl start
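As a sketch, the whole change for the default listener name LISTENER looks like this. The demo writes to a file in /tmp; on a real host the target is $ORACLE_HOME/network/admin/listener.ora.

```shell
LSNR=LISTENER
CONF=/tmp/listener.ora.demo   # real file: $ORACLE_HOME/network/admin/listener.ora

# Add the subscription-off entry on a new line.
echo "SUBSCRIBE_FOR_NODE_DOWN_EVENT_${LSNR}=OFF" >> "$CONF"
tail -1 "$CONF"

# Then bounce (or reload) the listener:
#   lsnrctl stop && lsnrctl start
```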


Alternatively, if availability is important to you, you can reload the listener. Do it just by,
lsnrctl reload

This will prevent the messages from being written to the log file.

This change also prevents the TNS listener from hanging intermittently, which is discussed in another topic.

Note that setting the above parameter to OFF in listener.ora disables necessary RAC functionality.

Sunday, January 25, 2009

ORA-12557: TNS:protocol adapter not loadable

Problem Description
On my machine I had an Oracle 10g home, and using the 10g sqlplus I could connect to an Oracle Database 10g instance. Then I installed a new Oracle 11g home, and using the 11g sqlplus I could not connect to the Oracle Database 10g instance. Below is an example.

With the 10.2 sqlplus I can connect to the 10g database.
C:\>e:\oracle\product\10.2.0\db_1\bin\BIN\sqlplus.exe maximsg/a@192.168.100.160/tiger

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jan 26 01:54:10 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

With the 11g sqlplus I could not connect to the Oracle 10g database. It returns the error message ORA-12557: TNS:protocol adapter not loadable.
C:\>d:\app\oracle\BIN\sqlplus.exe maximsg/a@192.168.100.160/tiger

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Jan 26 01:55:00 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12557: TNS:protocol adapter not loadable

Cause of the Problem
The problem happens because two ORACLE_HOMEs are installed on the system. Since 11g was installed after 10g, typing plain sqlplus picks up the new 11g binaries by default and raises ORA-12557, while working through the old 10g home is fine.

So a plain sqlplus does not work, but the sqlplus from the 10g home location (e:\oracle\product\10.2.0\db_1\bin\BIN\sqlplus.exe) works.

C:\>sqlplus.exe maximsg/a@192.168.100.160/tiger

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Jan 26 22:47:08 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ERROR:
ORA-12557: TNS:protocol adapter not loadable

Connecting to an old Oracle database using the new binaries is not supported by Oracle, and an error is returned.
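The shadowing itself is just PATH ordering: the first matching executable on the PATH wins, on Windows and UNIX alike. A sketch with two stub sqlplus executables (the paths and version strings are made up for the demo):

```shell
# Two fake Oracle homes, each with its own "sqlplus".
mkdir -p /tmp/home11g/bin /tmp/home10g/bin
printf '#!/bin/sh\necho 11g\n' > /tmp/home11g/bin/sqlplus
printf '#!/bin/sh\necho 10g\n' > /tmp/home10g/bin/sqlplus
chmod +x /tmp/home11g/bin/sqlplus /tmp/home10g/bin/sqlplus

# Whichever home comes first on PATH supplies the sqlplus that runs.
PATH="/tmp/home11g/bin:/tmp/home10g/bin:$PATH" sqlplus   # prints 11g
PATH="/tmp/home10g/bin:/tmp/home11g/bin:$PATH" sqlplus   # prints 10g
```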

Solution of the Problem
Setting ORACLE_HOME alone is not sufficient in a Windows environment, because the location is also taken from the Windows registry. So either uninstall the newer Oracle home or explicitly point to the old Oracle binaries to solve the problem.

Here, pointing to the old home,
C:\>e:\oracle\product\10.2.0\db_1\bin\BIN\sqlplus.exe maximsg/a@192.168.100.160/tiger

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jan 26 01:54:10 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Alternatively you can follow below.

Step 01: Remove registry entries of new ORACLE_HOME.
To do this,
i)Type regedit in the Run dialog.
ii)Press Enter and expand HKEY_LOCAL_MACHINE.
iii)Expand SOFTWARE and then the ORACLE key. There you will see two Oracle homes. Right-click the one you want to delete, select Delete, and click Yes if prompted.

Step 02: Remove any environment variables.
i)Right-click the My Computer icon, then select Properties.
ii)The System Properties window will appear; click the Advanced tab.
iii)Select Environment Variables.
iv)Find the PATH system variable and ORACLE_HOME. Edit them so they point to your desired sql*plus.
Usually the PATH system variable contains both ORACLE_HOME paths; just remove one of them. And if you have an ORACLE_HOME variable set, delete that key first.

Toad displays error 'IN' is not a valid integer value

Problem Description
You are using a TOAD version older than 8.6.1 and connecting to an Oracle Database 10g Release 2 (10.2.0.2 or higher); then, while viewing the source of a stored procedure, TOAD displays the error message
TOAD ERROR:
'IN' is not a valid integer value

in a pop-up window.

The problem also occurs if you connect to an Oracle Database 11g with a TOAD version older than 8.6.1.

Cause of the Problem
The problem happens because Oracle made a change to the ALL_ARGUMENTS view in release 10.2.0.2, and this 'broke' TOAD. Up to 10.2.0.1 there is no problem, but starting with 10.2.0.2 Oracle added a new column named SUBPROGRAM_ID to the ALL_ARGUMENTS view, and this makes TOAD unusable.

Have a look at the ALL_ARGUMENTS view in the two releases.
In 10.2.0.1,

SQL> desc all_arguments;
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME VARCHAR2(30)
PACKAGE_NAME VARCHAR2(30)
OBJECT_ID NOT NULL NUMBER
OVERLOAD VARCHAR2(40)
ARGUMENT_NAME VARCHAR2(30)
POSITION NOT NULL NUMBER
SEQUENCE NOT NULL NUMBER
DATA_LEVEL NOT NULL NUMBER
DATA_TYPE VARCHAR2(30)
DEFAULT_VALUE LONG
DEFAULT_LENGTH NUMBER
IN_OUT VARCHAR2(9)
DATA_LENGTH NUMBER
DATA_PRECISION NUMBER
DATA_SCALE NUMBER
RADIX NUMBER
CHARACTER_SET_NAME VARCHAR2(44)
TYPE_OWNER VARCHAR2(30)
TYPE_NAME VARCHAR2(30)
TYPE_SUBNAME VARCHAR2(30)
TYPE_LINK VARCHAR2(128)
PLS_TYPE VARCHAR2(30)
CHAR_LENGTH NUMBER
CHAR_USED VARCHAR2(1)

In 11g,

SQL> desc all_arguments
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME VARCHAR2(30)
PACKAGE_NAME VARCHAR2(30)
OBJECT_ID NOT NULL NUMBER
OVERLOAD VARCHAR2(40)
SUBPROGRAM_ID NUMBER
ARGUMENT_NAME VARCHAR2(30)
POSITION NOT NULL NUMBER
SEQUENCE NOT NULL NUMBER
DATA_LEVEL NOT NULL NUMBER
DATA_TYPE VARCHAR2(30)
DEFAULTED VARCHAR2(1)
DEFAULT_VALUE LONG
DEFAULT_LENGTH NUMBER
IN_OUT VARCHAR2(9)
DATA_LENGTH NUMBER
DATA_PRECISION NUMBER
DATA_SCALE NUMBER
RADIX NUMBER
CHARACTER_SET_NAME VARCHAR2(44)
TYPE_OWNER VARCHAR2(30)
TYPE_NAME VARCHAR2(30)
TYPE_SUBNAME VARCHAR2(30)
TYPE_LINK VARCHAR2(128)
PLS_TYPE VARCHAR2(30)
CHAR_LENGTH NUMBER
CHAR_USED VARCHAR2(1)


Solution of the Problem

1.With TOAD versions older than 8.6.1, don't connect to Oracle 10.2.0.2 or higher. Not an actual solution.

2.Upgrade TOAD; on version 9.5.0 it works fine.

3.Alter the ALL_ARGUMENTS view on the database and move the SUBPROGRAM_ID column to the end. This is not supported by Oracle. It is reasonably safe to try on a test database, but it is not recommended on production, though I have never seen a side effect.

In your 11.1 database, connect as "sys as sysdba" and create the view as below.

CREATE OR REPLACE FORCE VIEW "SYS"."ALL_ARGUMENTS" ("OWNER", "OBJECT_NAME",
"PACKAGE_NAME", "OBJECT_ID", "OVERLOAD", "ARGUMENT_NAME", "POSITION", "SEQUENCE",
"DATA_LEVEL", "DATA_TYPE", "DEFAULT_VALUE", "DEFAULT_LENGTH", "IN_OUT",
"DATA_LENGTH", "DATA_PRECISION", "DATA_SCALE", "RADIX", "CHARACTER_SET_NAME",
"TYPE_OWNER", "TYPE_NAME", "TYPE_SUBNAME", "TYPE_LINK", "PLS_TYPE", "CHAR_LENGTH",
"CHAR_USED") AS
select
u.name, /* OWNER */
nvl(a.procedure$,o.name), /* OBJECT_NAME */
decode(a.procedure$,null,null, o.name), /* PACKAGE_NAME */
o.obj#, /* OBJECT_ID */
decode(a.overload#,0,null,a.overload#), /* OVERLOAD */
a.argument, /* ARGUMENT_NAME */
a.position#, /* POSITION */
a.sequence#, /* SEQUENCE */
a.level#, /* DATA_LEVEL */
decode(a.type#, /* DATA_TYPE */
0, null,
1, decode(a.charsetform, 2, 'NVARCHAR2', 'VARCHAR2'),
2, decode(a.scale, -127, 'FLOAT', 'NUMBER'),
3, 'NATIVE INTEGER',
8, 'LONG',
9, decode(a.charsetform, 2, 'NCHAR VARYING', 'VARCHAR'),
11, 'ROWID',
12, 'DATE',
23, 'RAW',
24, 'LONG RAW',
29, 'BINARY_INTEGER',
69, 'ROWID',
96, decode(a.charsetform, 2, 'NCHAR', 'CHAR'),
100, 'BINARY_FLOAT',
101, 'BINARY_DOUBLE',
102, 'REF CURSOR',
104, 'UROWID',
105, 'MLSLABEL',
106, 'MLSLABEL',
110, 'REF',
111, 'REF',
112, decode(a.charsetform, 2, 'NCLOB', 'CLOB'),
113, 'BLOB', 114, 'BFILE', 115, 'CFILE',
121, 'OBJECT',
122, 'TABLE',
123, 'VARRAY',
178, 'TIME',
179, 'TIME WITH TIME ZONE',
180, 'TIMESTAMP',
181, 'TIMESTAMP WITH TIME ZONE',
231, 'TIMESTAMP WITH LOCAL TIME ZONE',
182, 'INTERVAL YEAR TO MONTH',
183, 'INTERVAL DAY TO SECOND',
250, 'PL/SQL RECORD',
251, 'PL/SQL TABLE',
252, 'PL/SQL BOOLEAN',
'UNDEFINED'),
default$, /* DEFAULT_VALUE */
deflength, /* DEFAULT_LENGTH */
decode(in_out,null,'IN',1,'OUT',2,'IN/OUT','Undefined'), /* IN_OUT */
length, /* DATA_LENGTH */
precision#, /* DATA_PRECISION */
decode(a.type#, 2, scale, 1, null, 96, null, scale), /* DATA_SCALE */
radix, /* RADIX */
decode(a.charsetform, 1, 'CHAR_CS', /* CHARACTER_SET_NAME */
2, 'NCHAR_CS',
3, NLS_CHARSET_NAME(a.charsetid),
4, 'ARG:'||a.charsetid),
a.type_owner, /* TYPE_OWNER */
a.type_name, /* TYPE_NAME */
a.type_subname, /* TYPE_SUBNAME */
a.type_linkname, /* TYPE_LINK */
a.pls_type, /* PLS_TYPE */
decode(a.type#, 1, a.scale, 96, a.scale, 0), /* CHAR_LENGTH */
decode(a.type#,
1, decode(bitand(a.properties, 128), 128, 'C', 'B'),
96, decode(bitand(a.properties, 128), 128, 'C', 'B'), 0) /* CHAR_USED */
from obj$ o,argument$ a,user$ u
where o.obj# = a.obj#
and o.owner# = u.user#
and (owner# = userenv('SCHEMAID')
or exists
(select null from v$enabledprivs where priv_number in (-144,-141))
or o.obj# in (select obj# from sys.objauth$ where grantee# in
(select kzsrorol from x$kzsro) and privilege# = 12));
View created.


After creating now try to connect to oracle database with your existing toad and problem should go away.

Changing a DBA user to a normal user in oracle

Many times a user has been granted the DBA super role instead of individual privileges. Later, whenever you want to revoke the DBA role, you need to take care of which privileges to grant the user instead.

Before the example, let's take an overview of some views related to privileges and roles in Oracle.

1)DBA_SYS_PRIVS describes system privileges granted to users and roles.

SQL> desc dba_sys_privs
Name Null? Type
----------------------------------------- -------- ----------------------------
GRANTEE NOT NULL VARCHAR2(30)
PRIVILEGE NOT NULL VARCHAR2(40)
ADMIN_OPTION VARCHAR2(3)

2)USER_SYS_PRIVS describes system privileges granted to the current user.

SQL> desc user_sys_privs
Name Null? Type
----------------------------------------- -------- ----------------------------
USERNAME VARCHAR2(30)
PRIVILEGE NOT NULL VARCHAR2(40)
ADMIN_OPTION VARCHAR2(3)

3)DBA_ROLE_PRIVS describes the roles granted to all users and roles in the database.

SQL> desc dba_role_privs
Name Null? Type
----------------------------------------- -------- ----------------------------
GRANTEE VARCHAR2(30)
GRANTED_ROLE NOT NULL VARCHAR2(30)
ADMIN_OPTION VARCHAR2(3)
DEFAULT_ROLE VARCHAR2(3)

4)DBA_TAB_PRIVS describes all object grants in the database. Note that the TABLE_NAME column does not list only tables; it lists any object, including tables, packages, indexes, sequences, and so on.

SQL> desc dba_tab_privs
Name Null? Type
----------------------------------------- -------- ----------------------------
GRANTEE NOT NULL VARCHAR2(30)
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
GRANTOR NOT NULL VARCHAR2(30)
PRIVILEGE NOT NULL VARCHAR2(40)
GRANTABLE VARCHAR2(3)
HIERARCHY VARCHAR2(3)

In this example, we want to change a DBA user named "OMS" into a normal user.
Let's see which roles the user OMS has been granted.

SQL> select * from dba_role_privs where grantee='OMS';

GRANTEE GRANTED_ROLE ADM DEF
------------------------------ ------------------------------ --- ---
OMS RESOURCE NO YES
OMS JAVAUSERPRIV NO YES
OMS DBA NO YES

These roles may contain many privileges. For example, the role RESOURCE contains the following privileges.

SQL> select * from dba_sys_privs where grantee='RESOURCE';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
RESOURCE CREATE TRIGGER NO
RESOURCE CREATE SEQUENCE NO
RESOURCE CREATE TYPE NO
RESOURCE CREATE PROCEDURE NO
RESOURCE CREATE CLUSTER NO
RESOURCE CREATE OPERATOR NO
RESOURCE CREATE INDEXTYPE NO
RESOURCE CREATE TABLE NO

8 rows selected.

Let's see the privileges assigned directly to user OMS.

SQL> select * from dba_sys_privs where grantee='OMS';

GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
OMS UNLIMITED TABLESPACE NO

Now let's see which tablespaces contain the objects owned by the user OMS. We need to assign quotas on those tablespaces before revoking the DBA role.

The tablespaces containing user OMS's objects:


SQL> DEFINE owner='OMS'
SQL> select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';'
from dba_tables where owner='&OWNER'
UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_indexes
where owner='&OWNER'
UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_tab_partitions
where table_owner='&OWNER'
UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name
||';' from dba_ind_partitions
where index_owner='&OWNER';
old 1: select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';'
new 1: select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';'
old 2: from dba_tables where owner='&OWNER'
new 2: from dba_tables where owner='OMS'
old 3: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_indexes
new 3: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_indexes
old 4: where owner='&OWNER'
new 4: where owner='OMS'
old 5: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_tab_partitions
new 5: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_tab_partitions
old 6: where table_owner='&OWNER'
new 6: where table_owner='OMS'
old 7: UNION select distinct 'Alter user &OWNER quota unlimited on '|| tablespace_name ||';' from dba_ind_partitions
new 7: UNION select distinct 'Alter user OMS quota unlimited on '|| tablespace_name ||';' from dba_ind_partitions
old 8: where index_owner='&OWNER'
new 8: where index_owner='OMS'

'ALTERUSEROMSQUOTAUNLIMITEDON'||TABLESPACE_NAME||';'
-----------------------------------------------------------------
Alter user OMS quota unlimited on OMS_INDX_SPC;
Alter user OMS quota unlimited on OMS_SPC;



Let's see whether any object privileges are granted to user OMS.

SQL> select * from dba_tab_privs where grantee='OMS';

no rows selected

Now we grant the privileges and assign the quotas to user OMS, and then revoke the DBA role.

Assigning privilege by,
GRANT CREATE SESSION, CREATE TRIGGER, CREATE SEQUENCE, CREATE TYPE, CREATE PROCEDURE,
CREATE CLUSTER, CREATE OPERATOR, CREATE INDEXTYPE, CREATE TABLE TO OMS;


Giving quota on the tablespaces by,

ALTER USER OMS QUOTA UNLIMITED on OMS_SPC;
ALTER USER OMS QUOTA UNLIMITED on OMS_INDX_SPC;


Now revoking DBA role by,

REVOKE DBA FROM OMS;
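To confirm the role was revoked and the quotas are in place, you can check the standard data dictionary views (a quick sketch; the output will vary with your database):

```sql
-- DBA should no longer appear among OMS's granted roles
SELECT granted_role FROM dba_role_privs WHERE grantee = 'OMS';

-- The two tablespace quotas should show MAX_BYTES = -1 (unlimited)
SELECT tablespace_name, max_bytes FROM dba_ts_quotas WHERE username = 'OMS';
```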

Saturday, January 24, 2009

ORA-04062: timestamp of procedure has been changed

Problem Description
In the database I have created one procedure named "A" as below.

create or replace procedure a(a number) as
begin
insert into t1 values(1);
commit;
end;
/


Now, after creating a database link from a remote database machine, whenever I access this procedure "A" it executes successfully and value "1" is inserted into table t1, as in the example below, where orastdby_m is the database link and maestro is the schema name. The argument value 1 is supplied even though the argument is not used in the procedure.

SQL> exec maestro.a@orastdby_m(1);

PL/SQL procedure successfully completed.

Now on the source database machine I changed the procedure as below. Any change will do, even altering a literal or adding or removing a space. Here I changed the inserted value from 1 to 2.

create or replace procedure a(a number) as
begin
insert into t1 values(2);
commit;
end;
/


Now in the other database, whenever I execute the procedure over the database link it throws error ORA-04062. But subsequent executions go OK without any error, unless I change something inside the original procedure again.

SQL> exec maestro.a@orastdby_m(1);
BEGIN maestro.a@orastdby_m(1); END;

*
ERROR at line 1:
ORA-04062: timestamp of procedure "MAESTRO.A" has been changed
ORA-06512: at line 1


SQL> exec maestro.a@orastdby_m(1);

PL/SQL procedure successfully completed.

Problem Analysis
ORA-04062 indicates that the TIMESTAMP or SIGNATURE of NAME has been changed.
When a local piece of PL/SQL references a remote package, function, or procedure, the local PL/SQL engine needs to know whether the reference is still valid or the remote procedure has changed.

The locally compiled PL/SQL code is dependent on the remote code. Oracle tracks this dependency with one of two models: TIMESTAMPS or SIGNATURES.

The initialization parameter REMOTE_DEPENDENCIES_MODE determines which method is used. It can be set to either TIMESTAMP or SIGNATURE, at the instance level (via ALTER SYSTEM) or at the session level (via ALTER SESSION). It can also be set in the client-side session.

Oracle also allows "runtime binding", which lets client PL/SQL delay the actual binding of a reference to a SCHEMA.OBJECT until run time.

REMOTE_DEPENDENCIES_MODE = Timestamp
If the dependency mode is set to TIMESTAMP, the local PL/SQL block can only execute the remote PL/SQL block if the timestamp on the remote procedure matches the timestamp stored in the locally compiled PL/SQL block. If the timestamps do not match, the local PL/SQL must be recompiled.

REMOTE_DEPENDENCIES_MODE = Signature
If the dependency mode is set to SIGNATURE, the local PL/SQL block can still execute the remote PL/SQL block if its "signature" is the same, even if the timestamp has changed.

The term "signature" basically means the interface (procedure name, parameter types or modes) is the same, even if the underlying implementation has changed.

The error "ORA-04062: timestamp of procedure has been changed" is reported when the local PL/SQL block cannot call the remote procedure because the timestamp or signature has changed on the remote end. A local recompilation may be required to make the call.

In the case of server to server calls, the local PL/SQL block is implicitly recompiled on the next call after an ORA-4062 error. In the case of client tools to server calls, the client Form or Report usually needs to be recompiled explicitly.

Solution of the Problem
When client-side PL/SQL tools are used, or when server-side PL/SQL calls are made across database links, set REMOTE_DEPENDENCIES_MODE to SIGNATURE. This reduces the chances of ORA-04062 errors and the need for unnecessary recompilations.
You can change it on the client side by,

SQL> alter session set REMOTE_DEPENDENCIES_MODE=SIGNATURE;
Session altered.
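The parameter can also be set instance-wide so that every new session picks it up; a minimal sketch (with an spfile the default scope persists the change across restarts, otherwise add the parameter to your pfile):

```sql
-- Applies to the whole instance rather than a single session
ALTER SYSTEM SET remote_dependencies_mode = SIGNATURE;
```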

Now changing the definition of procedure "A" in the source database will not result in ORA-04062 in the remote database.

ORA-02082: a loopback database link must have a connection qualifier

Problem Description
You are trying to create or drop a database link with the same name as the database. This can happen if you have a database that was cloned from another database on a different machine, and you now try to create or drop a database link with the name of the original database. Below is an example.

SQL> create database link tiger;
create database link tiger
*
ERROR at line 1:
ORA-02082: a loopback database link must have a connection qualifier


SQL> drop database link tiger;
drop database link tiger
*
ERROR at line 1:
ORA-02082: a loopback database link must have a connection qualifier

Cause of the Problem
This is expected behavior when the database global name matches the database link name. Now let's see the global_name of the database.

SQL> select * from global_name;

GLOBAL_NAME
---------------------------------------
TIGER.REGRESS.RDBMS.DEV.US.ORACLE.COM

We see the global name of the database starts with TIGER (the db_name) and the default db_domain is REGRESS.RDBMS.DEV.US.ORACLE.COM.

Now, whenever we try to create a database link named TIGER (without any domain), Oracle expands the name to TIGER.REGRESS.RDBMS.DEV.US.ORACLE.COM (link name + default domain), which is the same as the global_name of the database, and the error occurs because a database link name must not equal the global database name.

Solution of the Problem
There are two different solutions to this problem.
1)Change the database link name so that it is different from the global database name.

SQL> create database link tiger.net connect to user_name identified by password using 'TNS_NAME';

Database link created.
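The error text itself suggests a third option: keep the same name but add a connection qualifier (the part after the @), which makes a loopback link legal. A sketch, where the qualifier name loopback is arbitrary:

```sql
-- Refer to the link later as tiger@loopback
CREATE DATABASE LINK tiger@loopback
  CONNECT TO user_name IDENTIFIED BY password USING 'TNS_NAME';
```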

2)Change the global name of the database, create/drop the database link, and then change the global name back to its original value.
i)Error when creating a database link with the same name as the global_name.

SQL> create database link tiger connect to user_name identified by password using 'TNS_NAME';
create database link tiger connect to user_name identified by password using 'TNS_NAME'
*
ERROR at line 1:
ORA-02082: a loopback database link must have a connection qualifier

ii)Change the global database name.
SQL> alter database rename global_name to test;
Database altered.

iii)Now dropping the database link tiger will work, because its name (TIGER.REGRESS.RDBMS.DEV.US.ORACLE.COM) is no longer the same as the global name (TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM).

SQL> select * from global_name;

GLOBAL_NAME
---------------------------------------
TEST.REGRESS.RDBMS.DEV.US.ORACLE.COM

SQL> drop database link tiger;
Database link dropped.

iv)Creating a database link named tiger (expanded by default to TIGER.REGRESS.RDBMS.DEV.US.ORACLE.COM) will also work now.
SQL> create database link tiger connect to user_name identified by password using 'TNS_NAME';

Database link created.

You can check it by,

SQL> col host for a10
SQL> set lines 140
SQL> col owner for a10
SQL> col db_link for a40
SQL> select * from dba_db_links where host='TNS_NAME';

OWNER DB_LINK USERNAME HOST CREATED
---------- ---------------------------------------- ---------------- ---------- ---------
MAXIMSG TIGER.REGRESS.RDBMS.DEV.US.ORACLE.COM USER_NAME TNS_NAME 24-JAN-09
MAXIMSG TIGER.NET USER_NAME TNS_NAME 24-JAN-09


v)Change the global database name back to the original.
SQL> alter database rename global_name to tiger;

Database altered.

Related Documents
Troubleshooting ORA-2085 "database link %s connects to %s"
ORA-02070: database does not support in this context

Tuesday, January 13, 2009

ORA-00845: MEMORY_TARGET not supported on this system

Problem Description
While creating a database using dbca, the database creation GUI gives the error message in a pop-up window,
ORA-00845: MEMORY_TARGET not supported on this system
from which you can ignore the error message.
A similar scenario also occurs whenever you try to start your database: startup shows an error message like below.

SQL> STARTUP
ORA-00845: MEMORY_TARGET not supported on this system

Cause of the Problem
•Starting from Oracle 11g, the automatic memory management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.

•On Linux, the shared memory file system needs to be mounted on the /dev/shm directory of the operating system.

•And the size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.

•AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process.

•The MEMORY_TARGET parameter in 11g replaces the (SGA_TARGET + PGA_AGGREGATE_TARGET) combination used in 10g.

•And the MEMORY_MAX_TARGET parameter in 11g comes instead of the SGA_MAX_SIZE parameter of 10g.

•ORA-00845 can arise for the following two reasons on a Linux system.

1)The shared memory mapped to the /dev/shm directory is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET.
or,
2)The shared memory is not mapped to the /dev/shm directory.

Solution of the Problem
Make sure /dev/shm is properly mounted. You can see it with the
#df -h or #df -k command.
The output should look similar to:
$ df -k
Filesystem Size Used Avail Use% Mounted on
...
shmfs 1G 512M 512M 50% /dev/shm

We see here that /dev/shm has 1G of memory assigned. Now if you set MEMORY_TARGET to more than 1G, the above ORA-00845 will arise. For example, if you have MEMORY_TARGET or MEMORY_MAX_TARGET set to 12G, then you can mount the shared memory with size 13g as below.
As the root user,
# mount -t tmpfs shmfs -o size=13g /dev/shm
In order to make the setting persistent, so that it survives a machine restart, add an entry in /etc/fstab similar to the following:

shmfs /dev/shm tmpfs size=13g 0 0

Adding a default value to a column on a table

In Oracle there are two ways to add a default value to a column. This means that if you don't provide a value for the column, the column automatically gets the default value. You can provide a default value with the ALTER TABLE ... ADD statement or the ALTER TABLE ... MODIFY statement. The first adds a new column and assigns it a default value; the second gives an existing column a default value. Below is the description along with examples.

1)ALTER TABLE ... ADD Statement:
With
ALTER TABLE table_name ADD column_name data_type DEFAULT default_value
you can add a column with a default value. If you add a default value in this way, Oracle Database updates each row of the new column with the value you specify for DEFAULT. As this is an update operation, it in turn fires any AFTER UPDATE triggers defined on the table.

SQL> create table test_default (col1 number);
Table created.

SQL> insert into test_default values(1);

1 row created.

SQL> insert into test_default values(2);
1 row created.

SQL> select * from test_default;
COL1
----------
1
2

SQL> alter table test_default add col2 varchar2(10) default 'Dhaka';
Table altered.

SQL> select * from test_default;
COL1 COL2
---------- ----------
1 Dhaka
2 Dhaka
We see every row of col2 gets the default value, which means every row was updated.

You can see the default value of a column in the user_tab_columns view, or in user_tab_cols, or via the cols synonym.

SQL> select data_default,column_name from cols where table_name='TEST_DEFAULT';

DATA_DEFAULT COLUMN_NAME
---------------------------------------- ------------------------------
COL1
'Dhaka' COL2

Always remember: once you assign a default value to a column, you can never remove the default completely. You can set the default value to NULL, but the data dictionary will still show NULL as the default. Below is the example.

The following statement has no effect on the default value, and hence is useless.

SQL> alter table test_default modify col2 varchar2(10);

Table altered.

In the data dictionary it will display the default value.
SQL> select data_default,column_name from cols where table_name='TEST_DEFAULT';

DATA_DEFAULT COLUMN_NAME
---------------------------------------- ------------------------------
COL1
'Dhaka' COL2

In order to set the default value to NULL, issue,
SQL> alter table test_default modify col2 varchar2(10) default NULL;

Table altered.

Now data dictionary will display NULL in the data_default field.
SQL> select data_default,column_name from cols where table_name='TEST_DEFAULT';

DATA_DEFAULT COLUMN_NAME
---------------------------------------- ------------------------------
COL1
NULL COL2

If no value is given, NULL will be inserted into the column.
SQL> insert into test_default(col1) values(3);

1 row created.

SQL> select * from test_default;

COL1 COL2
---------- ----------
1 Dhaka
2 Dhaka
3
2)ALTER TABLE ... MODIFY statement:
With,
ALTER TABLE table_name MODIFY column_name DEFAULT default_value
you can change a column's default value. But unlike ALTER TABLE ... ADD, MODIFY will not update the existing column values in the table. Subsequent inserts will be assigned the default value if no value is provided.

SQL> alter table test_default modify col2 DEFAULT 'Jhenidah';

Table altered.

SQL> select * from test_default;

COL1 COL2
---------- ----------
1 Dhaka
2 Dhaka
3

SQL> insert into test_default(col1) values(4);

1 row created.

SQL> select * from test_default;

COL1 COL2
---------- ----------
1 Dhaka
2 Dhaka
3
4 Jhenidah
Related Documents
http://arjudba.blogspot.com/2008/09/how-to-disable-and-enable-all.html
http://arjudba.blogspot.com/2008/05/create-user-in-oracle.html
http://arjudba.blogspot.com/2008/06/create-temporary-table-in-oracle.html
Create, Alter , Rename, Modify Table SQL
http://arjudba.blogspot.com/2008/06/drop-table-in-oracle.html

Monday, January 12, 2009

Resolve of DIM-00019: create service error

Problem Scenario
You have a cold backup of the Oracle database directory structure along with the Oracle software. Now your Windows OS gets corrupted, so you reinstall the Windows operating system and try to restore the database.

After installing the software you set the ORACLE_SID and ORACLE_HOME environment variables, but whenever you create the Oracle service with oradim it fails with the message below.
C:\>oradim -new -sid orcl -intpwd orcl -startmode manual -pfile 'F:\oracle10g\pr
oduct\10.2.0\db_1\database\initorcl.ora'

Instance created.
DIM-00019: create service error
O/S-Error: (OS 2) The system cannot find the file specified.

Solution of the Problem

On a Linux environment you don't need these extra steps, but on Windows you need to do much more than restore files. When a Windows OS gets corrupted, you lose the Oracle Universal Installer repository and the registry entries along with the Oracle home structure. So to solve the problem it is better to install fresh Oracle software on your Windows machine without any database, and then create the Oracle instance and service using oradim.

Friday, January 9, 2009

ORA-12838: cannot read/modify an object after modifying it in parallel

Problem Description
SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 4 from STATION_RATE_DUMMY;


7561 rows created.

SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 5 from STATION_RATE_DUMMY;

INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

Cause of the Problem
A table was modified in parallel or with a direct-path load within a transaction. If, within the same transaction, an attempt is made to read from or modify that table, ORA-12838 occurs. In Oracle, once a table has been modified with a direct-path load or in parallel, further access to it within the same transaction is not permitted.

Solution of the Problem
Break the transaction into two, or rewrite it. You can break the transaction into two simply by issuing a commit after the direct-path load or parallel modification of the table.

SQL> commit;

Commit complete.

SQL> INSERT /*+APPEND NOLOGGING */ INTO STATION_RATE(ID,COUNTRY,CODE,PURCHASE_RATE,STATION_ID) SELECT
ID.NEXTVAL, COUNTRY,CODE,PURCHASE_RATE, 5 from STATION_RATE_DUMMY;


7561 rows created.

As the data was loaded via direct-path load, we can't read from the table until we commit.

SQL> select count(*) from station_rate;
select count(*) from station_rate
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL> commit;

Commit complete.

SQL> select count(*) from station_rate;

COUNT(*)
----------
53228
Related Documents
http://arjudba.blogspot.com/2009/12/oracle-object-type-exercises-varray.html
http://arjudba.blogspot.com/2009/12/practice-oracle-joins-examples.html
http://arjudba.blogspot.com/2009/12/oracle-security-practices.html
http://arjudba.blogspot.com/2009/12/exercises-with-oracle-create-table-add.html
http://arjudba.blogspot.com/2009/12/oracle-database-creation-exercises.html
http://arjudba.blogspot.com/2009/12/basic-oracle-sql-exercise.html
http://arjudba.blogspot.com/2009/08/format-model-modifiers-fx-and-fm.html
http://arjudba.blogspot.com/2009/08/number-format-models-in-oracle.html
http://arjudba.blogspot.com/2009/08/format-models-in-oracle.html
http://arjudba.blogspot.com/2009/07/sql-decode.html
http://arjudba.blogspot.com/2009/07/how-to-know-row-of-table-belong-to.html
http://arjudba.blogspot.com/2009/06/how-to-know-which-objects-are-being.html
http://arjudba.blogspot.com/2009/06/ddl-with-wait-option-in-11g.html
http://arjudba.blogspot.com/2009/06/ora-00939-too-many-arguments-when-case.html
http://arjudba.blogspot.com/2009/03/oracle-datatype-internal-code.html
http://arjudba.blogspot.com/2009/03/how-to-know-list-of-constraints-and.html
http://arjudba.blogspot.com/2009/02/how-to-know-dependent-objectswhich.html
http://arjudba.blogspot.com/2009/02/how-to-search-stringkey-value-from.html
http://arjudba.blogspot.com/2009/02/how-to-know-when-tableobjects-ddlcode.html
http://arjudba.blogspot.com/2009/02/ora-00920-invalid-relational-operator.html
http://arjudba.blogspot.com/2009/01/adding-default-value-to-column-on-table.html
http://arjudba.blogspot.com/2009/01/ora-12838-cannot-readmodify-object.html
http://arjudba.blogspot.com/2009/01/ora-01779-cannot-modify-column-which.html
http://arjudba.blogspot.com/2009/01/updating-table-based-on-another-table.html
http://arjudba.blogspot.com/2009/01/ora-00054-resource-busy-and-acquire.html
http://arjudba.blogspot.com/2008/12/troubleshoot-ora-02292-ora-02449-and.html

Wednesday, January 7, 2009

ORA-01779: cannot modify a column which maps to a non key-preserved table

Problem Description
I wanted to update a table column based on another table data but it fails with error
ORA-01779: cannot modify a column which maps to a non key-preserved table.
SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;
set col1=col2
*
ERROR at line 4:
ORA-01779: cannot modify a column which maps to a non key-preserved table

Cause, Description and Solution of the Problem
Let's look both of table's data.
SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Rows to be updated

SQL> select * from table_2;

ID CODE
---------- --------------------
2 Second Row

From the above update query, the select part return following rows,

SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;

COL1 COL2
-------------------- --------------------
Rows to be updated Second Row

In this case col1 value will be replaced by col2 value. Now if source table's id column is not unique then see the below case:
SQL> insert into table_2 values(2,'Test Row');

1 row created.

SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;


COL1 COL2
-------------------- --------------------
Rows to be updated Second Row
Rows to be updated Test Row

Now, which value of col2 should replace the col1 data "Rows to be updated": "Second Row" or "Test Row"? It is ambiguous. So, to update col1 in this way, the id column of the table containing col2 must be unique, so that no ambiguous situation can occur.

SQL> rollback;

Rollback complete.

Adding a unique constraint solves the problem.
SQL> alter table table_2 add constraint table_2_uk unique(id);

Table altered.

Now we have a unique constraint on id, so no ambiguous situation can occur.
SQL> select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id;

COL1 COL2
-------------------- --------------------
Rows to be updated Second Row

So the update will work fine.
SQL> update
(select t1.code col1, t2.code col2 from table_1 t1
JOIN table_2 t2 ON t1.id=t2.id)
set col1=col2;

1 row updated.

SQL> select * from table_1;

ID CODE
---------- --------------------
1 First Row
2 Second Row
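If adding a unique constraint is not an option, a MERGE statement is a common alternative for the same update, since MERGE does not require the source to be key-preserved; a sketch against the same tables (note that MERGE instead raises ORA-30926 at run time if table_2 returns more than one row per id):

```sql
MERGE INTO table_1 t1
USING table_2 t2
ON (t1.id = t2.id)
WHEN MATCHED THEN
  UPDATE SET t1.code = t2.code;
```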
Related Documents
http://arjudba.blogspot.com/2009/12/oracle-object-type-exercises-varray.html
http://arjudba.blogspot.com/2009/12/practice-oracle-joins-examples.html
http://arjudba.blogspot.com/2009/12/oracle-security-practices.html
http://arjudba.blogspot.com/2009/12/exercises-with-oracle-create-table-add.html
http://arjudba.blogspot.com/2009/12/oracle-database-creation-exercises.html
http://arjudba.blogspot.com/2009/12/basic-oracle-sql-exercise.html
http://arjudba.blogspot.com/2009/08/format-model-modifiers-fx-and-fm.html
http://arjudba.blogspot.com/2009/08/number-format-models-in-oracle.html
http://arjudba.blogspot.com/2009/08/format-models-in-oracle.html
http://arjudba.blogspot.com/2009/07/sql-decode.html
http://arjudba.blogspot.com/2009/07/how-to-know-row-of-table-belong-to.html
http://arjudba.blogspot.com/2009/06/how-to-know-which-objects-are-being.html
http://arjudba.blogspot.com/2009/06/ddl-with-wait-option-in-11g.html
http://arjudba.blogspot.com/2009/06/ora-00939-too-many-arguments-when-case.html
http://arjudba.blogspot.com/2009/03/oracle-datatype-internal-code.html
http://arjudba.blogspot.com/2009/03/how-to-know-list-of-constraints-and.html
http://arjudba.blogspot.com/2009/02/how-to-know-dependent-objectswhich.html
http://arjudba.blogspot.com/2009/02/how-to-search-stringkey-value-from.html
http://arjudba.blogspot.com/2009/02/how-to-know-when-tableobjects-ddlcode.html
http://arjudba.blogspot.com/2009/02/ora-00920-invalid-relational-operator.html
http://arjudba.blogspot.com/2009/01/adding-default-value-to-column-on-table.html
http://arjudba.blogspot.com/2009/01/ora-12838-cannot-readmodify-object.html
http://arjudba.blogspot.com/2009/01/ora-01779-cannot-modify-column-which.html
http://arjudba.blogspot.com/2009/01/updating-table-based-on-another-table.html
http://arjudba.blogspot.com/2009/01/ora-00054-resource-busy-and-acquire.html
http://arjudba.blogspot.com/2008/12/troubleshoot-ora-02292-ora-02449-and.html

Monday, January 5, 2009

How to install or run .iso files

From rapidshare or megaupload or any other server you downloaded a .rar file. Now when you unrar/uncompress it you get a file with an extension of .iso. How do you run it, install from it, or see the contents of the .iso file?

In fact an iso file is a disk image of an optical disk(CD/DVD). ISO stands for International Organization for Standardization.

An iso file contains the image of a disk, which means it contains all the files and folders that were on that disk, much like a zip or cab file contains a collection of files and folders. The real difference is that an iso is a byte-for-byte copy of the low-level data actually stored on the disk.

In order to run the iso file you need software that can read .iso files. Think of it as being like winrar, which can read files with the .rar extension, or unzip, which can read files with the .zip extension.

There are lots of programs that can open iso files. The most common are CloneCD, CDRWIN, MagicISO, etc.

Improving Index creation speed in Oracle

Creating an index over a large number of rows can be a time-consuming task. For example, if you are asked to create an index over 1 billion rows, it may take over 6 hours on your machine, and you want to make it faster.

By providing several options while creating the index, you can improve index creation speed dramatically.

1)PARALLEL Option:
While creating an index, Oracle must first collect the symbolic key/ROWID pairs with a full-table scan. With the PARALLEL option supplied to the CREATE INDEX statement, Oracle will scan the table in parallel, based on the number of CPUs, table partitioning and disk configuration, and thus finish the task faster.

On a server that has 6 CPUs you may specify parallel 5 as below.
create index table_1_I on table_1(id,code) parallel 5;

2)NOLOGGING Option: With the NOLOGGING option provided while creating the index, you prevent the database from generating a large amount of redo. NOLOGGING generates minimal redo, so you achieve higher performance.
create index table_1_I on table_1(id,code) parallel 5 nologging;

3)COMPRESS Option: With the COMPRESS option you enable key compression, which eliminates repeated occurrences of key column values and may substantially reduce storage. Use an integer to specify the prefix length (the number of prefix columns to compress).
For unique indexes, the valid range of prefix length values is from 1 to the number of key columns minus 1. The default prefix length is the number of key columns minus 1.

For nonunique indexes, the valid range of prefix length values is from 1 to the number of key columns. The default prefix length is the number of key columns.

You can use as,
create index table_1_I on table_1(id,code) parallel 5 nologging compress;
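COMPRESS also accepts the explicit prefix-length integer described above; for example, to compress only the leading id column of this two-column index (a sketch; the index name table_1_IC is hypothetical):

```sql
-- COMPRESS 1 compresses repeats of the first key column only
create index table_1_IC on table_1(id,code) compress 1;
```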

4)Index in a bigger block: You can create an index in a tablespace that uses a bigger block size. If you have a DSS environment you can do this, and it will improve performance while creating the index.

You can do it by first creating a tablespace with a 32k block size and then creating the index in it,

create tablespace ts_32k
datafile '/u01/32k_file.dbf' size 100m
blocksize 32k;


create index table_1_I on table_1(id,code) parallel 5 nologging compress tablespace ts_32k;
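Note that a non-default block size only works if a matching buffer cache has been configured first; a minimal sketch, assuming a standard 8k block size (the 64M cache size is illustrative):

```sql
-- Required once, before creating any 32k-blocksize tablespace
ALTER SYSTEM SET db_32k_cache_size = 64M;
```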