If you want to delete all objects in a schema, it is usually better to drop the schema itself (for example, DROP USER test CASCADE;): it is faster and simpler. However, a user can also manually drop all of the objects inside his own schema. Below is the procedure for dropping all objects under a schema.
1)Connect to the database as the user whose objects you want to drop.
Suppose I want to drop the TEST schema's objects.
$ sqlplus test/test
2)Script for dropping all objects.
declare
  cursor c_get_objects is
    select object_type,
           '"'||object_name||'"'||decode(object_type,'TABLE',' cascade constraints',null) obj_name
      from user_objects
     where object_type in ('TABLE','VIEW','PACKAGE','SEQUENCE','PROCEDURE',
                           'FUNCTION','SYNONYM','MATERIALIZED VIEW')
     order by object_type;
  cursor c_get_objects_type is
    select object_type, '"'||object_name||'"' obj_name
      from user_objects
     where object_type in ('TYPE');
begin
  for object_rec in c_get_objects loop
    execute immediate ('drop '||object_rec.object_type||' '||object_rec.obj_name);
  end loop;
  -- Types can depend on one another, so ignore individual failures;
  -- rerun the block (or use DROP TYPE ... FORCE) if any types remain.
  for object_rec in c_get_objects_type loop
    begin
      execute immediate ('drop '||object_rec.object_type||' '||object_rec.obj_name);
    exception
      when others then null;
    end;
  end loop;
end;
/
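The statement-building logic in the cursor above (append CASCADE CONSTRAINTS only for tables) can also be sketched outside the database. Here is an illustrative Python version; the object list is hard-coded as an assumption, standing in for what USER_OBJECTS would return:

```python
def drop_statements(objects):
    """Build DROP statements from (object_type, object_name) pairs.

    Tables get CASCADE CONSTRAINTS appended, mirroring the DECODE
    in the PL/SQL cursor above.
    """
    stmts = []
    for obj_type, obj_name in objects:
        suffix = " cascade constraints" if obj_type == "TABLE" else ""
        stmts.append('drop %s "%s"%s' % (obj_type.lower(), obj_name, suffix))
    return stmts

# Hypothetical object list, as USER_OBJECTS might return it.
objs = [("TABLE", "EMP"), ("VIEW", "EMP_V"), ("SEQUENCE", "EMP_SEQ")]
for s in drop_statements(objs):
    print(s)
```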
3)Purge the recycle bin.
SQL> PURGE RECYCLEBIN;
4)Drop the queues if you use Advanced Queuing.
declare
  cursor active_queues is
    select object_name from user_objects where object_type = 'QUEUE';
  -- Use USER_QUEUE_TABLES so that only queue tables are dropped,
  -- not every remaining table in the schema.
  cursor active_queue_tables is
    select queue_table from user_queue_tables;
begin
  for v_queue in active_queues loop
    DBMS_AQADM.STOP_QUEUE (queue_name => v_queue.object_name);
    DBMS_AQADM.DROP_QUEUE (queue_name => v_queue.object_name);
  end loop;
  for v_table in active_queue_tables loop
    DBMS_AQADM.DROP_QUEUE_TABLE (queue_table => v_table.queue_table, force => TRUE);
  end loop;
end;
/
5)Check whether any objects remain inside the schema.
Be sure no rows are returned after issuing the following query.
SQL> select * from user_objects;
no rows selected
Friday, September 12, 2008
How to generate an external table from SQL*Loader
In many cases we may prefer loading data through an external table rather than SQL*Loader, because an external table can load data in parallel. In this post I will show how to generate the external table DDL easily with the help of SQL*Loader. With this method we avoid writing a lot of external-table code by hand.
Suppose my data file is 3.txt, which is on the C drive of my Windows machine and contains a single record:
1, momin
The next step is to create a control file for SQL*Loader. I named it control.ctl and made it as follows:
LOAD DATA
infile 'c:\3.txt'
into table std
fields terminated by ','
(id , name)
Now invoke SQL*Loader as below. Note the additional keyword needed: external_table=generate_only. With this keyword only the external table DDL is generated; no data is actually loaded into the table.
sqlldr user_id/password control=c:\control.ctl log=5.txt external_table=generate_only
After invoking it, I open the log file; the important contents from the log file are:
CREATE DIRECTORY statements needed for files
------------------------------------------------------------------------
CREATE DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000 AS 'C:\'
CREATE DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000 AS 'c:\'
CREATE TABLE statement for external table:
------------------------------------------------------------------------
CREATE TABLE "SYS_SQLLDR_X_EXT_STD"
(
"ID" NUMBER,
"NAME" VARCHAR2(10)
)
ORGANIZATION external
(
TYPE oracle_loader
DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'3.bad'
LOGFILE 'c:\5.txt_xt'
READSIZE 1048576
FIELDS TERMINATED BY "," LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
"ID" CHAR(255)
TERMINATED BY ",",
"NAME" CHAR(255)
TERMINATED BY ","
)
)
location
(
'3.txt'
)
)REJECT LIMIT UNLIMITED
INSERT statements used to load internal tables:
------------------------------------------------------------------------
INSERT /*+ append */ INTO STD
(
ID,
NAME
)
SELECT
"ID",
"NAME"
FROM "SYS_SQLLDR_X_EXT_STD"
statements to cleanup objects created by previous statements:
------------------------------------------------------------------------
DROP TABLE "SYS_SQLLDR_X_EXT_STD"
DROP DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
DROP DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
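If you generate these logs often, the CREATE TABLE section can be pulled out of the log file programmatically. Below is a rough Python sketch, assuming the log layout shown above (a header line, a dashed rule, then the DDL, ending at the next section header); the embedded log excerpt is a trimmed hypothetical sample:

```python
def extract_external_ddl(log_text):
    """Pull the generated CREATE TABLE statement out of a
    SQL*Loader external_table=generate_only log.

    Assumes the layout shown above: a header line, a dashed rule,
    then the DDL, ending at the next section header.
    """
    ddl, capturing = [], False
    for line in log_text.splitlines():
        if line.startswith("CREATE TABLE statement for external table:"):
            capturing = True        # start of the DDL section
            continue
        if capturing and line.startswith("INSERT statements"):
            break                   # next section reached
        if capturing and not line.startswith("----"):
            ddl.append(line)
    return "\n".join(ddl).strip()

# Minimal hypothetical log excerpt.
log = """CREATE TABLE statement for external table:
------------------------------------------------------------------------
CREATE TABLE "SYS_SQLLDR_X_EXT_STD"
(
  "ID" NUMBER
)
INSERT statements used to load internal tables:"""
print(extract_external_ddl(log))
```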
Related Documents
How to load data using external table into oracle database
How to Load or copy data from SQL Server or excel to Oracle
Archive Destination Settings fails with ORA-32017 and ORA-16179
Problem Description
While enabling archiving of my log files to more than one location, whenever I set log_archive_dest_1 it fails with errors ORA-32017 and ORA-16179.
SQL> alter system set log_archive_dest_1='c:\test';
alter system set log_archive_dest_1='c:\test'
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_1" not allowed with SPFILE
Cause of The Problem
To set LOG_ARCHIVE_DEST_n you must include either the LOCATION or the SERVICE attribute for each destination, to specify where to archive the redo data. In the failing command neither was used.
Solution of The Problem
While setting LOG_ARCHIVE_DEST_n there are several usage notes:
•Either the LOCATION or the SERVICE attribute must be specified. There is no default.
•The LOCATION attribute specifies a local destination. You can set a local disk path, or USE_DB_RECOVERY_FILE_DEST to archive to the flash recovery area defined by the DB_RECOVERY_FILE_DEST initialization parameter.
•The SERVICE attribute specifies a remote destination.
•SERVICE is given a valid Oracle Net service name (SERVICE=net_service_name) that identifies the remote Oracle database instance to which the redo data will be sent.
Example:
To a local disk destination,
SQL> alter system set log_archive_dest_1='LOCATION=c:\test';
System altered.
SQL> alter system set log_archive_dest_2='LOCATION=g:\';
System altered.
Using flash recovery area,
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=USE_DB_RECOVERY_FILE_DEST';
Setting to a remote location,
ALTER SYSTEM SET LOG_ARCHIVE_DEST_4='SERVICE=stby1';
In tnsnames.ora, stby1 is defined to point to the remote location where the archived redo logs are to be placed.
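The rule behind the error (every destination string must lead with LOCATION= or SERVICE=) can be illustrated with a small client-side check. This is only a sketch of the rule for demonstration, not a replacement for the database's own validation:

```python
def valid_archive_dest(value):
    """Return True if a LOG_ARCHIVE_DEST_n value names a destination.

    Per the usage notes above, the string must lead with either the
    LOCATION or the SERVICE attribute; a bare path is rejected and
    would raise ORA-32017/ORA-16179 on the server side.
    """
    v = value.strip().upper()
    return v.startswith("LOCATION=") or v.startswith("SERVICE=")

print(valid_archive_dest("c:\\test"))           # the failing setting
print(valid_archive_dest("LOCATION=c:\\test"))  # the corrected one
print(valid_archive_dest("SERVICE=stby1"))      # remote destination
```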
Related Documents
ORA-16018 And ORA-16019 LOG_ARCHIVE_DEST with LOG_ARCHIVE_DEST_n
What will be the Archived Redo Log Destination in Oracle
Wednesday, September 10, 2008
How to recover or recreate temporary tablespace in 10g
You may discover that your database's temporary tablespace file has been deleted at the OS level, or that it has become corrupted. One option is to recover it: simply restore the temp file from backup and roll it forward using archived log files, if you are in archivelog mode.
Another solution is to simply drop the temporary tablespace, create a new one, and assign the new one to the database users.
In order to do that follow the steps here.
1)Find out the temporary datafiles.
SQL> col file_name format a50
SQL> set linesize 200
SQL> select file_name,file_id, tablespace_name from dba_temp_files;
FILE_NAME FILE_ID TABLESPACE_NAME
-------------------------------------------------- ---------- ------------------------------
/oradata2/temp2.dbf 1 TEMP2
/oradata2/temp.dbf 2 TEMP
/oradata2/temp3.dbf 4 TEMP3
2)Make the affected temporary files offline.
SQL> Alter database tempfile 1,2,4 offline;
Database altered.
3)Create a new temporary tablespace and make it the database default temporary tablespace.
SQL> create temporary tablespace temp01 tempfile '/oradata2/temp.dbf' size 10M;
Tablespace created.
SQL> alter database default temporary tablespace temp01;
Database altered.
4)Check for users whose default temporary tablespace is not the new one. Assign the new tablespace to those users as well.
SQL> select temporary_tablespace , username from dba_users where temporary_tablespace<>'TEMP01';
TEMPORARY_TABLESPACE USERNAME
------------------------------ ------------------------------
TEMP TEST2
TEMP2 ARJU
5)Explicitly assign the temporary tablespace to users TEST2 and ARJU.
SQL> alter user arju temporary tablespace temp01;
User altered.
SQL> alter user test2 temporary tablespace temp01;
User altered.
6)Drop the old temporary tablespaces.
SQL> drop tablespace temp;
Tablespace dropped.
SQL> drop tablespace temp2;
Tablespace dropped.
SQL> drop tablespace temp3;
Tablespace dropped.
Related Documents
Drop Temporary Tablespace Hangs
ORA-12906 cannot drop default temporary tablespace
Free space in Temporary Tablespace
Implicitly Assigned temp tablespace changes after database default tablespace change
Information about Temporary Segments.
The operation that require sort area or Temporary Tablespace
Drop Temporary Tablespace Hangs
Problem Description
The DROP of the temporary tablespace takes a long time and in fact hangs. If you take a 10046 trace of the session, it shows an "enqueue" wait.
Cause of The Problem
In http://arjudba.blogspot.com/2008/05/operation-that-require-sort-area-or.html I discussed the operations that need sort space. Whenever an operation is using sort space, an entry appears in v$sort_usage; after the operation finishes, the entry vanishes. But dead connections (ones that died while running a query) may leave entries behind in v$session (with status INACTIVE) and in v$sort_usage. A query listing the users performing sort operations in temp segments can be found in http://arjudba.blogspot.com/2008/05/information-about-temporary-segments.html
Solution of The Problem
1.Create a new temporary tablespace and assign all users to this new tablespace. On a Unix system you can do this as:
CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/oradata2/temp02.dbf' SIZE 100m;
Make it the default for all users by
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
2.Find out all the sessions that are not active but still have an entry in v$sort_usage.
You can do it by,
SELECT b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#,
a.username,a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr;
3.Kill those sessions using
alter system kill session 'SID_NUMBER,SERIAL#NUMBER';
for each session that is not actually being used, where SID_NUMBER and SERIAL#NUMBER are found in step 2.
4.Now drop the previous tablespace.
DROP TABLESPACE previous_temp_tbs;
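Step 3 is mechanical, so the kill statements can be generated from the (SID, SERIAL#) pairs returned by the query in step 2. A hedged Python sketch, with hypothetical session data standing in for the query output:

```python
def kill_statements(sessions):
    """Build ALTER SYSTEM KILL SESSION statements from (sid, serial#)
    pairs, as returned by the v$session/v$sort_usage join in step 2."""
    return ["alter system kill session '%d,%d';" % (sid, serial)
            for sid, serial in sessions]

# Hypothetical dead sessions still holding temp segments.
for stmt in kill_statements([(132, 4078), (145, 511)]):
    print(stmt)
```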
Tuesday, September 9, 2008
Clusterware Installation fails at the end of CRS on Red Hat Linux 5
Problem Description
While installing Oracle Clusterware on Red Hat Enterprise Linux 5.0, at the end of the Cluster Ready Services installation OUI prompts for the $CRS_HOME/root.sh script to be run on all of the nodes in the cluster. When root.sh is run on the last node in the cluster, the script calls the VIPCA utility, which fails on RHEL 5. The same failure also occurs on SUSE Linux Enterprise Server 10.
Solution of The Problem
As a workaround, before running the root.sh script on the last node in the cluster, alter the $CRS_HOME/bin/vipca script, commenting out lines 119 through 123:
arch=`uname -m`
# if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
# then
#   LD_ASSUME_KERNEL=2.4.19
#   export LD_ASSUME_KERNEL
# fi
After commenting these lines out, run root.sh; it should then be able to call VIPCA successfully.
But don't comment out line 118, arch=`uname -m`, as the script needs it to set the arch variable.
Related Documents
Installing Clusterware through OUI fails Checking OS version in RHL-5
Monday, September 8, 2008
How to get the port number list of EM and iSQL*Plus
You can find the port information for Enterprise Manager, iSQL*Plus, and other web services in $ORACLE_HOME/install/portlist.ini.
On my system the output is,
bash-3.00$ cat /oracle/app/oracle/product/10.2.0/db_1/install/portlist.ini
iSQL*Plus HTTP port number =5560
Enterprise Manager Console HTTP Port (DBASE) = 1158
Enterprise Manager Agent Port (DBASE) = 3938
Enterprise Manager Console HTTP Port (arjudba) = 5500
Enterprise Manager Agent Port (arjudba) = 1830
Enterprise Manager Console HTTP Port (arju) = 5501
Enterprise Manager Agent Port (arju) = 1831
But note that the ports listed in the portlist.ini file were assigned during installation. This file is not updated if port numbers are changed after installation.
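Since portlist.ini is just "name = port" lines, it is easy to parse programmatically. A rough sketch; the sample content is a hard-coded excerpt of the file shown above:

```python
def parse_portlist(text):
    """Parse portlist.ini-style 'name = port' lines into a dict."""
    ports = {}
    for line in text.splitlines():
        if "=" in line:
            name, _, port = line.partition("=")
            ports[name.strip()] = int(port.strip())
    return ports

# Excerpt of the portlist.ini shown above.
sample = """iSQL*Plus HTTP port number =5560
Enterprise Manager Console HTTP Port (DBASE) = 1158"""
print(parse_portlist(sample))
```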
Related Documents
How to Notify or send email Event in Oracle from EM
In EM connection fails with ERROR: NMO not setuid-root (Unix-only)