Thursday, April 30, 2009

How to get DDL both as SQL statements and as XML data

We know that with SQLFILE=file_name.txt, all the DDL statements inside a dump file are written to file_name.txt. The details are discussed at http://arjudba.blogspot.com/2008/04/extract-ddl-from-dump.html.

In many cases, however, you may want to extract the metadata in XML format. By combining SQLFILE with the undocumented TRACE parameter, you can achieve that.

C:>impdp full=y userid=arju/a dumpfile=arju_30_04.dmp logfile=arju_30_04.log SQLFILE=metadata_with_xml.sql TRACE=2

Import: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 9:30:56

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "ARJU"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "ARJU"."SYS_SQL_FILE_FULL_01": full=y userid=arju/******** dumpfile=arju_30_04.dmp logfile=arju_30_04.log SQLF
ILE=metadata_with_xml.sql TRACE=2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ARJU"."SYS_SQL_FILE_FULL_01" successfully completed at 09:31:08

A sample of the generated SQL file (metadata_with_xml.sql) is:
-- CONNECT ARJU
-- new object type path is: SCHEMA_EXPORT/USER
<?xml version="1.0"?><ROWSET><ROW>
<USER_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_ID>61</USER_ID><NAME>ARJU</NAME><TYPE_NUM>1</TYPE_NUM><PASSWORD>55E19EAC6BA480EA</PASSWORD><DATATS>USERS</DATATS><TEMPTS>TEMP</TEMPTS><CTIME>16-APR-09</CTIME><PTIME>16-APR-09</PTIME><PROFNUM>0</PROFNUM><PROFNAME>DEFAULT</PROFNAME><DEFROLE>1</DEFROLE><ASTATUS>0</ASTATUS><LCOUNT>0</LCOUNT><DEFSCHCLASS>DEFAULT_CONSUMER_GROUP</DEFSCHCLASS><SPARE1>0</SPARE1></USER_T>
</ROW></ROWSET>
-- CONNECT SYSTEM
CREATE USER "ARJU" IDENTIFIED BY VALUES '55E19EAC6BA480EA'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";

-- new object type path is: SCHEMA_EXPORT/SYSTEM_GRANT
<?xml version="1.0"?><ROWSET><ROW>
<SYSGRANT_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>1</VERS_MINOR><PRIVILEGE>-15</PRIVILEGE><GRANTEE>ARJU</GRANTEE><PRIVNAME>UNLIMITED TABLESPACE</PRIVNAME><SEQUENCE>918</SEQUENCE><WGO>0</WGO></SYSGRANT_T>
</ROW></ROWSET>
GRANT UNLIMITED TABLESPACE TO "ARJU";

-- new object type path is: SCHEMA_EXPORT/ROLE_GRANT
<?xml version="1.0"?><ROWSET><ROW>
<ROGRANT_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><GRANTEE_ID>61</GRANTEE_ID><GRANTEE>ARJU</GRANTEE><ROLE>DBA</ROLE><ROLE_ID>4</ROLE_ID><ADMIN>0</ADMIN><SEQUENCE>919</SEQUENCE></ROGRANT_T>
</ROW></ROWSET>
GRANT "DBA" TO "ARJU";

-- new object type path is: SCHEMA_EXPORT/DEFAULT_ROLE
<?xml version="1.0"?><ROWSET><ROW>
<DEFROLE_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_ID>61</USER_ID><USER_NAME>ARJU</USER_NAME><USER_TYPE>1</USER_TYPE><DEFROLE>1</DEFROLE><ROLE_LIST/></DEFROLE_T>
</ROW></ROWSET>
ALTER USER "ARJU" DEFAULT ROLE ALL;

-- new object type path is: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
<?xml version="1.0"?><ROWSET><ROW>
<PROCACTSCHEMA_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_NAME>ARJU</USER_NAME><PACKAGE>DBMS_LOGREP_EXP</PACKAGE><SCHEMA>SYS</SCHEMA><LEVEL_NUM>5000</LEVEL_NUM><CLASS>2</CLASS><PREPOST>0</PREPOST><PLSQL><PLSQL_ITEM><LOCS><LOCS_ITEM><NEWBLOCK>0</NEWBLOCK><LINE_OF_CODE>sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'ORCL.REGRESS.RDBMS.DEV.US.ORACLE.COM', inst_scn=>'1531485');</LINE_OF_CODE></LOCS_ITEM></LOCS></PLSQL_ITEM></PLSQL></PROCACTSCHEMA_T>
</ROW></ROWSET>
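If you only need the metadata of a few objects, the documented DBMS_METADATA package is a lighter-weight alternative to a full SQLFILE run; extracting other users' definitions needs appropriate privileges (for example SELECT_CATALOG_ROLE). A minimal sketch, using the ARJU user from the example above:

SQL> SET LONG 100000 PAGESIZE 0
SQL> -- the object's metadata as XML (returned as a CLOB)
SQL> SELECT DBMS_METADATA.GET_XML('USER','ARJU') FROM dual;
SQL> -- and the same object as a DDL statement
SQL> SELECT DBMS_METADATA.GET_DDL('USER','ARJU') FROM dual;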


How to get timing details on data pump processed objects

We can get the number of objects processed, and the time taken to process each object type, in Data Pump jobs.

We can achieve this by using the undocumented parameter METRICS: setting METRICS=y makes Data Pump report timing details.

An example is given below.

E:\Documents and Settings\Arju>expdp schemas=arju userid=arju/a dumpfile=arju_30_04.dmp logfile=arju_20_04.log metrics=y

Export: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 7:24:07

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_03": schemas=arju userid=arju/******** dumpfile=arju_30_04.dmp logfile=arju_20_04.lo
g metrics=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1024 KB
Processing object type SCHEMA_EXPORT/USER
Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Completed 1 SYSTEM_GRANT objects in 1 seconds
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Completed 1 ROLE_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Completed 1 DEFAULT_ROLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Completed 1 PROCACT_SCHEMA objects in 0 seconds
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Completed 1 SEQUENCE objects in 3 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Completed 11 TABLE objects in 4 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Completed 9 INDEX objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 8 CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Completed 10 INDEX_STATISTICS objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Completed 2 COMMENT objects in 1 seconds
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Completed 3 VIEW objects in 2 seconds
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 11 TABLE_STATISTICS objects in 0 seconds
. . exported "ARJU"."SYS_EXPORT_SCHEMA_02" 214.3 KB 1125 rows
. . exported "ARJU"."SYS_EXPORT_SCHEMA_01" 31.55 KB 12 rows
. . exported "ARJU"."AUTHOR" 5.835 KB 14 rows
. . exported "ARJU"."BOOKAUTHOR" 5.609 KB 20 rows
. . exported "ARJU"."BOOKS" 7.781 KB 14 rows
. . exported "ARJU"."BOOK_CUSTOMER" 8.234 KB 21 rows
. . exported "ARJU"."BOOK_ORDER" 8.398 KB 21 rows
. . exported "ARJU"."ORDERITEMS" 6.742 KB 32 rows
. . exported "ARJU"."PROMOTION" 5.710 KB 4 rows
. . exported "ARJU"."PUBLISHER" 6.265 KB 8 rows
. . exported "ARJU"."T" 4.914 KB 1 rows
Master table "ARJU"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_SCHEMA_03 is:
E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\ARJU_30_04.DMP
Job "ARJU"."SYS_EXPORT_SCHEMA_03" successfully completed at 07:24:41

Note that with the additional undocumented parameter METRICS=y, the following extra output is displayed in the logfile as well as on the screen:
Completed %n %s objects in %n seconds
Related Documents
http://arjudba.blogspot.com/2009/04/data-pump-process-architecture-master.html

Data Pump Process Architecture - Master Table, Worker Processes

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of the progress. For every Data Pump Export and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

Let's see an example.
1) Start a Data Pump job in one session.
expdp schemas=arju userid=arju/a dumpfile=arju_30_04_2009.dmp logfile=arju_20_04_09.log

2) Query dba_datapump_sessions to check the Data Pump job status.

set lines 150 pages 100
col program for a20
col username for a5
col spid for a7
col job_name for a25
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;

DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID
------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------
2009-04-30 16:49:46 expdp.exe 148 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 3164 14 16
2009-04-30 16:49:46 ORACLE.EXE (DM00) 144 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 5376 20 17

SQL> /

DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID
------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------
2009-04-30 16:49:50 expdp.exe 148 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 3164 14 16
2009-04-30 16:49:50 ORACLE.EXE (DM00) 144 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 5376 20 17
2009-04-30 16:49:50 ORACLE.EXE (DW01) 141 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 7352 7 20

SQL> /

no rows selected

You can see in the above output that I ran the same query three times: the first just after submitting the Data Pump job, the second a few seconds later, and the third after the job had completed.

In the first output only the master table and the master process (DM00) have been created. This master process controls the entire job: communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

In the second output, a worker process (DW01) has been created. Since we did not set any parallelism, PARALLEL defaults to 1 (one) and there is only one worker process. If we set PARALLEL to more than 1, we would see multiple worker processes (DW01, DW02 and so on). The worker processes are responsible for updating the master table with the status (pending, completed, failed) of each object being processed. This information is used to provide the detail required to restart stopped Data Pump jobs.

In the third output we see no rows, because the Data Pump processes disappear once the job completes or stops.

As for the architecture of a Data Pump job: the job starts by creating the master table. This table is created at the beginning of a Data Pump operation and dropped upon its successful completion. The master table is also dropped if you kill the job with the kill_job command at the interactive prompt. If a job is stopped using the stop_job interactive command, or if the job terminates unexpectedly, the master table is retained.
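For example, you could stop a running job from another session and restart it later; a sketch, assuming the job name shown by the query earlier:

C:>expdp arju/a attach=SYS_EXPORT_SCHEMA_01

Export> stop_job=immediate

C:>expdp arju/a attach=SYS_EXPORT_SCHEMA_01

Export> start_job

Because stop_job retains the master table, the restarted job picks up where it left off; kill_job at the same prompt would drop the master table instead.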

Note that we can also set the undocumented parameter KEEP_MASTER=y, which retains the master table even at the end of a successful job.

Suppose my Data Pump job command is,
C:>expdp schemas=arju userid=arju/a dumpfile=arju_30_04_.dmp logfile=arju_20_04_09.log keep_master=y
Export: Release 10.2.0.1.0 - Production on Thursday, 30 April, 2009 17:11:09

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_02": schemas=arju userid=arju/******** dumpfile=arju_30_04_.dmp logfile=arju_20_04_0
9.log keep_master=y

As we set keep_master=y, we can inspect the master table "ARJU"."SYS_EXPORT_SCHEMA_02" at any time after the Data Pump job has completed.

The structure of a master table is shown below.

SQL> desc "ARJU"."SYS_EXPORT_SCHEMA_02";
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
PROCESS_ORDER NUMBER
DUPLICATE NUMBER
DUMP_FILEID NUMBER
DUMP_POSITION NUMBER
DUMP_LENGTH NUMBER
DUMP_ALLOCATION NUMBER
COMPLETED_ROWS NUMBER
ERROR_COUNT NUMBER
ELAPSED_TIME NUMBER
OBJECT_TYPE_PATH VARCHAR2(200)
OBJECT_PATH_SEQNO NUMBER
OBJECT_TYPE VARCHAR2(30)
IN_PROGRESS CHAR(1)
OBJECT_NAME VARCHAR2(500)
OBJECT_LONG_NAME VARCHAR2(4000)
OBJECT_SCHEMA VARCHAR2(30)
ORIGINAL_OBJECT_SCHEMA VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
SUBPARTITION_NAME VARCHAR2(30)
FLAGS NUMBER
PROPERTY NUMBER
COMPLETION_TIME DATE
OBJECT_TABLESPACE VARCHAR2(30)
SIZE_ESTIMATE NUMBER
OBJECT_ROW NUMBER
PROCESSING_STATE CHAR(1)
PROCESSING_STATUS CHAR(1)
BASE_PROCESS_ORDER NUMBER
BASE_OBJECT_TYPE VARCHAR2(30)
BASE_OBJECT_NAME VARCHAR2(30)
BASE_OBJECT_SCHEMA VARCHAR2(30)
ANCESTOR_PROCESS_ORDER NUMBER
DOMAIN_PROCESS_ORDER NUMBER
PARALLELIZATION NUMBER
UNLOAD_METHOD NUMBER
GRANULES NUMBER
SCN NUMBER
GRANTOR VARCHAR2(30)
XML_CLOB CLOB
NAME VARCHAR2(30)
VALUE_T VARCHAR2(4000)
VALUE_N NUMBER
IS_DEFAULT NUMBER
FILE_TYPE NUMBER
USER_DIRECTORY VARCHAR2(4000)
USER_FILE_NAME VARCHAR2(4000)
FILE_NAME VARCHAR2(4000)
EXTEND_SIZE NUMBER
FILE_MAX_SIZE NUMBER
PROCESS_NAME VARCHAR2(30)
LAST_UPDATE DATE
WORK_ITEM VARCHAR2(30)
OBJECT_NUMBER NUMBER
COMPLETED_BYTES NUMBER
TOTAL_BYTES NUMBER
METADATA_IO NUMBER
DATA_IO NUMBER
CUMULATIVE_TIME NUMBER
PACKET_NUMBER NUMBER
OLD_VALUE VARCHAR2(4000)
SEED NUMBER
LAST_FILE NUMBER
USER_NAME VARCHAR2(30)
OPERATION VARCHAR2(30)
JOB_MODE VARCHAR2(30)
CONTROL_QUEUE VARCHAR2(30)
STATUS_QUEUE VARCHAR2(30)
REMOTE_LINK VARCHAR2(4000)
VERSION NUMBER
DB_VERSION VARCHAR2(30)
TIMEZONE VARCHAR2(64)
STATE VARCHAR2(30)
PHASE NUMBER
GUID RAW(16)
START_TIME DATE
BLOCK_SIZE NUMBER
METADATA_BUFFER_SIZE NUMBER
DATA_BUFFER_SIZE NUMBER
DEGREE NUMBER
PLATFORM VARCHAR2(101)
ABORT_STEP NUMBER
INSTANCE VARCHAR2(60)

Let's see the number of rows populated inside the master table.
SQL> select count(*) from "ARJU"."SYS_EXPORT_SCHEMA_02";

COUNT(*)
--------
1125

We see that the master table contains 1125 rows in total. In fact, the master table holds all the Data Pump log messages, and it is used to track the detailed progress information of a Data Pump job, which is more than the log messages. It contains the following information (a sample query against the master table follows the list).

- The completed rows of each table.

- The total number of errors during the Data Pump operation.

- The elapsed time to export/import each table.

- The current set of dump files.

- The current state of every object exported or imported and its location in the dump file set.

- The job's user-supplied parameters.

- The status of every worker process.

- The current job state and restart information.

- The dump file location and directory name information.

And much other useful information.
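Because the master table is an ordinary table, you can query this information directly with SQL. A sketch against the retained table from the example above, using column names from the DESC output (rows with a NULL OBJECT_NAME are internal bookkeeping):

SQL> select object_schema, object_name, object_type, completed_rows, elapsed_time
     from "ARJU"."SYS_EXPORT_SCHEMA_02"
     where object_name is not null;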

Related Documents
http://arjudba.blogspot.com/2009/02/important-guideline-about-data-pump.html

How to trace/diagnose Oracle data pump jobs

Whenever you issue impdp help=y or expdp help=y, you see the list of parameters that can be used for Oracle Data Pump export/import jobs. You won't find any parameter there for tracing Data Pump jobs. But tracing a Data Pump job is important when diagnosing incorrect behavior and/or troubleshooting Data Pump errors. The undocumented parameter TRACE is really useful for troubleshooting Data Pump jobs.

Before going into the main discussion, let's see how a Data Pump job works. The architecture of a Data Pump job is discussed at http://arjudba.blogspot.com/2009/04/data-pump-process-architecture-master.html

Data Pump tracing is enabled with the TRACE parameter, which takes a 7-digit hexadecimal number as its value. Specifying the value follows some rules.
Of the 7-digit hexadecimal number,
- The first 3 digits enable tracing for a specific Data Pump component.

- The remaining 4 digits are usually 0300.

- Specifying more than 7 hexadecimal digits is not allowed. Doing so results in:
UDE-00014: invalid value for parameter, 'trace'.

- Specifying a leading 0x (the hexadecimal specification characters) is not allowed.

- The value must be specified in hexadecimal; you can't specify it in decimal.

- Leading zeros can be omitted, so the value may be fewer than 7 hexadecimal digits.

- Values are not case sensitive.

Before you start tracing, make sure the MAX_DUMP_FILE_SIZE initialization parameter is set large enough, because it limits how much trace information can be captured. The default value is UNLIMITED, which is fine.

SQL> show parameter max_dump_file
NAME TYPE VALUE
------------------------------------ ----------- -------------------
max_dump_file_size string UNLIMITED

SQL> select value from v$parameter where name='max_dump_file_size';

VALUE
--------------------------------------------------------------------
UNLIMITED

If it is not UNLIMITED, you can set it with:
SQL> alter system set max_dump_file_size=UNLIMITED;

System altered.

The majority of errors that occur during a Data Pump job can be diagnosed by creating trace files for the Master Control Process (MCP) and the Worker process(es) only.

With standard tracing, the trace files are generated in BACKGROUND_DUMP_DEST. In that case,

- The Master Process trace file is named:
<SID>_dm<number>_<process_id>.trc

- The Worker Process trace file is named:
<SID>_dw<number>_<process_id>.trc

In case of full tracing, two trace files are generated in BACKGROUND_DUMP_DEST just as with standard tracing, and one additional trace file is generated in USER_DUMP_DEST.

Shadow Process trace file: <SID>_ora_<process_id>.trc
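To find out where these trace files will be written on your system, you can check the dump destination parameters:

SQL> select name, value from v$parameter where name in ('background_dump_dest','user_dump_dest');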

The list of trace level in data pump is shown below.

 Trace   DM   DW  ORA  Lines
  level  trc  trc  trc     in
  (hex) file file file  trace                                         Purpose
------- ---- ---- ---- ------ -----------------------------------------------
  10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV: To trace Fixed table
  40300    x    x    x  'div' To trace Process services
  80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
 100300    x    x       KUPF: To trace File Manager
 200300    x    x    x  KUPC: To trace Queue services
 400300         x       KUPW: To trace Worker process(es)                (DW)
 800300         x       KUPD: To trace Data Package
1000300         x       META: To trace Metadata Package
--- +
1FF0300    x    x    x  'all' To trace all components          (full tracing)

The individual tracing level values are shown in hexadecimal, except for the last entry in the list. You can use an individual value or a combination of values; to combine values, add the leading three-digit components together and keep the 0300 suffix. Combining all the individual components gives 1FF0300, which is full tracing.

To use full tracing, issue a Data Pump export as:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

To use full tracing for a Data Pump import operation, issue the import as:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

However, in most cases full tracing is not required. Trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP), so combining them gives trace 480300. With trace 480300 you can trace both the Master Control Process (MCP) and the Worker process(es), which usually serves the purpose.

So, to diagnose a Data Pump export problem, issue:
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300

To diagnose a Data Pump import problem, issue:
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300


Monday, April 27, 2009

How to install and configure Magento on Windows

Pre-installation Tasks
Before installing Magento, make sure that you have the following software installed on your system:
-Apache Web Server.
-PHP 5.2.0 or newer (with the PDO/MySQL, MySQLi, mcrypt, mhash, simplexml, DOM, curl, gd and soap extensions).
-MySQL 4.1.20 or newer.

I myself installed the XAMPP package (which contains PHP + MySQL + Apache) and then enabled the necessary extensions by uncommenting their lines in the php.ini file under the apache/bin directory. If you don't do this, you might hit http://arjudba.blogspot.com/2009/04/installing-magneto-fails-with-php.html
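For reference, enabling an extension simply means removing the leading semicolon (;) from its line in php.ini. A sketch of the lines involved; the exact DLL names depend on your PHP build:

extension=php_pdo_mysql.dll
extension=php_mysqli.dll
extension=php_mcrypt.dll
extension=php_mhash.dll
extension=php_curl.dll
extension=php_gd2.dll
extension=php_soap.dll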

Installing Magento
1) Download the Magento software from magentocommerce.com/download. You can download the .zip or .tar.gz installer package and decompress it with any compression utility.

2) Rename the decompressed folder to magento and place this folder in your web server's document root.

3) Create a MySQL database; for example, name the database magento. Also create a username and password for the database.

D:\xampp\mysql\bin>mysql.exe -h localhost -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database magento;
Query OK, 1 row affected (0.00 sec)

mysql> create user 'magento' identified by 'magento';
Query OK, 0 rows affected (0.05 sec)

mysql> grant all privileges on *.* to 'magento'@'localhost' identified by 'magento';
Query OK, 0 rows affected (0.06 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

mysql> Bye

Ensure that you can log in to the database using the newly created user.

D:\xampp\mysql\bin>mysql.exe -h localhost -u magento -p
Enter password: *******
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

4) You must have the correct permissions on the magento directory in order to proceed. On Windows with an NTFS file system, make sure you have read, write and execute permissions on the folder. On Unix, change the permissions to 755 with chmod, as shown below.
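On Unix that could look like the following; the path here is just an example, adjust it to your own document root:

chmod -R 755 /path/to/htdocs/magento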

5) As already mentioned, installing Magento requires PHP 5. If your server runs PHP 4, you need the PHP 5 CGI binary.

6) In your browser, open http://localhost/magento. You will be taken to http://localhost/magento/index.php/install/ and greeted with "Welcome to Magento's Installation Wizard!"

7) Tick the checkbox if you agree to the license terms and wish to proceed with the installation, then press Continue.

8) In the localization tab, select the Locale, Time Zone and Default Currency, then press Continue.

9) In the configuration window, fill in the fields as indicated. In this step you may face the problem described at http://arjudba.blogspot.com/2009/04/magneto-installation-crashes-apache.html, so keep that in mind, and note that this step may take a few minutes before it moves on to the next one.

10) The Create Admin Account window will appear. Fill in the necessary information and press Continue.

11) The installation is done. A window will appear containing the words "You're All Set!". Now you can log in to the admin panel and go to the backend or frontend as you wish.
Related Documents
http://arjudba.blogspot.com/2009/04/after-installing-magento-cant-log-in-to.html
http://arjudba.blogspot.com/2009/04/magneto-installation-crashes-apache.html

Sunday, April 26, 2009

What is the sha-bang that appears first inside a shell script

You might have seen that in many shell scripts the first line starts with #! immediately followed by a path. The ! sign is called the exclamation mark, and the # sign is called the number sign/pound/hash. Another name for # is sharp, and for ! it is bang. The concatenation of # and ! is therefore called the sha-bang (#!), or she-bang, or sh-bang.

The sha-bang (#!) at the head of a script tells your system that this file is a set of commands to be fed to the command interpreter indicated. After the #! comes a path name: the path to the program that interprets the commands in the script, whether it is a shell, a programming language, or a utility. That command interpreter then executes the commands in the script, starting at the top and ignoring comments.

We know that in a shell script any line starting with # is treated as a comment. But when a #! line appears as the very first thing in a script, the system sees it and uses it to invoke the right command interpreter; later, while the script executes, the interpreter treats that same line as a comment. The line has already served its purpose: calling the command interpreter.
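A minimal illustration, saved as hello.sh and made executable with chmod +x hello.sh:

#!/bin/bash
# The kernel reads the line above and hands this file to /bin/bash;
# bash then treats that same first line as an ordinary comment.
echo "Hello from bash"

Running ./hello.sh executes the script under /bin/bash regardless of which shell you launched it from.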

If, in fact, the script includes an extra #! line, then bash will interpret it as a comment.

Examples of commonly used sha-bangs are:
#!/bin/sh - On Solaris this indicates the Bourne shell; on Linux it points to the bash shell.
#!/bin/bash - Execute using the bash shell.
#!/bin/ksh - Execute using the Korn shell.
#!/bin/zsh - Execute using the Z shell.
#!/bin/csh - Execute using the C shell.
#!/usr/bin/perl - Execute using Perl.
#!/usr/bin/php - Execute using PHP.
#!/usr/bin/python - Execute using Python.
#!/usr/bin/ruby - Execute using Ruby.


/bin/sh invokes the default shell interpreter, which is /bin/bash on a Linux machine.

/bin/bash explicitly invokes the bash shell interpreter.

The path after the sha-bang may also point to a utility, like:
#!/bin/sed -f
#!/usr/bin/awk -f

While scripting, you can omit the #! line if the script consists only of a set of generic system commands and uses no internal shell directives.

Note that the path given in the sha-bang must be correct; otherwise the script will fail, usually with a "Command not found." error.

Related Documents
Introduction -What is kernel, shell, shell script.
Basic Steps to write a shell script
Variables in shell script
Output text with echo command

Saturday, April 25, 2009

Find patterns in a file using the grep, egrep or fgrep utilities

The grep, egrep and fgrep utilities are used to find patterns in files and print the matching lines. We can use these utilities to search for words in multiple files.

You can put the search patterns inside single quotes.

The syntax of grep is:
grep pattern file_1 file_2 ..... file_n

Let's do our experiment on two files named blog_info.txt and myself.txt. The contents of these two files are shown below.

# vi blog_info.txt
Welcome to http://arjudba.blogspot.com blog.
This blog contains day to day tasks of Arju.
Most of the contents of this blog is oracle+php+shell scripts.
Thanks for visiting this blog.

# vi myself.txt
Assalamu Alaikum - which means peace be upon you.
I am Arju. I work as DBA Consultant.
I also help people to learn oracle online.
I completed by B.Sc from IUT.
I try to update my blog regularly.

Let's now search for the keyword Arju in both of these files.

# grep 'Arju' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
myself.txt:I am Arju. I work as DBA Consultant.

You can use several options with the grep command. Below are some of them.

1) -h : When you search more than one file, as in the output above, each matching line is displayed prefixed with the name of the file it came from. If you don't want the file names displayed, use the -h option.

# grep -h 'Arju' blog_info.txt myself.txt
This blog contains day to day tasks of Arju.
I am Arju. I work as DBA Consultant.

2) -w : Let's search for the keyword 'is' in both files. Any line containing 'is' as a substring will appear, so lines containing words like 'This' will also appear.

# grep 'is' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
blog_info.txt:Most of the contents of this blog is oracle+php+shell scripts.
blog_info.txt:Thanks for visiting this blog.

Now, if we want to restrict the search so that 'is' matches only as a whole word, we use grep with the -w option.

# grep -w 'is' blog_info.txt myself.txt
blog_info.txt:Most of the contents of this blog is oracle+php+shell scripts.

3) -b : If you want to print the byte offset within the file of each matching line, use the -b option.

# grep -b 'is' blog_info.txt myself.txt
blog_info.txt:45:This blog contains day to day tasks of Arju.
blog_info.txt:90:Most of the contents of this blog is oracle+php+shell scripts.
blog_info.txt:153:Thanks for visiting this blog.

In the first line, 45 is printed because the matching line, which starts with the "T" of "This" (a word containing "is"), begins at byte offset 45 of the file.

4) -c : The -c option displays only a count of the matching lines, not the lines themselves.

# grep -c 'is' blog_info.txt myself.txt
blog_info.txt:3
myself.txt:0

So within blog_info.txt the keyword "is" appears on three lines, while within myself.txt there is no line containing "is".

5) -e : With the -e option you can specify one or more patterns for grep to search for. You may give each pattern with a separate -e option, or separate the patterns with newlines inside a single quoted argument. For example, the following two commands are equivalent:
grep -e pattern_1 -e pattern_2 file
grep -e 'pattern_1
pattern_2' file

For example, to search for either the keyword "Arju" or "IUT." in both files, you can issue:

# grep -e 'Arju' -e 'IUT.' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
myself.txt:I am Arju. I work as DBA Consultant.
myself.txt:I completed by B.Sc from IUT.

6) -f : The -f patternfile option reads one or more patterns from patternfile. Patterns in patternfile are separated by newlines.
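For example, searching with the same two patterns used in the -e example above (the pattern file name is just illustrative):

# cat patterns.txt
Arju
IUT.
# grep -f patterns.txt blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
myself.txt:I am Arju. I work as DBA Consultant.
myself.txt:I completed by B.Sc from IUT.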


7) -i : The -i option tells grep to ignore case, so with -i the keywords "blog" and "BlOg" are treated the same.
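For example:

# grep -i 'ARJU' myself.txt
I am Arju. I work as DBA Consultant.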

8) -l : The -l option prints only the names of the files that contain matching lines.
# grep -l 'IUT' blog_info.txt myself.txt
myself.txt

As "IUT" word is present inside file myself.txt so that is printed.

9) -n : The -n option precedes each matching line with the line number where it was found.
# grep -n 'IUT' myself.txt
4:I completed by B.Sc from IUT.

The 4 is printed before the line because the match is found on line 4.

10) -q : The -q option suppresses output and simply returns an appropriate exit code.
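This is mainly useful in shell scripts where only the exit status matters. A small sketch:

if grep -q 'Arju' myself.txt; then
    echo "pattern found"
fi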

11) -s : The -s option suppresses the display of any error messages for nonexistent or unreadable files.
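For example, with a file that does not exist (the file name here is hypothetical), no error message is printed:

# grep -s 'Arju' no_such_file.txt myself.txt
myself.txt:I am Arju. I work as DBA Consultant.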

12) -U[b|B|l|L] : The -U option forces the specified files to be treated as Unicode files. By default, these utilities assume that Unicode characters are little-endian. If a byte-order marker is present, that is used to determine the byte order for the characters. You can force Unicode characters to be treated as big-endian by specifying -Ub or -UB. Similarly, you can force them to be treated as little-endian by specifying -Ul or -UL.

13) -v : The -v option displays all lines that do not match the pattern.
For example, to print the lines where the letter "I" does not appear, issue:

# grep -v 'I' myself.txt
Assalamu Alaikum - which means peace be upon you.

14) -x : The -x option requires the pattern to match an entire line.

# grep -x 'I completed by B.Sc from IUT.' myself.txt
I completed by B.Sc from IUT.

fgrep searches files for one or more pattern arguments, but does not use regular expressions. It does direct string comparison to find matching lines of text in the input.

egrep works similarly, but uses extended regular expression matching. If you include special characters in patterns typed on the command line, escape them by enclosing them in apostrophes to prevent inadvertent misinterpretation by the shell or command interpreter. To match a character that is special to egrep, a backslash (\) should be put in front of the character. It is usually easier to use fgrep if you don't need special pattern matching.