Thursday, April 30, 2009

How to get DDL both as SQL statements and as XML data

We know that with SQLFILE=file_name.txt, all DDL statements inside a dump file are written to file_name.txt. The details are discussed on http://arjudba.blogspot.com/2008/04/extract-ddl-from-dump.html.

In many cases, however, you may want to extract the metadata in XML format. By combining SQLFILE with the undocumented TRACE parameter you can achieve that goal.

C:>impdp full=y userid=arju/a dumpfile=arju_30_04.dmp logfile=arju_30_04.log SQLFILE=metadata_with_xml.sql TRACE=2

Import: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 9:30:56

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "ARJU"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "ARJU"."SYS_SQL_FILE_FULL_01": full=y userid=arju/******** dumpfile=arju_30_04.dmp logfile=arju_30_04.log SQLF
ILE=metadata_with_xml.sql TRACE=2
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ARJU"."SYS_SQL_FILE_FULL_01" successfully completed at 09:31:08

A sample from the generated SQL file (metadata_with_xml.sql) is,
-- CONNECT ARJU
-- new object type path is: SCHEMA_EXPORT/USER
<?xml version="1.0"?><ROWSET><ROW>
<USER_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_ID>61</USER_ID><NAME>ARJU</NAME><TYPE_NUM>1</TYPE_NUM><PASSWORD>55E19EAC6BA480EA</PASSWORD><DATATS>USERS</DATATS><TEMPTS>TEMP</TEMPTS><CTIME>16-APR-09</CTIME><PTIME>16-APR-09</PTIME><PROFNUM>0</PROFNUM><PROFNAME>DEFAULT</PROFNAME><DEFROLE>1</DEFROLE><ASTATUS>0</ASTATUS><LCOUNT>0</LCOUNT><DEFSCHCLASS>DEFAULT_CONSUMER_GROUP</DEFSCHCLASS><SPARE1>0</SPARE1></USER_T>
</ROW></ROWSET>
-- CONNECT SYSTEM
CREATE USER "ARJU" IDENTIFIED BY VALUES '55E19EAC6BA480EA'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";

-- new object type path is: SCHEMA_EXPORT/SYSTEM_GRANT
<?xml version="1.0"?><ROWSET><ROW>
<SYSGRANT_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>1</VERS_MINOR><PRIVILEGE>-15</PRIVILEGE><GRANTEE>ARJU</GRANTEE><PRIVNAME>UNLIMITED TABLESPACE</PRIVNAME><SEQUENCE>918</SEQUENCE><WGO>0</WGO></SYSGRANT_T>
</ROW></ROWSET>
GRANT UNLIMITED TABLESPACE TO "ARJU";

-- new object type path is: SCHEMA_EXPORT/ROLE_GRANT
<?xml version="1.0"?><ROWSET><ROW>
<ROGRANT_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><GRANTEE_ID>61</GRANTEE_ID><GRANTEE>ARJU</GRANTEE><ROLE>DBA</ROLE><ROLE_ID>4</ROLE_ID><ADMIN>0</ADMIN><SEQUENCE>919</SEQUENCE></ROGRANT_T>
</ROW></ROWSET>
GRANT "DBA" TO "ARJU";

-- new object type path is: SCHEMA_EXPORT/DEFAULT_ROLE
<?xml version="1.0"?><ROWSET><ROW>
<DEFROLE_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_ID>61</USER_ID><USER_NAME>ARJU</USER_NAME><USER_TYPE>1</USER_TYPE><DEFROLE>1</DEFROLE><ROLE_LIST/></DEFROLE_T>
</ROW></ROWSET>
ALTER USER "ARJU" DEFAULT ROLE ALL;

-- new object type path is: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
<?xml version="1.0"?><ROWSET><ROW>
<PROCACTSCHEMA_T><VERS_MAJOR>1</VERS_MAJOR><VERS_MINOR>0</VERS_MINOR><USER_NAME>ARJU</USER_NAME><PACKAGE>DBMS_LOGREP_EXP</PACKAGE><SCHEMA>SYS</SCHEMA><LEVEL_NUM>5000</LEVEL_NUM><CLASS>2</CLASS><PREPOST>0</PREPOST><PLSQL><PLSQL_ITEM><LOCS><LOCS_ITEM><NEWBLOCK>0</NEWBLOCK><LINE_OF_CODE>sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'ORCL.REGRESS.RDBMS.DEV.US.ORACLE.COM', inst_scn=>'1531485');</LINE_OF_CODE></LOCS_ITEM></LOCS></PLSQL_ITEM></PLSQL></PROCACTSCHEMA_T>
</ROW></ROWSET>
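As an aside, if you only need the XML for a handful of objects and can query the database directly, the DBMS_METADATA package can return XML without going through a dump file. A minimal sketch (object name taken from this example; you need appropriate privileges, such as SELECT_CATALOG_ROLE, to extract another user's metadata):

SQL> set long 1000000
SQL> select dbms_metadata.get_xml('USER','ARJU') from dual;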


How to get timing details on data pump processed objects

We can get the number of objects processed and the time taken to process each object type in Data Pump jobs.

We can achieve this by using the undocumented parameter METRICS. Setting METRICS=Y makes Data Pump report timing details.

An example is given below.

E:\Documents and Settings\Arju>expdp schemas=arju userid=arju/a dumpfile=arju_30_04.dmp logfile=arju_20_04.log metrics=y

Export: Release 10.2.0.1.0 - Production on Friday, 01 May, 2009 7:24:07

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_03": schemas=arju userid=arju/******** dumpfile=arju_30_04.dmp logfile=arju_20_04.lo
g metrics=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1024 KB
Processing object type SCHEMA_EXPORT/USER
Completed 1 USER objects in 0 seconds
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Completed 1 SYSTEM_GRANT objects in 1 seconds
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Completed 1 ROLE_GRANT objects in 0 seconds
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Completed 1 DEFAULT_ROLE objects in 0 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Completed 1 PROCACT_SCHEMA objects in 0 seconds
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Completed 1 SEQUENCE objects in 3 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Completed 11 TABLE objects in 4 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Completed 9 INDEX objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 8 CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Completed 10 INDEX_STATISTICS objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Completed 2 COMMENT objects in 1 seconds
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Completed 3 VIEW objects in 2 seconds
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Completed 11 TABLE_STATISTICS objects in 0 seconds
. . exported "ARJU"."SYS_EXPORT_SCHEMA_02" 214.3 KB 1125 rows
. . exported "ARJU"."SYS_EXPORT_SCHEMA_01" 31.55 KB 12 rows
. . exported "ARJU"."AUTHOR" 5.835 KB 14 rows
. . exported "ARJU"."BOOKAUTHOR" 5.609 KB 20 rows
. . exported "ARJU"."BOOKS" 7.781 KB 14 rows
. . exported "ARJU"."BOOK_CUSTOMER" 8.234 KB 21 rows
. . exported "ARJU"."BOOK_ORDER" 8.398 KB 21 rows
. . exported "ARJU"."ORDERITEMS" 6.742 KB 32 rows
. . exported "ARJU"."PROMOTION" 5.710 KB 4 rows
. . exported "ARJU"."PUBLISHER" 6.265 KB 8 rows
. . exported "ARJU"."T" 4.914 KB 1 rows
Master table "ARJU"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
******************************************************************************
Dump file set for ARJU.SYS_EXPORT_SCHEMA_03 is:
E:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\DPDUMP\ARJU_30_04.DMP
Job "ARJU"."SYS_EXPORT_SCHEMA_03" successfully completed at 07:24:41

Note that with the additional undocumented parameter METRICS=Y, lines of the following form are displayed in the logfile as well as on the screen.
Completed %n %s objects in %n seconds
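Since every timing line begins with the word Completed, the metrics can be pulled out of the logfile afterwards with a simple filter. A quick sketch (logfile name from this example; grep on unix, findstr on Windows):

$ grep "^Completed" arju_20_04.log
C:\>findstr /B "Completed" arju_20_04.log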
Related Documents
http://arjudba.blogspot.com/2009/04/data-pump-process-architecture-master.html

Data pump Process Architecture -Master Table, Worker process

Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of the progress. For every Data Pump Export and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

Let's see an example.
1)Starting data pump jobs in one session.
expdp schemas=arju userid=arju/a dumpfile=arju_30_04_2009.dmp logfile=arju_20_04_09.log

2)Query dba_datapump_sessions to check the Data Pump job status.

set lines 150 pages 100
col program for a20
col username for a5
col spid for a7
col job_name for a25
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;

DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID
------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------
2009-04-30 16:49:46 expdp.exe 148 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 3164 14 16
2009-04-30 16:49:46 ORACLE.EXE (DM00) 144 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 5376 20 17

SQL> /

DATE PROGRAM SID STATUS USERN JOB_NAME SPID SERIAL# PID
------------------- -------------------- ------- -------- ----- ------------------------- ------- ------- -------
2009-04-30 16:49:50 expdp.exe 148 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 3164 14 16
2009-04-30 16:49:50 ORACLE.EXE (DM00) 144 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 5376 20 17
2009-04-30 16:49:50 ORACLE.EXE (DW01) 141 ACTIVE ARJU SYS_EXPORT_SCHEMA_01 7352 7 20

SQL> /

no rows selected

In the above output I ran the same query three times: the first just after submitting the Data Pump job, the second a few seconds later, and the third after the job completed.

In the first output only the master table and the master process (DM00) have been created. This master process controls the entire job: communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

In the second output a worker process (DW01) has been created. Since we did not set any parallelism, PARALLEL defaults to 1 and there is only one worker process. Had we set PARALLEL to more than 1, we would see multiple worker processes like DW01, DW02 and so on. The worker processes are responsible for updating the master table with the status (pending, completed, failed) of each object being processed. This information is used to provide the detail required to restart stopped Data Pump jobs.

In the third output we see no rows, because the Data Pump processes disappear when the job completes or stops.

As for the architecture of a Data Pump job:
the job starts by creating the Master Table. This table is created at the beginning of a Data Pump operation and is dropped upon its successful completion. The Master Table is also dropped if you kill the job with the kill_job command at the interactive prompt. If a job is stopped using the stop_job interactive command, or if the job terminates unexpectedly, the Master Table is retained.
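For example, a job stopped with stop_job can be resumed later precisely because the Master Table survives. A hedged sketch (job name from this example; press Ctrl+C at the expdp client to reach the interactive prompt):

Export> stop_job=immediate

and later, reattach and restart,

C:\>expdp userid=arju/a attach=SYS_EXPORT_SCHEMA_01
Export> start_job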

Note that we can set the undocumented parameter KEEP_MASTER=Y, which retains the Master Table even at the end of a successful job.

Suppose my Data Pump job command is,
C:>expdp schemas=arju userid=arju/a dumpfile=arju_30_04_.dmp logfile=arju_20_04_09.log keep_master=y
Export: Release 10.2.0.1.0 - Production on Thursday, 30 April, 2009 17:11:09

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ARJU"."SYS_EXPORT_SCHEMA_02": schemas=arju userid=arju/******** dumpfile=arju_30_04_.dmp logfile=arju_20_04_0
9.log keep_master=y

As we set KEEP_MASTER=Y, we can inspect the master table "ARJU"."SYS_EXPORT_SCHEMA_02" anytime after the Data Pump job completes.

Structure of a master table is shown below.

SQL> desc "ARJU"."SYS_EXPORT_SCHEMA_02";
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
PROCESS_ORDER NUMBER
DUPLICATE NUMBER
DUMP_FILEID NUMBER
DUMP_POSITION NUMBER
DUMP_LENGTH NUMBER
DUMP_ALLOCATION NUMBER
COMPLETED_ROWS NUMBER
ERROR_COUNT NUMBER
ELAPSED_TIME NUMBER
OBJECT_TYPE_PATH VARCHAR2(200)
OBJECT_PATH_SEQNO NUMBER
OBJECT_TYPE VARCHAR2(30)
IN_PROGRESS CHAR(1)
OBJECT_NAME VARCHAR2(500)
OBJECT_LONG_NAME VARCHAR2(4000)
OBJECT_SCHEMA VARCHAR2(30)
ORIGINAL_OBJECT_SCHEMA VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
SUBPARTITION_NAME VARCHAR2(30)
FLAGS NUMBER
PROPERTY NUMBER
COMPLETION_TIME DATE
OBJECT_TABLESPACE VARCHAR2(30)
SIZE_ESTIMATE NUMBER
OBJECT_ROW NUMBER
PROCESSING_STATE CHAR(1)
PROCESSING_STATUS CHAR(1)
BASE_PROCESS_ORDER NUMBER
BASE_OBJECT_TYPE VARCHAR2(30)
BASE_OBJECT_NAME VARCHAR2(30)
BASE_OBJECT_SCHEMA VARCHAR2(30)
ANCESTOR_PROCESS_ORDER NUMBER
DOMAIN_PROCESS_ORDER NUMBER
PARALLELIZATION NUMBER
UNLOAD_METHOD NUMBER
GRANULES NUMBER
SCN NUMBER
GRANTOR VARCHAR2(30)
XML_CLOB CLOB
NAME VARCHAR2(30)
VALUE_T VARCHAR2(4000)
VALUE_N NUMBER
IS_DEFAULT NUMBER
FILE_TYPE NUMBER
USER_DIRECTORY VARCHAR2(4000)
USER_FILE_NAME VARCHAR2(4000)
FILE_NAME VARCHAR2(4000)
EXTEND_SIZE NUMBER
FILE_MAX_SIZE NUMBER
PROCESS_NAME VARCHAR2(30)
LAST_UPDATE DATE
WORK_ITEM VARCHAR2(30)
OBJECT_NUMBER NUMBER
COMPLETED_BYTES NUMBER
TOTAL_BYTES NUMBER
METADATA_IO NUMBER
DATA_IO NUMBER
CUMULATIVE_TIME NUMBER
PACKET_NUMBER NUMBER
OLD_VALUE VARCHAR2(4000)
SEED NUMBER
LAST_FILE NUMBER
USER_NAME VARCHAR2(30)
OPERATION VARCHAR2(30)
JOB_MODE VARCHAR2(30)
CONTROL_QUEUE VARCHAR2(30)
STATUS_QUEUE VARCHAR2(30)
REMOTE_LINK VARCHAR2(4000)
VERSION NUMBER
DB_VERSION VARCHAR2(30)
TIMEZONE VARCHAR2(64)
STATE VARCHAR2(30)
PHASE NUMBER
GUID RAW(16)
START_TIME DATE
BLOCK_SIZE NUMBER
METADATA_BUFFER_SIZE NUMBER
DATA_BUFFER_SIZE NUMBER
DEGREE NUMBER
PLATFORM VARCHAR2(101)
ABORT_STEP NUMBER
INSTANCE VARCHAR2(60)

Let's see the number of rows populated inside master table.
SQL> select count(*) from "ARJU"."SYS_EXPORT_SCHEMA_02";

COUNT(*)
--------
1125

We see that the master table holds 1125 rows. The master table contains all Data Pump log messages, and it tracks the detailed progress of a Data Pump job - considerably more than the log messages alone. It contains the following information.

- Completed rows of a table.

- Total number of errors during data pump operation.

- Elapsed time for each table to do data pump export/import operation.

- The current set of dump files.

- The current state of every object exported or imported and their locations in the dump file set.

- The job's user-supplied parameters.

- The status of every worker process.

- The current state of the job and restart information.

- The dump file location, the directory name information.

And much other useful information.
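Since the master table is an ordinary table, this detail can be queried directly. A small sketch using only columns from the describe above, counting the rows held per object type:

SQL> select object_type, count(*) from "ARJU"."SYS_EXPORT_SCHEMA_02" where object_type is not null group by object_type;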

Related Documents
http://arjudba.blogspot.com/2009/02/important-guideline-about-data-pump.html

How to trace/diagnosis oracle data pump jobs

Whenever you issue impdp help=y or expdp help=y you see a list of parameters that can be used for Oracle Data Pump export/import jobs. You will not find any parameter there for tracing Data Pump jobs. Yet tracing is important when diagnosing incorrect behavior and troubleshooting Data Pump errors. The undocumented parameter TRACE is very useful for this.

Before going into the main discussion, let's see how a Data Pump job works. The architecture of Data Pump jobs is discussed on http://arjudba.blogspot.com/2009/04/data-pump-process-architecture-master.html

Data Pump tracing is controlled by the TRACE parameter, which takes a 7-digit hexadecimal number as its value. Specifying the value follows some rules.
Of the 7 hexadecimal digits,
- the first 3 digits enable tracing for a specific Data Pump component.

- the remaining 4 digits are usually 0300.

- Specifying more than 7 hexadecimal digits is not allowed. Doing so results in,
UDE-00014: invalid value for parameter, 'trace'.

- Specifying leading 0x (hexadecimal specification characters) is not allowed.

- The value must be specified in hexadecimal; you can't specify it in decimal.

- Leading zeros can be omitted, so the value may be fewer than 7 hexadecimal digits.

- Values are not case sensitive.

Before you start tracing, make sure the MAX_DUMP_FILE_SIZE initialization parameter is large enough to capture all the trace information. The default value, UNLIMITED, is fine.

SQL> show parameter max_dump_file
NAME TYPE VALUE
------------------------------------ ----------- -------------------
max_dump_file_size string UNLIMITED

SQL> select value from v$parameter where name='max_dump_file_size';

VALUE
--------------------------------------------------------------------
UNLIMITED

If it is not unlimited then you can set it by,
SQL> alter system set max_dump_file_size=UNLIMITED;

System altered.

The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker Process(es) only.

With standard tracing, trace files are generated in BACKGROUND_DUMP_DEST:

- the Master Process trace file is named,
<SID>_dm<number>_<process_id>.trc

- the Worker Process trace file is named,
<SID>_dw<number>_<process_id>.trc

With full tracing, the same two trace files are generated in BACKGROUND_DUMP_DEST, and one additional trace file is generated in USER_DUMP_DEST.

Shadow Process trace file: <SID>_ora_<process_id>.trc
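On a 10g unix installation the trace files can be listed straight from the dump destinations. A sketch (the paths below are typical defaults and an assumption - check your BACKGROUND_DUMP_DEST and USER_DUMP_DEST settings):

$ ls -lt /u01/app/oracle/admin/ORCL/bdump/*_dm*.trc /u01/app/oracle/admin/ORCL/bdump/*_dw*.trc
$ ls -lt /u01/app/oracle/admin/ORCL/udump/*_ora_*.trc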

The list of Data Pump trace levels is shown below.

 Trace   DM   DW  ORA  Lines
  level  trc  trc  trc     in
  (hex) file file file  trace                                         Purpose
------- ---- ---- ---- ------ -----------------------------------------------
  10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV: To trace Fixed table
  40300    x    x    x  'div' To trace Process services
  80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
 100300    x    x       KUPF: To trace File Manager
 200300    x    x    x  KUPC: To trace Queue services
 400300         x       KUPW: To trace Worker process(es)                (DW)
 800300         x       KUPD: To trace Data Package
1000300         x       META: To trace Metadata Package
--- +
1FF0300    x    x    x  'all' To trace all components          (full tracing)

All entries except the last are individual tracing levels, shown in hexadecimal. You can use an individual value or a combination of values. Combining all the component values (a bitwise OR of the leading digits, keeping the constant 0300 suffix) gives 1FF0300, which is full tracing.

To use full tracing, issue the Data Pump export as,
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

To use full tracing for a Data Pump import operation, issue,
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=1FF0300

For most cases, however, full tracing is not required. Trace 400300 traces the Worker process(es) and trace 80300 traces the Master Control Process (MCP), so their combination, 480300, traces both the Master Control Process (MCP) and the Worker process(es). That usually serves the purpose.
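You can verify the combination yourself. Because every component value carries the same constant 0300 suffix, the right way to combine is a bitwise OR rather than a plain sum; a quick check in a unix shell:

$ printf '%X\n' $((0x80300 | 0x400300))
480300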

So to troubleshoot a Data Pump export problem, issue,
expdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300

To troubleshoot a Data Pump import problem, issue,
impdp DUMPFILE=expdp.dmp LOGFILE=expdp.log TRACE=480300


Monday, April 27, 2009

How to install and configure magento on windows

Pre-installation Tasks
Before installing magento make sure that you have the following software installed on your system.
-Apache Web Server.
-PHP 5.2.0 or newer (you need the PDO/MySQL, MySQLi, mcrypt, mhash, simplexml, DOM, curl, gd and soap extensions).
-MySQL 4.1.20 or newer.

I myself installed XAMPP (which bundles PHP + MySQL + Apache) and then enabled the necessary extensions by uncommenting them in the php.ini file under the apache/bin directory. If you don't, you might hit http://arjudba.blogspot.com/2009/04/installing-magneto-fails-with-php.html

Installing Magento
1)Download the magento software from magentocommerce.com/download. You can download the .zip or .tar.gz installer package and decompress it with any compression utility.

2)Rename the decompressed folder to magento and place this (magento) folder into your web server directory.

3)Create a mysql database; for example, name the database magento. Also create a username and password for the database.

D:\xampp\mysql\bin>mysql.exe -h localhost -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create database magento;
Query OK, 1 row affected (0.00 sec)

mysql> create user 'magento' identified by 'magento';
Query OK, 0 rows affected (0.05 sec)

mysql> grant all privileges on *.* to 'magento'@'localhost' identified by 'magento';
Query OK, 0 rows affected (0.06 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

mysql> Bye

Ensure that you can log in to the database using the newly created user.

D:\xampp\mysql\bin>mysql.exe -h localhost -u magento -p
Enter password: *******
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 18
Server version: 5.1.30-community MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

4)You must have the correct permissions on the magento directory in order to proceed. On windows with an NTFS file system, make sure you have read, write and execute permission on the folder. On unix, change the permissions to 755 with chmod.

5)As already mentioned, magento needs PHP 5. If your server runs PHP 4, you need the PHP5 CGI binary.

6)In your browser, open http://localhost/magento. You will be redirected to http://localhost/magento/index.php/install/ and greeted with "Welcome to Magento's Installation Wizard!"

7)Tick the checkbox if you agree to the terms, then press Continue.

8)In the localization tab select Locale, Time Zone and Default Currency and then press Continue.

9)In the configuration window, fill in the fields as indicated. In this step you may face the problem described at http://arjudba.blogspot.com/2009/04/magneto-installation-crashes-apache.html, so watch out for it; also note that this step may take a few minutes before it continues to the next one.

10)The Create Admin Account window will appear. Fill in the necessary information and press Continue.

11)Installation is complete. A window will appear containing the words "You're All Set!". Now you can log in to the admin panel and go to the backend or frontend as you wish.
Related Documents
http://arjudba.blogspot.com/2009/04/after-installing-magento-cant-log-in-to.html
http://arjudba.blogspot.com/2009/04/magneto-installation-crashes-apache.html

Sunday, April 26, 2009

What is the sha-bang that appears first inside a shell script

You might have seen that in many shell scripts the first line starts with #! immediately followed by a path. The ! sign is called the exclamation mark, and the # sign the number sign/pound/hash. Another name for the # sign is sharp, and for the ! sign, bang. The concatenation of # and ! is therefore called the sha-bang (#!), also written she-bang or sh-bang.

The sha-bang (#!) at the head of a script tells your system that this file is a set of commands to be fed to the command interpreter indicated. After the #! comes a path name: the path to the program that interprets the commands in the script, whether it be a shell, a programming language, or a utility. This command interpreter then executes the commands in the script, starting at the top and ignoring comments.

We know that in a shell script any line starting with # is considered a comment. But if the #! line appears as the very first thing in a script, the system sees it and uses it to launch the right interpreter; afterwards, while the script executes, the interpreter treats that line as an ordinary comment. The line has already served its purpose - calling the command interpreter.

If, in fact, the script includes an extra #! line, then bash will interpret it as a comment.

The examples of commonly used sha-bangs are,
#!/bin/sh — On Solaris this indicates the Bourne shell; on linux it points to the bash shell.
#!/bin/bash — Execute using the bash shell.
#!/bin/ksh — Execute using the Korn shell.
#!/bin/zsh — Execute using the Z shell.
#!/bin/csh — Execute using the C shell.
#!/usr/bin/perl — Execute using Perl.
#!/usr/bin/php — Execute using PHP.
#!/usr/bin/python — Execute using Python.
#!/usr/bin/ruby — Execute using Ruby.


/bin/sh invokes the default shell interpreter, which is /bin/bash on a linux machine.

/bin/bash explicitly invokes the bash shell interpreter.

The path after the sha-bang may also point to a utility, like,
#!/bin/sed -f
#!/usr/bin/awk -f

When scripting you can omit the #! if the script consists only of generic system commands and uses no internal shell directives.

Note that the path given in the sha-bang must be correct; otherwise an error message, usually "Command not found.", will be the outcome of the script execution.
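Putting it together, a minimal sketch (hello.sh is a hypothetical script name):

# vi hello.sh
#!/bin/bash
# the first line tells the system to run this file with /bin/bash
echo "Hello from $0"

# chmod +x hello.sh
# ./hello.sh
Hello from ./hello.sh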

Related Documents
Introduction -What is kernel, shell, shell script.
Basic Steps to write a shell script
Variables in shell script
Output text with echo command

Saturday, April 25, 2009

Find pattern from a file using grep, egrep or fgrep utility

The grep, egrep and fgrep utilities are used to find patterns in files and print the matching lines. We can use these utilities to search for words across multiple files.

You can put the search pattern inside single quotes.

The syntax of using grep is,
grep pattern file_1 file_2 ..... file_n

Let's do our experiment on two files named blog_info.txt and myself.txt. The contents of these two files are shown below.

# vi blog_info.txt
Welcome to http://arjudba.blogspot.com blog.
This blog contains day to day tasks of Arju.
Most of the contents of this blog is oracle+php+shell scripts.
Thanks for visiting this blog.

# vi myself.txt
Assalamu Alaikum - which means peace be upon you.
I am Arju. I work as DBA Consultant.
I also help people to learn oracle online.
I completed by B.Sc from IUT.
I try to update my blog regularly.

Let's now search for the keyword Arju in both of these files.

# grep 'Arju' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
myself.txt:I am Arju. I work as DBA Consultant.

Several options can be used with the grep command. Below are some of them.

1)-h : When you search more than one file, as in the above output, each matching line is prefixed with the name of the file it came from. If you don't want the file names displayed, use the -h option.

# grep -h 'Arju' blog_info.txt myself.txt
This blog contains day to day tasks of Arju.
I am Arju. I work as DBA Consultant.

2)-w : Let's search for the keyword 'is' in both files. Any line containing 'is', even inside a longer word, will appear; so lines containing the word 'this' also show up.

# grep 'is' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
blog_info.txt:Most of the contents of this blog is oracle+php+shell scripts.
blog_info.txt:Thanks for visiting this blog.

To restrict the search to 'is' as a whole word only, use grep with the -w option.

# grep -w 'is' blog_info.txt myself.txt
blog_info.txt:Most of the contents of this blog is oracle+php+shell scripts.

3)-b : If you want to print the byte offset at which each matching line begins within the file, use the -b option.

# grep -b 'is' blog_info.txt myself.txt
blog_info.txt:45:This blog contains day to day tasks of Arju.
blog_info.txt:90:Most of the contents of this blog is oracle+php+shell scripts.
blog_info.txt:153:Thanks for visiting this blog.

In the first line 45 is printed because the line starting with "This" (which contains "is") begins at byte offset 45 of the file.

4)-c : The -c option displays only a count of the matching lines, not the lines themselves.

# grep -c 'is' blog_info.txt myself.txt
blog_info.txt:3
myself.txt:0

So within blog_info.txt the keyword "is" appears on three lines, but within myself.txt it appears on none.

5)-e : With the -e option you can specify one or more patterns for grep to search for. You may give each pattern with a separate -e option, or separate the patterns with newlines within a single pattern argument. For example, the following two commands are equivalent:
grep -e pattern_1 -e pattern_2 file
grep -e 'pattern_1
pattern_2' file

For example, to search for either keyword "Arju" or "IUT." in both files you can issue,

# grep -e 'Arju' -e 'IUT.' blog_info.txt myself.txt
blog_info.txt:This blog contains day to day tasks of Arju.
myself.txt:I am Arju. I work as DBA Consultant.
myself.txt:I completed by B.Sc from IUT.

6)-f : The -f patternfile option reads one or more patterns from patternfile. Patterns in patternfile are separated by newlines. An example is shown below.
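A quick sketch with a hypothetical patterns.txt holding one pattern per line:

# vi patterns.txt
Arju
IUT

# grep -f patterns.txt myself.txt
I am Arju. I work as DBA Consultant.
I completed by B.Sc from IUT.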


7)-i : The -i option tells grep to ignore case, so the keywords "blog" and "BlOg" are treated the same.
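For example,

# grep -i 'ARJU' myself.txt
I am Arju. I work as DBA Consultant.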

8)-l : The -l option prints only the names of the files that contain matching lines.
# grep -l 'IUT' blog_info.txt myself.txt
myself.txt

As "IUT" word is present inside file myself.txt so that is printed.

9)-n : The -n option precedes each line with the line number where it was found.
# grep -n 'IUT' myself.txt
4:I completed by B.Sc from IUT.

The 4: prefix shows that the match was found on line 4.

10)-q : The -q option suppresses output and simply returns an appropriate exit code. An example follows.
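This is handy in shell scripts; for example,

# grep -q 'IUT' myself.txt && echo "pattern found"
pattern found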

11)-s : The -s option suppresses the display of any error messages for nonexistent or unreadable files.

12)-U[b|B|l|L]: The -U option forces the specified files to be treated as Unicode files. By default, these utilities assume that Unicode characters are little-endian. If a byte-order marker is present, that is used to determine the byte order for the characters. You can force Unicode characters to be treated as big-endian by specifying -Ub or -UB. Similarly, you can force them to be treated as little-endian by specifying -Ul or -UL.

13)-v : The -v option displays all lines not matching a pattern.
For example, to print the lines of myself.txt that do not contain the letter "I", issue,

# grep -v 'I' myself.txt
Assalamu Alaikum - which means peace be upon you.

14)-x : The -x option requires the pattern to match an entire line.

# grep -x 'I completed by B.Sc from IUT.' myself.txt
I completed by B.Sc from IUT.

fgrep searches files for one or more pattern arguments, but does not use regular expressions. It does direct string comparison to find matching lines of text in the input.

egrep works similarly, but uses extended regular expression matching. If you include special characters in patterns typed on the command line, escape them by enclosing them in apostrophes to prevent inadvertent misinterpretation by the shell or command interpreter. To match a character that is special to egrep, a backslash (\) should be put in front of the character. It is usually easier to use fgrep if you don't need special pattern matching.


Friday, April 24, 2009

Remove duplicate successive line using uniq utility

With the uniq utility you can remove duplicate lines if they are successive. That is, uniq discards all but one of a run of successive identical lines in a file. Consider a file with the following lines.

# vi student_data.txt
Roll number is 024401
His Name is Rafi
His Name is Rafi
He is 24 years old.
Roll number is 024401

Running the uniq utility on it yields the following result.

# uniq student_data.txt
Roll number is 024401
His Name is Rafi
He is 24 years old.
Roll number is 024401

Note that the file contained two duplicated lines: "Roll number is 024401" and "His Name is Rafi". In the uniq output only the repeated "His Name is Rafi" line is omitted, because those occurrences are successive. The "Roll number is 024401" line is not removed because its occurrences, though identical, are not adjacent. So uniq removes adjacent identical lines only.

With the help of the sort command, uniq can remove all duplicate lines in a file regardless of whether they are successive. The following example removes all duplicate lines from student_data.txt and saves the result as sort_student.txt.

# sort student_data.txt | uniq > sort_student.txt

# cat sort_student.txt
He is 24 years old.
His Name is Rafi
Roll number is 024401
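As an aside, uniq can also count how many times each successive line repeats, via the -c option; a quick sketch on the original file:

# uniq -c student_data.txt
      1 Roll number is 024401
      2 His Name is Rafi
      1 He is 24 years old.
      1 Roll number is 024401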

Related Documents
http://arjudba.blogspot.com/2009/04/edit-file-on-linux-using-sed-utility.html

Edit file on linux using sed utility

With the sed utility you can edit the text of a file without opening it in an editor. sed is a stream editor for filtering and transforming text; by default it writes the edited text to standard output, leaving the original file unchanged.

Below is my student_grade.txt
# cat student_grade.txt
024401 4.98
024402 4.95
024403 4.95
024404 4.50
024405 4.95

Using sed we will replace the first few digits of the student id, 0244, with "Roll:".

The syntax of using sed utility is,
sed {expression} {file}

To replace "0244" with "Roll:", issue,
# sed '/0244/s//Roll:/g' student_grade.txt
Roll:01 4.98
Roll:02 4.95
Roll:03 4.95
Roll:04 4.50
Roll:05 4.95

Let's now break down the command. Within the single quotes,
/0244/ searches for the string 0244.
s means substitute (replace).
//Roll:/ replaces the matched "0244" with "Roll:" (the empty // reuses the previous search pattern).
g makes the change global, i.e. for every occurrence on a line.
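The same substitution is more commonly written with the pattern inside the s command itself; the following sketch produces identical output:

# sed 's/0244/Roll:/g' student_grade.txt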


Data manipulation using awk utility in linux

With the awk utility you can scan for a pattern within each record of a file and then process matching records as you want. Suppose you have student_grade.txt as below. The first field is the student id and the second field is the grade; the fields are separated by a tab delimiter.

# cat student_grade.txt
024401 4.98
024402 4.95
024403 4.95
024404 4.50
024405 4.95

Now we want to find the student ids whose grade is 4.95. With awk we search each record for 4.95 and then print the first field.

The syntax of the awk utility is,
awk '/pattern/ {action}' {file_name}

Now we can extract the student id whose grade is 4.95 by,
# awk '/4.95/{print $1}' student_grade.txt
024402
024403
024405

It searches for 4.95 within each record of student_grade.txt and, via "print $1", prints the first field of every matching record.
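Note that /4.95/ is a regular-expression match anywhere in the record (and the dot matches any character); comparing the grade field itself is stricter. A sketch:

# awk '$2 == 4.95 {print $1}' student_grade.txt
024402
024403
024405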

Meta characters used in awk
To search for a pattern in awk you can use various meta characters. The list of meta characters along with their meanings is given below.

1). (Dot) : Match any character
2)* : Match zero or more of the preceding character
3)^ : Match beginning of line
4)$ : Match end of line
5)\ : Escape the following character
6)[ ] : Match any one of the listed characters
7){ } : Match a range of instances
8)+ : Match one or more of the preceding
9)? : Match zero or one of the preceding
10)| : Separate choices to match

Predefined variables in awk
1)FILENAME : Name of the current input file
2)RS : Input record separator character (default is newline)
3)OFS : Output field separator string (blank is default)
4)ORS : Output record separator string (default is newline)
5)NR : Number of the current input record
6)NF : Number of fields in the current input record
7)OFMT : Output format for numbers
8)FS : Field separator character (blank & tab is default)
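A quick sketch showing NR and NF on the earlier file:

# awk '{print "record " NR " has " NF " fields"}' student_grade.txt
record 1 has 2 fields
record 2 has 2 fields
record 3 has 2 fields
record 4 has 2 fields
record 5 has 2 fields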

Related Documents
http://arjudba.blogspot.com/2009/04/translate-or-replace-characters-using.html

Translate or replace characters using tr utility

With the tr command you can translate letters from uppercase to lowercase and vice-versa. More generally, tr replaces one set of characters with another.

The syntax for using tr command is,
tr {source_pattern} {destination_pattern}

Each letter in source_pattern is replaced by the corresponding letter in destination_pattern. For example, if you write

tr "a6" "7y"

then from the string or file every "a" will be replaced by "7",
and every "6" will be replaced by "y".

Let's see an example. My names.txt looks like below.
# cat names.txt
momin
arju
bony
tany
azmeri
queen
tasreen

Now we want the letter "o" replaced by the number "0", the letter "i" replaced by the number "1", and the small letter "e" replaced by the capital letter "E".

# tr "oie" "01E"
m0m1n
arju
b0ny
tany
azmEr1
quEEn
tasrEEn

We can also capitalize all letters inside names.txt with a single statement, converting from lowercase to uppercase or vice-versa.

In the following example, the letters within names.txt are converted to all capitals and saved into names_capital.txt.

# tr "a-z" "A-Z" <> names_capital.txt

# cat names_capital.txt
MOMIN
ARJU
BONY
TANY
AZMERI
QUEEN
TASREEN
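The reverse conversion works the same way; a quick sketch:

# tr "A-Z" "a-z" < names_capital.txt
momin
arju
bony
tany
azmeri
queen
tasreen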

Related Documents
http://arjudba.blogspot.com/2009/04/join-utility-in-linux.html

Join utility in linux

We have seen that the paste utility, discussed on http://arjudba.blogspot.com/2009/04/merge-lines-using-paste-utility-on.html, merges two files line by line (line 1 of file1 is joined to line 1 of file2, and so on). The join utility instead merges lines that share an identical value in a common field. Join does not work line by line; it matches lines between the files by looking for identical join-field values.

An example will make this clearer. We have the files id_age.txt and id_dept.txt, whose data are shown below, followed by the output of both paste and join.

# cat id_age.txt
024401 28
024402 26
024434 23

# cat id_dept.txt
024401 CIT
024434 CSE
024438 EEE

# paste id_age.txt id_dept.txt
024401 28 024401 CIT
024402 26 024434 CSE
024434 23 024438 EEE

# join id_age.txt id_dept.txt
024401 28 CIT
024434 23 CSE

Note that the id 024434 is on the 2nd line of id_dept.txt but on the 3rd line of id_age.txt, and the merge still succeeds. (join expects both files to be sorted on the join field, as they are here.)
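join can also report lines that found no partner; a sketch using the -a option (-a1 additionally prints unpairable lines from the first file):

# join -a1 id_age.txt id_dept.txt
024401 28 CIT
024402 26
024434 23 CSE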
Related Documents
http://arjudba.blogspot.com/2009/04/merge-lines-using-paste-utility-on.html

Thursday, April 23, 2009

Merge lines using paste utility on linux

The cut utility is discussed in the post Retrieve column from file using cut utility in linux at http://arjudba.blogspot.com/2009/04/retrieve-column-from-file-using-cut.html, and the same example files are used here. The paste command merges lines together. In the cut example I made two files named id.txt and grade.txt; in this example I will merge them into one using the paste utility.

The syntax of using paste utility is,

paste {file1} {file2}

The first line of file2 is appended to the first line of file1.
The second line of file2 is appended to the second line of file1 and so on.

Now we will make a file named id_grade.txt that will merge two files named id.txt and grade.txt generated in http://arjudba.blogspot.com/2009/04/retrieve-column-from-file-using-cut.html

To do so issue,
# paste id.txt grade.txt > id_grade.txt

Let's see the information inside id_grade.txt

# cat id_grade.txt
024401 4.98
022401 3.98
021401 4.76
024402 4.02
022402 3.99

Related Documents
http://arjudba.blogspot.com/2009/04/retrieve-column-from-file-using-cut.html

Retrieve column from file using cut utility in linux

Suppose we have following file named student_info.txt where first field is student ID, second field is grade and third field is department.

# vi student_info.txt
024401 4.98 CIT
022401 3.98 EEE
021401 4.76 MCE
024402 4.02 CIT
022402 3.99 EEE

Note that fields are separated by tab delimiter.

With the cut utility we can extract the first, second or third field from the file.
The syntax of the cut command is,
cut -f{field number} {file-name}

Where field number is the column number in the file: 1, 2, 3, etc. Note that the field number comes immediately after -f, with no whitespace between them.

And the file-name is the name of the file.

Now we want to extract the list of student IDs, i.e. the first field in the file. Use it as,

# cut -f1 student_info.txt
024401
022401
021401
024402
022402

We can also save the output to a file. Suppose we want to extract the grade information and save it in grade.txt. Use it as,

# cut -f2 student_info.txt >grade.txt

# cat grade.txt
4.98
3.98
4.76
4.02
3.99

To extract student ID information into file id.txt issue,
# cut -f1 student_info.txt >id.txt

# cat id.txt
024401
022401
021401
024402
022402
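cut also accepts field lists and, via the -d option, delimiters other than tab. A quick sketch extracting fields 2 and 3 together:

# cut -f2,3 student_info.txt
4.98 CIT
3.98 EEE
4.76 MCE
4.02 CIT
3.99 EEE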

Related Documents
http://arjudba.blogspot.com/2009/04/getopts-command-in-linux-shell-script.html

Wednesday, April 22, 2009

getopts command in linux shell script

The getopts command is used in shell scripts to parse and validate the command line arguments passed to the script. The syntax of getopts inside a shell script is,

getopts {optstring} {variable_name}

From the manual page,
"optstring contains the option letters to be recognized; if a letter is followed by a colon, the option is expected to have an argument, which should be separated from it by white space. Each time it is invoked, getopts places the next option in the shell variable variable_name, When an option requires an argument, getopts places that argument into the variable OPTARG. On errors getopts diagnostic messages are printed when illegal options or missing option arguments are encountered. If an illegal option is seen, getopts places ? into variable_name."

For example you have a shell script named student_info that would be run by,
./student_info -i 024434 -a 23 -d CIT -s male
where student_info is the shell script name
-i is used for the student id.
-a is used for age.
-d is used for department.
-s is used for sex.

If the user supplies any argument other than these, you can show the script's usage information.

Let's look at the code.

# vi getopts.sh
help_menu()
{
echo "Usage Syntax: $0 -i -a -d -s"
echo "-i Student ID"
echo "-a Age"
echo "-d Department"
echo "-s Sex"
exit 1
}
if [ $# -lt 1 ]
then
help_menu
fi
while getopts i:a:d:s: option
do
case "$option" in
i) id="$OPTARG";;
a) age="$OPTARG";;
s) sex="$OPTARG";;
d) dept="$OPTARG";;
\?) help_menu
esac
done
echo "Student ID: $id ,Age: $age ,Sex: $sex ,Department: $dept "

# chmod +x getopts.sh

If you run getopts.sh without any argument then it will display the usage syntax.
# ./getopts.sh
Usage Syntax: ./getopts.sh -i -a -d -s
-i Student ID
-a Age
-d Department
-s Sex

If you give correct arguments then it will display as below.
# ./getopts.sh -i 024434 -a 23 -s male -d CIT
Student ID: 024434 ,Age: 23 ,Sex: male ,Department: CIT
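If you pass an option that is not in the optstring, getopts places ? into the variable and the help menu is shown. A sketch (the exact diagnostic text varies by shell):

# ./getopts.sh -x 10
./getopts.sh: illegal option -- x
Usage Syntax: ./getopts.sh -i -a -d -s
-i Student ID
-a Age
-d Department
-s Sex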

Related Documents
http://arjudba.blogspot.com/2009/04/shift-command-and-uses-of-it-in-shell.html

Tuesday, April 21, 2009

The shift command and uses of it in shell script

What is the shift command
As you know, if you type "./shift_test.sh one two foo" at the shell prompt, shift_test.sh is executed and three arguments are passed into it: one, two and foo, corresponding to $1, $2 and $3 respectively.
That is,
$1=one
$2=two
$3=foo


Now suppose I want to shift the positional parameters by one or two. For that we use the shift command. A plain shift moves the values stored in the positional parameters (command line args) one position to the left. So, after running shift we get,
$1=two
$2=foo

and $3 becomes null.


We can also shift the positional parameters by 2. In that case we write
shift 2
After shifting by 2 the values would be,
$1=foo
and $2, $3 become null.

A simple example of the shift command, shifting the positional parameters by 1, is shown below.

# vi shift_test.sh
echo "Current positional parameters \$1=$1, \$2=$2, \$3=$3"
shift
echo "after shift by 1 it becomes : \$1=$1, \$2=$2, \$3=$3"


# chmod +x shift_test.sh

# ./shift_test.sh one two foo
Current positional parameters $1=one, $2=two, $3=foo
after shift by 1 it becomes : $1=two, $2=foo, $3=

Usage of shift command in shell script
Before looking at a practical use of shift, let's see a simple example of converting a number from one number system to another.
The format for converting a number is,
echo "obase=to_which_number_system_to_convert; ibase=from_which_number_system_to_convert; number_to_convert;"|bc

For example to convert number 15 from decimal(10 base system) to hexadecimal(16 base system) issue,

# echo "obase=16; ibase=10; 15;"|bc
F

To convert number 15 from decimal(10 base system) to binary(2 base system) issue,
# echo "obase=2; ibase=10; 15;"|bc
1111

To convert number 15 from decimal(10 base system) to octal(8 base system) issue,
# echo "obase=8; ibase=10; 15;"|bc
17

Now we want to write a shell script that converts a decimal number into another number system.

The user gives the number in decimal (with -n) and the base to convert to (with -b). The shell script returns the number in the requested base.

For example, my shell script is named convert_from_decimal, and to convert decimal 15 to hexadecimal it can be called as,

./convert_from_decimal -b 16 -n 15
or
./convert_from_decimal -n 15 -b 16

Script is below.

# cat convert_from_decimal
while [ "$1" ]
do
if [ "$1" = "-b" ]; then
base="$2"
shift 2
elif [ "$1" = "-n" ]
then
number="$2"
shift 2
else
echo "$1 is not a valid option"
exit 1
fi
done
echo "Decimal $number equivalent $base - base system is `echo "obase=$base; ibase=10; $number;"|bc`"

# chmod +x convert_from_decimal

# ./convert_from_decimal -b 16 -n 15
Decimal 15 equivalent 16 - base system is F

# ./convert_from_decimal -n 15 -b 16
Decimal 15 equivalent 16 - base system is F


Related Documents
http://arjudba.blogspot.com/2009/04/trap-command-on-linux-shell-script.html

Sunday, April 19, 2009

After installing magento can't log in to admin panel

Problem Description
While installing magento on xampp version 2.5 you provided an admin username and password. After installation finished, you can no longer log in with your admin account. In the "Log in to Admin Panel" window, whenever you provide a wrong username/password combination it displays "Invalid Username or Password.", but whenever you provide the correct password it shows nothing, though a new url like
http://127.0.0.1/magento/index.php/admin/index/index/key/d135be4de664ab83db829120740e058a/

is displayed in the address bar.
Every time you try, you cannot log in to the admin panel.

Cause of the problem
The problem occurs because magento cannot store its cookies. We run it as localhost, and localhost is not a true domain, but a domain is needed to store cookies. That's why the login stops without a word.

Solution of the problem
Way 1:
In different forums it is mentioned that connecting as http://localhost/magento/index.php/admin will fail but connecting as http://127.0.0.1/magento/index.php/admin will work. In my case the IP address in the URL did not work either.
What worked for me was changing the browser. I installed magento using the google chrome browser, then opened the admin url in a firefox window and it worked; though even in firefox the url http://localhost/magento/index.php/admin did not work, the url http://127.0.0.1/magento/index.php/admin worked fine.

Way 2:
-Open the file app/code/core/Mage/Core/Model/Session/Abstract/Varien.php within your magento directory.

-Find the code,

session_set_cookie_params(
$this->getCookie()->getLifetime(),
$this->getCookie()->getPath(),
$this->getCookie()->getDomain(),
$this->getCookie()->isSecure(),
$this->getCookie()->getHttponly()
);

-Replace the above code with,

session_set_cookie_params(
$this->getCookie()->getLifetime(),
$this->getCookie()->getPath()
//$this->getCookie()->getDomain(),
//$this->getCookie()->isSecure(),
//$this->getCookie()->getHttponly()
);

-Save the file and try to log in to your magento admin panel again.
It should work. :)

Related Documents
http://arjudba.blogspot.com/2009/04/magneto-installation-crashes-apache.html

Why I don't like yahoo services, problems of yahoo

1)A big advertising window with the message unexpected HTTP Response - Status code:503:
After sending mail this message appears, though my other services are working properly.

2)Yahoo mail could not load this message:
My network connection is ok and everything else is working properly, yet after waiting a long time it ultimately fails to load my emails.

3)Mail is not loaded at all:
I used yahoo beta. After clicking a mail it does not load at all; only after double clicking does it load.

4)Captcha verification is asked each time:
Whenever I try to send any type of email it asks for captcha entry. It is a real pain to use yahoo mail.

5)Could not add more than 5000 contacts:
Nowadays every new entry I try to add to my contact list fails to be added. As a workaround I have to delete an existing contact first, as it seems yahoo can't hold more than 5000 contacts.

6)Yahoo Messenger hangs:
Sometimes yahoo messenger hangs and I need to kill the process and reload it. Yahoo messenger also takes more memory than other messengers while offering less functionality. With yahoo messenger I cannot archive my messages online, and yahoo never shows whether a message was successfully delivered.

7)Error Code 5 of yahoo:
While accessing mail, yahoo responds with error code 5 each time I load a mail.

8)Mailbox busy problem:
My inbox does not load, and after some minutes it returns an error message.

9)Unexpected HTTP Response problem:
Yahoo continues to pain us with big advertisement windows and its terrible service.

10)Spam folder loading time unlimited:
Good messages reside under spam and spam messages come to the inbox, and loading the spam folder takes unlimited time.

11)After all that, it fails to send messages:
What more can we say about yahoo mail. Is this a popular service? It fails to send messages.

12)Loading contacts fails:
My contacts do not load while I type a contact name; even after sending a message, yahoo itself can't load them.

13)Sending taking a little longer than usual:
It waits for 2 minutes displaying the message "Sending taking a little longer than usual".

14)Spam messages show in the inbox and good messages show in spam:
This is a big problem: the spam folder does not load, my important emails go to the spam folder, and all the spam messages come to my inbox. Shit yahoo!

15)Horrible Yahoo:
While sending messages it sometimes says "A message could not be sent and a copy was not saved on the server."

I used to use yahoo mail and yahoo messenger while I was in university, but now I am really frustrated with yahoo's service. Among the many problems, I want to share in this post some that I face daily.

16)It can't even save a message into drafts:

For the above reasons I would not choose yahoo mail or yahoo chat services. Google's services are the best; gtalk also works fine for me. There are no ads on gtalk and only a few ads on mail. I love google's services.

Related Documents
http://arjudba.blogspot.com/2008/08/list-of-available-advertising-network.html