Wednesday, August 27, 2008

Procedure to insert millions of records

SQL>create table T(a number, b varchar2(12), c number);

SQL>begin
  for i in 1..1000000 loop
    insert into T values
      (i, dbms_random.string('U',4), dbms_random.value(1,1000));
    -- commit every 10,000 rows to keep undo usage modest
    if mod(i,10000) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/

This PL/SQL block helps you insert any number of records populated with random values; change the loop bound to control how many.

DBMS_RANDOM

DBMS_RANDOM: a built-in package that allows you to generate random numbers and strings.

dbms_random.value(lowerlimit, highlimit): is used to generate a random number greater than or equal to lowerlimit and less than highlimit.
eg: dbms_random.value(1,1000)

dbms_random.string(opt, length): is used to generate a random string of the specified length.
eg: dbms_random.string('U',10)
'u' or 'U': upper-case alpha characters
'l' or 'L': lower-case alpha characters
'a' or 'A': mixed-case alpha characters
'x' or 'X': upper-case alpha and numeric characters
'p' or 'P': any printable characters
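
A quick way to try both functions from SQL*Plus (the output differs on every run):

SQL>select dbms_random.value(1,1000), dbms_random.string('U',10), dbms_random.string('X',10) from dual;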

Friday, August 8, 2008

Nth row of a table

SQL> select a, r from (select a, rownum r from (select a from t_temp order by a desc)) where r = 3;

Note that ROWNUM is assigned before ORDER BY is applied, so the ORDER BY must sit in its own inner view; the row numbering and the filter then happen in the outer queries.
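
An equivalent approach, assuming the same t_temp table, uses the ROW_NUMBER analytic function and avoids the double nesting:

SQL> select a from (select a, row_number() over (order by a desc) r from t_temp) where r = 3;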

Friday, August 1, 2008

Script to Check the number of session-----------

Script to check the number of sessions running on the database
------------------------------------------------------------

SQL>set pages 100
SQL>set lines 3000
SQL>col machine format a25
SQL>col username format a15
SQL>col program format a25
SQL>break on 1
SQL>set heading off
SQL>select 'Sessions on database ' || d.name || ' having instance name ' || i.instance_name
    from v$database d, v$instance i;
SQL>set heading on
SQL>compute sum label 'Total Sessions' of sessions on 1
SQL>select 1, username, machine, program, count(*) sessions
    from v$session
    group by username, machine, program
    order by username, sessions desc;

Direct Path Exp Vs Conventional Exp

A direct path export should not be considered the converse of a direct path load using SQLLDR (for those of you familiar with that option). A direct path export does not read directly from the datafiles and write to the DMP file. SQLLDR does write directly to the datafiles from the DAT files. Export in direct path mode simply bypasses the SQL evaluation buffer (where clause processing, column formatting and such).
About 90% of the code path is the same: EXP is still reading blocks into the buffer cache and doing the same consistent-read processing.
The speedup of a direct path export can be large, however: the 10% of the processing that it cuts out accounts for a much larger percentage of the run time.
For example, I have just exported about 100 MB of data, 1.2 million records. The direct path export took about one minute; the conventional path export took three minutes. There is unfortunately no corresponding 'direct path import'. Import uses conventional SQL to insert the data back into tables. SQLLDR is still the way to go for high-performance data loads.
It should be noted that in direct path mode you cannot use the QUERY= parameter to select a subset of rows. This makes sense given that DIRECT=Y is simply a way to bypass the SQL evaluation buffer, and that is where WHERE clause processing would normally take place.
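
For reference, the switch is a single command line flag (scott/tiger and the emp table here are placeholders):

exp scott/tiger tables=emp file=emp.dmp              # conventional path export
exp scott/tiger tables=emp file=emp.dmp direct=y     # direct path export
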
Followup to a reader comment asking whether the dmp file is portable:
Yes, you can move the dmp file -- the file format is the same regardless of the direct= setting.

IMP/EXP Issue

I have one user that I need to export and import. Exporting is very fast; the dmp file is about 1 GB. For some reason I need to drop that user, recreate it, and import the same data from that dmp file. The user has two tablespaces, one for data and another for indexes. Previously the datafile size was around 2 GB. While importing I faced two problems:
1) Insufficient tablespace
2) It takes more than 7 hours. I used the following command to import on a Sun 250 server (OS ver 5.7): imp user/pwd file=/data1/data.dmp FULL=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=/data1/i_data.log Can you help me?

1) Insufficient tablespace: that means you didn't have enough room. This typically
happens if you do the default export, which uses "compress=y" and generates a CREATE
TABLE statement with an INITIAL extent equal to the sum of all allocated extents.
Recommend you always export with compress=n, for example as shown below.
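
For instance (the user and file names are placeholders):

exp scott/tiger owner=scott compress=n file=scott.dmp log=scott_exp.log
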
2) Import is a serial process. Your dmp file was 1 GB, and that was just the data; the
indexes, constraints, etc. were all in there as DDL. When you imported, it loaded the
data, then it created all of the indexes, then it verified all of the constraints, then
it compiled all of your code, and so on.
It's one big sequential process.
So, to "speed that up", you have to do things to make those processes faster
o make sure your alert log doesn't have any "checkpoint not complete" or "archival
required" messages. If it does, you have insufficient log configured, add more.
o check your sort_area_size for your index builds, consider increasing it during the
import.
o don't use a simple exp owner=, imp owner=. Export table by table (in parallel, just
fire off export more then once). Import table by table (in parallel, just fire off
import more then once).
o don't use import to create the indexes -- use imp ... indexfile=temp.sql to generate a
script with the create indexes. then modify them to have nologging and parallel. Run
them in sqlplus faster.
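
A minimal sketch of the indexfile approach, reusing the dmp file from the question (the parallel degree is an assumption):

imp user/pwd file=/data1/data.dmp full=y indexfile=temp.sql
imp user/pwd file=/data1/data.dmp full=y indexes=n

The first run only writes the DDL, including the CREATE INDEX statements, into temp.sql and imports nothing; the second loads the data without building indexes. Then edit temp.sql to add NOLOGGING PARALLEL 4 to each CREATE INDEX and run it:

SQL>@temp.sql
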
Just some ideas; I'm sure others will come along and put their ideas here as well,
things they've done. Me, I find doing more than one export and more than one import
the fastest and easiest (when forced to use exp/imp for this); see the example below.
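
For example, firing off two exports at once from the shell (table and file names are placeholders):

exp scott/tiger tables=big_tab1 file=big_tab1.dmp log=big_tab1.log &
exp scott/tiger tables=big_tab2 file=big_tab2.dmp log=big_tab2.log &
wait
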
If the "some reason" is just to rebuild objects -- I mean after you are done you end up
with exactly what you started with -- I would just use "alter table T move tablespace
foo" to rebuild the tables and "alter index I rebuild" to rebuild the indexes. No need
to ever take the data out of the database anymor.
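
A sketch of that, assuming a table T with an index T_IDX and a tablespace named FOO (all placeholder names):

SQL>alter table T move tablespace foo;
SQL>alter index T_IDX rebuild;

Note that moving a table marks its indexes UNUSABLE, so the rebuild step is required afterwards.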