(Database: Oracle 10G R2)
It takes 1 minute to insert 100,000 records into an empty table, but if the table already contains some records (400K), it takes 4 minutes and 12 seconds; CPU wait also jumps up and "free buffer waits" become very high (as reported by dbconsole).
Do you know what's happening here? Is this caused by frequent table extent allocation? The extent size for these tables is 1,048,576 bytes, and I have a feeling the DB is spending time extending the table's storage.
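One way I could try to verify the extent hypothesis (a rough sketch, assuming the table is named CUSTOMER and I have access to USER_EXTENTS) is to count extents before and after the insert run:

-- Count extents and total allocated space for the CUSTOMER table;
-- run before and after the 100K insert to see how many extents were added.
SELECT COUNT(*)   AS extent_count,
       SUM(bytes) AS total_bytes
FROM   user_extents
WHERE  segment_name = 'CUSTOMER'
AND    segment_type = 'TABLE';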
I am really confused about this. So any help would be great!
This is the insert statement:
begin
  for i in 1 .. 100000 loop
    insert into customer (id, business_name, address1, address2, city,
                          zip, state, country, fax, phone, email)
    values (customer_seq.nextval,
            dbms_random.string ('A', 20),
            dbms_random.string ('A', 20),
            dbms_random.string ('A', 20),
            dbms_random.string ('A', 20),
            trunc (dbms_random.value (10000, 99999)),
            'CA',
            'US',
            '798-779-7987',
            '798-779-7987',
            '[email protected]');
  end loop;
end;
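For reference, the elapsed times can be captured around the loop like this (just a sketch using dbms_utility.get_time; not necessarily how the numbers above were measured):

declare
  t_start number;
begin
  t_start := dbms_utility.get_time;  -- time in hundredths of a second
  -- <the insert loop from above goes here>
  dbms_output.put_line('elapsed: ' ||
      (dbms_utility.get_time - t_start) / 100 || ' seconds');
end;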
Here is the dstat output (CPU, IO, memory, net) for:
- Empty Table inserts: http://pastebin.com/f40f50dbb
- Table with 400K records: http://pastebin.com/f48d8ebc7
Output from v$buffer_pool_statistics
ID: 3
NAME: DEFAULT
BLOCK_SIZE: 8192
SET_MSIZE: 4446
CNUM_REPL: 4446
CNUM_WRITE: 0
CNUM_SET: 4446
BUF_GOT: 1407656
SUM_WRITE: 1244533
SUM_SCAN: 0
FREE_BUFFER_WAIT: 93314
WRITE_COMPLETE_WAIT: 832
BUFFER_BUSY_WAIT: 788
FREE_BUFFER_INSPECTED: 2141883
DIRTY_BUFFERS_INSPECTED: 1030570
DB_BLOCK_CHANGE: 44445969
DB_BLOCK_GETS: 44866836
CONSISTENT_GETS: 8195371
PHYSICAL_READS: 930646
PHYSICAL_WRITES: 1244533
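For what it's worth, the same wait information can be pulled directly with SQL instead of dbconsole (a sketch against v$system_event):

-- System-wide totals for the waits seen during the load
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('free buffer waits', 'write complete waits', 'buffer busy waits')
ORDER  BY time_waited DESC;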
UPDATE
I dropped the indexes on this table and performance improved drastically, even when inserting 100K rows into a table that already held 600K records (it took 47 seconds with no CPU wait; see the dstat output: http://pastebin.com/fbaccb10).
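For anyone curious, this is roughly the drop-and-recreate approach (a sketch only; the index name and definition below are placeholders, the real ones come from user_indexes):

-- list the indexes on the table
SELECT index_name, uniqueness
FROM   user_indexes
WHERE  table_name = 'CUSTOMER';

-- drop an index before the bulk load (placeholder name)
DROP INDEX customer_name_idx;

-- ... run the inserts ...

-- recreate it afterwards (placeholder definition)
CREATE INDEX customer_name_idx ON customer (business_name);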