Hello,

I have a table in one schema containing 110 GB of BLOBs, and I want to copy the data to a different table in another schema.

I only want to copy a single column of the source table, so I am using an UPDATE statement, but it takes 2.5 hours to copy 3 GB of data.

Is there a faster way to do this?

Thank you

update:

The code I am using is very simple:

UPDATE schema1.A a SET blobA = (SELECT blobB FROM schema2.B b WHERE b.IDB = a.IDA);

IDA and IDB are indexed columns.
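
For reference, the same correlated UPDATE can be written with an EXISTS guard so that rows in schema1.A with no match in schema2.B are left untouched instead of having blobA set to NULL (a sketch only, using the identifiers from the statement above):

-- Correlated UPDATE with an EXISTS guard (assumption: unmatched rows
-- should not be touched at all).
UPDATE schema1.A a
   SET a.blobA = (SELECT b.blobB
                    FROM schema2.B b
                   WHERE b.IDB = a.IDA)
 WHERE EXISTS (SELECT 1
                 FROM schema2.B b
                WHERE b.IDB = a.IDA);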

+1  A: 

Check whether there are indexes on the destination table that are causing the performance issue. If so, temporarily disable them, then recreate them after the data has been copied from the source column to the destination column.
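
A minimal sketch of that approach, assuming a hypothetical non-unique index schema1.A_BLOB_IDX on the destination table (indexes that back unique or primary key constraints cannot simply be skipped this way):

-- Mark the index unusable so the bulk UPDATE does not maintain it
-- (SKIP_UNUSABLE_INDEXES defaults to TRUE in 10g and later).
ALTER INDEX schema1.a_blob_idx UNUSABLE;

-- ... run the bulk UPDATE here ...

-- Rebuild the index once the copy has finished.
ALTER INDEX schema1.a_blob_idx REBUILD;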

Michael MacDonald
All the constraints in my destination table refer to the primary key of the table, not the BLOB column. So I think that shouldn't affect the performance...
simao
A: 

If you are on Oracle 10g or 11g, check ADDM to see what is causing the problem. It is probably an I/O or redo (transaction) log problem.

What kind of disk storage is this? Have you tried copying a 110 GB file from one place to another on that disk system? How long does it take?
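
A quick sketch of how to get at ADDM's findings (this assumes DBA privileges and a Diagnostics Pack licence; the report script and advisor views are the standard ones shipped with 10g/11g):

-- Interactive ADDM report for an AWR snapshot range:
@?/rdbms/admin/addmrpt.sql

-- Or query the findings directly:
SELECT f.type, f.impact, f.message
  FROM dba_advisor_findings f
  JOIN dba_advisor_tasks t ON t.task_id = f.task_id
 WHERE t.advisor_name = 'ADDM'
 ORDER BY f.impact DESC;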

zendar
A: 

I don't know whether Oracle automatically grows the database files or not. If it does, increase the space allocated to the database to more than the amount you are about to add, before running your query.
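
In Oracle terms this means pre-sizing the datafiles of the tablespace holding the LOB segment, or letting them autoextend in large steps rather than small ones; a sketch with a hypothetical tablespace BLOB_TS and file paths:

-- Pre-allocate enough room for the copy in one step (hypothetical names/paths):
ALTER TABLESPACE blob_ts
  ADD DATAFILE '/u01/oradata/ORCL/blob_ts02.dbf' SIZE 120G;

-- Or grow an existing datafile in large increments instead of small ones:
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/blob_ts01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;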

I know that in SQL Server, under the default setup, it automatically allocates an additional 10% of the database size as you start filling it up. When that fills up, it stops everything and allocates another 10%. For queries that bulk-load data, this can seriously slow things down.

Also, as zendar pointed out, check the disk I/O. If it has a high queue length, then you may be constrained by how fast the drives work.

Chris Lively