
Hi,

We're dealing with a very slow UPDATE statement in an Oracle project.

Here's a small script to replicate the issue:

drop table j_test;

CREATE TABLE J_TEST
(
  ID  NUMBER(10) PRIMARY KEY,
  C1   VARCHAR2(50 BYTE),
  C2   VARCHAR2(250 BYTE),
  C3   NUMBER(5),
  C4   NUMBER(10)
);

-- just insert a bunch of rows
insert into j_test (id)
select rownum 
from <dummy_table>
where rownum < 100000;

-- this is the statement that runs forever (longer than my patience allows)
update j_test
set C3 = 1,
    C1 = 'NEU';

In some environments the update statement takes about 20 seconds; in others it runs for several minutes. With more rows, the problem gets even worse.

We have no idea what is causing this behavior, and we would like to understand what is going on before proposing a solution.

Any ideas or suggestions? Thanks, Thorsten

+3  A: 

Try this:

insert into j_test (id, C3, C1)
select rownum, 1, 'NEU'
from <dummy_table>
where rownum < 100000;
Gordon Bell
+3  A: 

Are you really trying to update a numeric field C4 NUMBER(10) with 'NEU' character value?

Assuming you're trying to do the following:

UPDATE j_test
   SET c3 = 3
 WHERE c1 = 'NEU'

You may need to create an index on the search column and gather statistics on the table to speed up the update. If you are really updating the entire table, then the update speed can vary; it depends on memory, disk access speed, redo log generation, and so on.
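A minimal sketch of that suggestion, assuming the filtered UPDATE above (the index name is made up; `DBMS_STATS` is the modern replacement for `ANALYZE TABLE`):

```sql
-- Index the search column so the WHERE c1 = 'NEU' predicate
-- doesn't force a full table scan.
CREATE INDEX j_test_c1_idx ON j_test (c1);

-- Refresh optimizer statistics so the new index is considered.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'J_TEST',
    cascade => TRUE  -- also gather statistics on the indexes
  );
END;
/
```

Note that an index only helps a selective UPDATE; for an update of every row it adds maintenance overhead instead.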

Also, as mentioned in another answer, you need to reserve some space for updates using PCTFREE; otherwise you are going to get a lot of chained rows in the table, which affects update speed.

+8  A: 

One possible cause of poor performance is row chaining. All your rows initially have every column except ID null, and then you update them all to hold values. The new data won't fit into the existing blocks, so Oracle has to chain the rows to new blocks.

If you know in advance that you will be doing this you can pre-allocate sufficient free space like this:

CREATE TABLE J_TEST
(
  ID  NUMBER(10) PRIMARY KEY,
  C1   VARCHAR2(50 BYTE),
  C2   VARCHAR2(250 BYTE),
  C3   NUMBER(5),
  C4   NUMBER(10)
) PCTFREE 40;

... where PCTFREE specifies a percentage of space to keep free for updates. The default is 10, which isn't enough for this example, where the rows are more or less doubling in size (from an average length of 8 to 16 bytes according to my db).

This test shows the difference it makes:

SQL> CREATE TABLE J_TEST
  2  (
  3    ID  NUMBER(10) PRIMARY KEY,
  4    C1   VARCHAR2(50 BYTE),
  5    C2   VARCHAR2(250 BYTE),
  6    C3   NUMBER(5),
  7    C4   NUMBER(10)
  8  );

Table created.

SQL> insert into j_test (id)
  2  select rownum 
  3  from transactions
  4  where rownum < 100000;

99999 rows created.

SQL> update j_test
  2  set C3 = 1,
  3      C2 = 'NEU'
  4  /

99999 rows updated.

Elapsed: 00:01:41.60

SQL> analyze table j_test compute statistics;

Table analyzed.

SQL> select blocks, chain_cnt from user_tables where table_name='J_TEST';

    BLOCKS  CHAIN_CNT
---------- ----------
       694      82034

SQL> drop table j_test;

Table dropped.

SQL> CREATE TABLE J_TEST
  2  (
  3    ID  NUMBER(10) PRIMARY KEY,
  4    C1   VARCHAR2(50 BYTE),
  5    C2   VARCHAR2(250 BYTE),
  6    C3   NUMBER(5),
  7    C4   NUMBER(10)
  8  ) PCTFREE 40;

Table created.

SQL> insert into j_test (id)
  2  select rownum 
  3  from transactions
  4  where rownum < 100000;

99999 rows created.

SQL> update j_test
  2  set C3 = 1,
  3      C2 = 'NEU'
  4  /

99999 rows updated.

Elapsed: 00:00:27.74

SQL> analyze table j_test compute statistics;

Table analyzed.

SQL> select blocks, chain_cnt from user_tables where table_name='J_TEST';

    BLOCKS  CHAIN_CNT
---------- ----------
       232          0

As you can see, with PCTFREE 40 the update takes 28 seconds instead of 102 seconds, and the resulting table consumes 232 blocks with no chained rows, instead of 694 blocks with 82,034 chained rows!

Tony Andrews
+1 for the excellent demonstration.
BQ
Good demonstration but recommending updating 100% of the rows is just enabling bad ideas.
Mark, I was rather assuming that this was a contrived example and that the real updates come later for some business reason. Obviously, if the rows need to have those values immediately, then they should be inserted with them!
Tony Andrews
A: 

This is very similar to the question and my answer here.

Never update 100% of the rows in a table. Just follow the procedure in that link: build the "right answer" as a new table and then swap that new table for the old one. The same goes for deleting a large percentage of the rows. It's far more efficient to use the scenario I've outlined.

EDIT: If this seems like a bad idea to some of you, just know that this is the technique recommended by Tom Kyte.
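A rough sketch of that create-and-swap approach for this example, using made-up table names and assuming no dependent objects (in a real migration, the constraints, indexes, grants, and triggers of the old table must be recreated on the new one before the swap):

```sql
-- Build the "right answer" directly, with the updated values baked in,
-- instead of rewriting every row in place.
CREATE TABLE j_test_new AS
SELECT id,
       CAST('NEU' AS VARCHAR2(50)) AS c1,  -- the value the UPDATE would have set
       c2,
       CAST(1 AS NUMBER(5))        AS c3,  -- likewise
       c4
FROM   j_test;

-- Swap the tables; drop the old one once the result is verified.
RENAME j_test TO j_test_old;
RENAME j_test_new TO j_test;
```

A CTAS like this is a direct-path operation, so it avoids most of the undo and redo overhead of a full-table UPDATE.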

+2  A: 

Are you sure the problem isn't coming from the fact that you're inserting 'NEU' into a NUMBER(10) field? It would have to attempt an on-the-fly conversion from 'NEU' to a number (??) before inserting.

I mean, seriously: the other answers contain nice and useful information, but a full update of 100k rows should be fast.

Remember - indexes tend to speed up selects, and slow down inserts / updates.

Kieveli
A: 

Another possibility is that the UPDATE is waiting because the table is locked (e.g. there is another uncommitted UPDATE against the same rows).
This link has a SQL statement to show locks.
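As a starting point (the original link is lost; this is one common way to spot blockers, using the `BLOCKING_SESSION` column of `V$SESSION`, available since Oracle 10g):

```sql
-- Sessions that are currently blocked, and which session is blocking them.
SELECT sid,
       serial#,
       blocking_session,
       seconds_in_wait,
       event
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```

If the slow UPDATE's session shows up here with a non-null `blocking_session`, the problem is lock contention rather than the statement itself.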

hamishmcn