
I have the following Oracle SQL:

    begin
      -- drop every table in the current schema
      for c in (select table_name from user_tables) loop
        execute immediate 'drop table ' || c.table_name || ' cascade constraints';
      end loop;

      -- drop every sequence in the current schema
      for c in (select sequence_name from user_sequences) loop
        execute immediate 'drop sequence ' || c.sequence_name;
      end loop;
    end;

It was given to me by another dev, and I have no idea how it works, but it drops all of the tables (and sequences) in our schema.

It works, but it takes forever!

[Screenshot of the script output]

I don't think dropping all of my tables should take that long. What's the deal? And, can this script be improved?

Note: There are somewhere around 100 tables.

+1  A: 

I would try changing the DROP TABLE statement to use the PURGE keyword. Since you are dropping all of the tables, you don't really need to cascade the constraints at the same time; that cascade is probably what is making it slow. I don't have an instance of Oracle to test this with, though, so it may throw an error.
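
A minimal sketch of that suggestion, assuming the goal is still to clear the whole schema (untested, as noted above):

    begin
      for c in (select table_name from user_tables) loop
        -- purge drops the table immediately, bypassing the recycle bin;
        -- without cascade constraints this can fail with ORA-02449 if a
        -- foreign key in another table still references this one
        execute immediate 'drop table ' || c.table_name || ' purge';
      end loop;
    end;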

If it does throw an error, or doesn't go any faster, I would remove the sequence drop commands to figure out which statement is taking so much time.

Oracle's documentation on the DROP TABLE command covers both the PURGE and CASCADE CONSTRAINTS clauses.

davisoa
I will look into that tomorrow, thanks.
jjnguy
The CASCADE CONSTRAINTS clause allows dropping a table whose primary key is referenced by another table's foreign key. Without it we either need to fix the order in which the tables are dropped so that the dependent ones go first, or accept that the script will fail with ORA-02449.
APC
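
For what it's worth, the two clauses can be combined, so the foreign-key ordering problem and the recycle-bin overhead can be handled in one statement. A minimal, untested sketch:

    begin
      for c in (select table_name from user_tables) loop
        -- cascade constraints drops referencing foreign keys first,
        -- purge skips the recycle bin
        execute immediate 'drop table ' || c.table_name
                          || ' cascade constraints purge';
      end loop;
    end;
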
+3  A: 

"It works, but it takes forever!"

Forever in this case meaning less than three seconds a table :)

There is more to dropping a table than just dropping the table. There are dependent objects to drop as well - constraints, indexes, triggers, LOB or nested table storage, etc. There are views, synonyms, and stored procedures to invalidate. There are grants to be revoked. The table's space (and that of its indexes, etc.) has to be de-allocated.

All of this activity generates recursive SQL: queries which select from or update the data dictionary, and which can perform badly. Even if we don't use triggers, views, or stored procedures, the database still has to run the queries to establish their absence.
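
As a rough illustration, the standard data dictionary views show how many dependent objects the drops have to account for; a quick, schema-dependent count might look like this:

    select 'constraints' as object_type, count(*) as total from user_constraints
    union all
    select 'indexes', count(*) from user_indexes
    union all
    select 'triggers', count(*) from user_triggers;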

Unlike normal SQL, we cannot tune recursive SQL, but we can shape the environment to make it run more quickly.

I'm presuming that this is a development database, in which objects get built and torn down on a regular basis, and that you're using 10g or higher.

  1. Clear out the recycle bin.

    SQL> purge recyclebin;

  2. Gather statistics for the data dictionary (this will require DBA privileges). They may already be gathered, as that is the default behaviour in 10g and 11g; see the sketch after this list.

  3. Once you have dictionary stats ensure you're using the cost-based optimizer. Ideally this should be set at the database level, but we can fix it at the session level:

    SQL> alter session set optimizer_mode=choose;
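
For step 2, a minimal sketch, assuming DBA privileges (DBMS_STATS.GATHER_DICTIONARY_STATS is the standard call in 10g and later):

    SQL> exec dbms_stats.gather_dictionary_stats;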

APC
Thanks for the answer. I will check it out tomorrow at work.
jjnguy