In an Oracle 10g environment, I have a statement that needs to be executed several million times based on the results of a cursor. For flexibility, the current code has the statement as a constant in the package body. Here is a simplified version of the code. Please keep in mind this is all within the same package:

c_Stmt  CONSTANT VARCHAR2(128) := 'DELETE FROM t WHERE fldA = :b1';

...

PROCEDURE p1(vSomeNumber NUMBER) IS
BEGIN
  EXECUTE IMMEDIATE c_Stmt USING vSomeNumber;
END;

...

FOR i IN 1 .. 9999999 LOOP
  p1(i);
END LOOP;

Since the dynamic SQL already uses bind variables, is there any performance advantage to rewriting this code to replace the dynamic SQL with a regular DML statement like this:

PROCEDURE p1(vSomeNumber NUMBER) IS
BEGIN
  /* EXECUTE IMMEDIATE c_Stmt USING vSomeNumber; */
  DELETE FROM t WHERE fldA = vSomeNumber;
END;

I think the impact might be on the number of parses, but it's unclear to me if that's still a concern in 10g.

Thank you for your answers.

+3  A: 

Regular SQL is faster than dynamic SQL (EXECUTE IMMEDIATE), but why do you delete the rows one by one? Why not one big DELETE statement that removes them all at once? Think set-based, not row-by-row!
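For illustration, assuming the loop counter really is the fldA value (as in the simplified example), the whole loop could collapse into a single set-based statement:

-- One set-based statement instead of millions of single-row calls
-- (assumes fldA covers the same 1..9999999 range the original loop walks).
DELETE FROM t WHERE fldA BETWEEN 1 AND 9999999;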

tuinstoel
Thank you for the quick answer. Regarding deleting row by row - please just assume that we must do this. This is a simplified version of the code. There are many things we have to do for each record prior to deleting it.
elmnoise
For me that is hard to assume, because I have seen many cases where it wasn't true at all (I don't mean this in a rude way). But regular SQL is faster, and with it you know at compile time that the table exists; with dynamic SQL, typos and similar mistakes only surface as run-time errors.
tuinstoel
The middle ground is to use bulk binding. But tuinstoel is right -- make every effort to use regular SQL in bulk.
David Aldridge
+1  A: 

Generally it's more efficient to prepare the statement before your loop, and execute the statement handle during the loop (supplying the parameter at that time). That way you avoid the overhead of parsing and preparing the statement each time through the loop.

Using static SQL would avoid the parsing step, since that is done when the procedure is compiled. But it doesn't avoid the overhead of the prepare step.

In any case, people on the internet almost never know for certain which way will perform better; that depends partly on your database and your data. You should try it both ways and measure the performance yourself.
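As a rough sketch of such a measurement (DBMS_UTILITY.GET_TIME returns hundredths of a second; p1_static is a hypothetical copy of p1 that uses the static DELETE, and the sample size is arbitrary):

DECLARE
  l_start NUMBER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;
  FOR i IN 1 .. 100000 LOOP
    p1(i);                        -- dynamic-SQL version
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('dynamic: ' || (DBMS_UTILITY.GET_TIME - l_start) || ' cs');

  -- Reset or reload the test data here, otherwise the second run has nothing to delete.

  l_start := DBMS_UTILITY.GET_TIME;
  FOR i IN 1 .. 100000 LOOP
    p1_static(i);                 -- hypothetical static-SQL version
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('static : ' || (DBMS_UTILITY.GET_TIME - l_start) || ' cs');
END;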

Bill Karwin
A: 

Since you are stuck in PL/SQL hell, another option would be to update each row to indicate that it should be deleted.

Your current queries can ignore these 'soft deleted' rows.

During some future maintenance window, you could then use a simple DELETE statement to drop the 'soft deleted' rows.
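A minimal sketch of that idea, assuming a flag column (here called deleted_flag) is added to t just for this purpose:

-- Inside the per-row processing: mark instead of delete
-- (deleted_flag is an assumed, added column).
UPDATE t SET deleted_flag = 'Y' WHERE fldA = vSomeNumber;

-- Later, during a maintenance window: purge in one statement.
DELETE FROM t WHERE deleted_flag = 'Y';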

EvilTeach
Very complicated, and why would it help? You still have to update all those rows; it is better to delete them directly.
tuinstoel
+1  A: 

One of the reasons the second method (using the DELETE statement) will be faster than the first (using EXECUTE IMMEDIATE) is context switching. If you do the EXECUTE IMMEDIATE, the PL/SQL process has to stop, Oracle needs to switch to the parse step, parse your statement (or retrieve it from the statement cache), then switch again to the execute step. With the latter, there is one fewer context switch.

These context switches (like thread switches in multi-threaded programs) are time consuming. This is the reason Oracle spent a fair amount of time getting the bulk-processing statements (bulk insert, bulk select) to work correctly in PL/SQL.

As suggested above, try to get the PL/SQL to do all the deletes in one go, for example by collecting the IDs to delete in a PL/SQL table and executing the delete in a FORALL statement. Fewer context switches mean more work gets done quickly.
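A hedged sketch of that approach, assuming the IDs to delete can be collected into a PL/SQL table first (in real code they would come from the cursor and the per-row processing, and you would probably flush the collection every few thousand rows rather than hold millions of entries in memory):

DECLARE
  TYPE t_id_tab IS TABLE OF t.fldA%TYPE INDEX BY PLS_INTEGER;
  l_ids t_id_tab;
BEGIN
  -- Collect the IDs to delete.
  FOR i IN 1 .. 100000 LOOP
    l_ids(i) := i;
  END LOOP;

  -- One context switch for the whole batch instead of one per row.
  FORALL i IN 1 .. l_ids.COUNT
    DELETE FROM t WHERE fldA = l_ids(i);
END;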

Thomas Jones-Low
+1  A: 

Depending on the exact needs, you have several alternatives for improving the delete speed.

Firstly, store a bunch of IDs in a PL/SQL array and periodically (and at the end) do either a FORALL array DELETE or a DELETE FROM table WHERE ID in (:arr(1), :arr(2)...)

Secondly, store the IDs in a GLOBAL TEMPORARY TABLE and periodically/at the end, do a DELETE FROM table WHERE id IN (SELECT id FROM global_temp_tbl)

Those two options obviously delay the deletion step, so they might affect SQL inside the loop that uses those tables.
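A sketch of the second option, assuming a global temporary table (the name here is illustrative) created once, outside the PL/SQL code:

-- One-time DDL; ON COMMIT PRESERVE ROWS keeps the rows for the whole session.
CREATE GLOBAL TEMPORARY TABLE ids_to_delete (id NUMBER) ON COMMIT PRESERVE ROWS;

-- Inside the loop: just record the ID.
INSERT INTO ids_to_delete (id) VALUES (vSomeNumber);

-- Periodically, or at the end: one set-based delete, then clear the staging rows.
DELETE FROM t WHERE fldA IN (SELECT id FROM ids_to_delete);
DELETE FROM ids_to_delete;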

Thirdly, switch to static SQL. I don't see the benefit of the dynamic SQL here (but you may have simplified the example such that the benefit is hidden). Switching to static SQL would reduce the number of soft parses (Oracle hashing the statement and finding the match in its statement cache). That saving may or may not be significant in your case. It would mostly be CPU time, but perhaps also some latch wait time if you have other processes running at the same time.

Fourthly, if the soft parse overhead is significant, but there is a reason you don't want to switch to static SQL, you can use DBMS_SQL. There, the DBMS_SQL.PARSE would be outside the loop, and only the BIND_VARIABLE and EXECUTE components would be inside the loop.
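A hedged sketch of that DBMS_SQL pattern (error handling omitted; the parse happens once, only the bind and execute repeat):

DECLARE
  l_cur INTEGER := DBMS_SQL.OPEN_CURSOR;
  l_rc  INTEGER;
BEGIN
  -- Parse once, outside the loop.
  DBMS_SQL.PARSE(l_cur, 'DELETE FROM t WHERE fldA = :b1', DBMS_SQL.NATIVE);

  FOR i IN 1 .. 9999999 LOOP
    -- Only bind and execute inside the loop.
    DBMS_SQL.BIND_VARIABLE(l_cur, ':b1', i);
    l_rc := DBMS_SQL.EXECUTE(l_cur);
  END LOOP;

  DBMS_SQL.CLOSE_CURSOR(l_cur);
END;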

Gary