Whenever we deploy a change to a database object, any code which depends on it is invalidated. This affects triggers, views and stored procedures. However, the next time something calls that code, the database will automatically recompile it.
So we don't need to worry about this, right? Well, yes, up to a point. The thing is, the invalidation of the triggers (or whatever) is a flag to us that a change has been made which could affect the operation of that trigger, and that might have side-effects. The most obvious side-effect is that the trigger won't compile. More subtly, the trigger might compile but then fail at runtime.
Hence, it is a good idea to force the recompilation of triggers in a development environment, to ensure that our change has not fundamentally broken anything. But we can skip that step when we deploy our change to production, because we do so confident that everything will recompile on demand. Depends on our nerve :)
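In development, forcing that check can be as simple as querying the data dictionary for anything our change has left invalid and recompiling it by hand. A minimal sketch (MY_TRIGGER is a hypothetical name):

    -- list everything our change has invalidated
    select object_name, object_type
    from   user_objects
    where  status = 'INVALID';

    -- recompile a specific trigger by hand
    alter trigger my_trigger compile;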
Oracle provides mechanisms for automatically recompiling all the invalid objects in a schema.
The most straightforward is to use DBMS_UTILITY.COMPILE_SCHEMA(). But this has been dodgy since 8i (because support for Java Stored Procedures introduced the potential for circular dependencies) and is no longer guaranteed to compile all objects successfully first time.
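Calling it is a one-liner. A minimal sketch for the current schema; note that the compile_all => false setting, which restricts the run to objects that are actually INVALID, is available in recent versions:

    begin
        dbms_utility.compile_schema(
            schema      => user,    -- the current schema
            compile_all => false    -- only touch objects that are INVALID
        );
    end;
    /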
In 9i Oracle gave us a script, $ORACLE_HOME/rdbms/admin/utlrp.sql, which recompiles all the invalid objects in the database. Unfortunately it requires SYSDBA access.
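Running it is straightforward, assuming we can connect as SYSDBA on the server (in SQL*Plus the ? shorthand expands to $ORACLE_HOME):

    sqlplus / as sysdba
    SQL> @?/rdbms/admin/utlrp.sql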
In 10g they added the UTL_RECOMP package, which basically does everything that script does. This is the recommended approach for recompiling large numbers of objects. Unfortunately it also requires SYSDBA access.
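A minimal sketch, run as SYSDBA. The package offers serial and parallel flavours; the thread count of four below is just an example:

    begin
        -- recompile all invalid objects, one at a time
        utl_recomp.recomp_serial();

        -- or spread the work over (say) four job slaves:
        -- utl_recomp.recomp_parallel(4);
    end;
    /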
In 11g Oracle introduced fine-grained dependency management. This means that changes to tables are evaluated at a finer granularity (basically column level rather than table level), and only objects which are directly affected by the change are invalidated.
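The classic demonstration (the table and view names here are made up for illustration): add a column that a dependent view doesn't reference, and on 11g the view stays valid where 10g would have invalidated it.

    create table t (a number, b number);
    create view v as select a from t;

    -- column C is not referenced by the view...
    alter table t add (c number);

    -- ...so on 11g this reports VALID; on 10g it would report INVALID
    select status from user_objects where object_name = 'V';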