views:

3177

answers:

8

I found that SQL stored procedures are very interesting and useful. I have written stored procedures, but I want to write well-crafted, performance-tuned, and concise SPs for any sort of requirement, and I would also love to learn any tricks or good practices for stored procedures. How do I move from the beginner stage to the advanced stage in writing stored procedures?

Update: I found from the comments that my question should be more specific. Everyone has some tricks up their sleeves, and I was hoping for the kind of tricks and practices for SPs that people use in their code - the ones that set them apart from others and, more importantly, boost productivity in writing and working with stored procedures.

+6  A: 

The only trick I always try to use is: always include an example usage in a comment near the top. This is also useful for testing your SP. I like to include the most common examples - then you don't even need SQL Prompt or a separate .sql file with your favorite invocation, since it's stored right there on the server (this is especially useful if you have stored procs that look at sp_who output for blocks or whatever and take a bunch of parameters).

Something like:

/*
    Usage:
    EXEC usp_ThisProc @Param1 = 1, @Param2 = 2
*/

Then to test or run the SP, you simply highlight that section in your script and execute.

Cade Roux
A: 

This is not a question that can be answered directly without more information, but a few general rules of thumb really apply.

Stored procedures are simply T-SQL queries that are stored. Therefore, becoming more familiar with T-SQL and its various functions and syntax is really what you need to do. Even more so from a performance standpoint, you will need to ensure that your queries and the underlying data structures match in a manner that allows for good performance - i.e., ensure that indexes, relationships, constraints, etc. are implemented where needed.

Understanding how to use performance-tuning tools, understanding how execution plans work, and things of that nature are how you get to that "next level".

Mitchel Sellers
+5  A: 

This is a very general question, but here are a couple of pieces of advice:

  • Name your stored procedures consistently. Many use a prefix to identify them as stored procedures, but don't use 'sp_' as the prefix, as that's reserved for the master database (in SQL Server, anyway)
  • SET NOCOUNT ON, as this suppresses the "N rows affected" messages that would otherwise be returned for each statement
  • Set-based queries often perform better than cursors. This question gets into this in much more detail.
  • If you're DECLARE'ing variables for your stored procedure, use good naming conventions just as you would/should in any other kind of programming.
  • Call SPs using their fully-qualified name to eliminate any confusion about which SP should be called, and to help boost SQL Server performance; this makes it easier to find the SP in question.
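A minimal sketch pulling several of these points together (the procedure, table, and parameter names here are illustrative, not from the original answer):

```sql
-- usp_ prefix: a consistent convention that avoids the reserved sp_ prefix
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int          -- descriptive parameter name
AS
BEGIN
    SET NOCOUNT ON;          -- suppress "N rows affected" messages

    SELECT OrderID, OrderDate
    FROM dbo.Orders          -- schema-qualified object reference
    WHERE CustomerID = @CustomerID;
END
GO

-- Call it with the fully-qualified name:
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = 42;
```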

There's much more, of course. Here's a link with more: SQL Server Stored Procedures Optimization Tips

AR
+2  A: 

In SQL Server I always put in a statement that will drop the procedure if it exists, so I can easily re-create the procedure while I am developing it. Something like:

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'usp') AND type in (N'P', N'PC'))
DROP PROCEDURE usp
Scott
You can simplify this with:
IF OBJECT_ID(N'dbo.usp') IS NOT NULL
    DROP PROCEDURE dbo.usp
GO
CREATE PROCEDURE dbo.usp ...
Tom
isn't that what ALTER is for...
dotjoe
Yes, ALTER can do the same thing, but if you write it as ALTER and go to put it on a server for the first time, it will fail. This process will work whether the proc already exists or not.
HLGEM
A: 

Basic stuff:

Have an error-handling policy, and trap errors on all SQL statements.
Decide on a policy for using source code control for stored procedures.
Include a commented header with user, date/time, and purpose of the sp.
Explicitly return 0 (success) for successful execution, something else otherwise.
For non-trivial procedures, include a test case (or cases) and description of expected result.
Get in the habit of performance-testing. For test cases, record execution time at least.
Understand explicit transactions, and use them.
Almost never call SPs from SPs. Reusability is a different ballgame with SQL.
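A sketch of a procedure template following most of the points above (the names and the exact error-handling style are illustrative, assuming pre-TRY/CATCH @@ERROR checking as discussed elsewhere on this page):

```sql
/*
    Author:   <name>        Date: <date>
    Purpose:  Template: commented header, error trap, explicit
              transaction, explicit return code.
    Test:     EXEC dbo.usp_Template @Value = 1   -- expect return 0
*/
CREATE PROCEDURE dbo.usp_Template
    @Value int
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @err int;

    BEGIN TRANSACTION;

    UPDATE dbo.SomeTable SET SomeCol = @Value WHERE ID = 1;
    SET @err = @@ERROR;      -- trap the error immediately
    IF @err <> 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN @err;         -- non-zero indicates failure
    END

    COMMIT TRANSACTION;
    RETURN 0;                -- 0 indicates success
END
```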

le dorfier
why not call SPs from SPs? I do it all the time out of necessity; if i don't it violates the DRY principle...?
Steven A. Lowe
DRY applies to procedural code, not declarative code. If it's procedural logic within the sp, I suggest you move it out to another level, where it certainly makes sense.
le dorfier
Another strong disincentive. Multilevel ROLLBACK TRANs are *so* screwed up.
le dorfier
A: 

This depends greatly on what you are doing in the stored procs. However, it is a good idea to use transactions if you are doing multiple inserts/updates or deletes in one proc. That way if one part fails, the other parts are rolled back leaving your database in a consistent state.

The two most important things to consider when writing to a database (and thus when using a stored proc that performs an action other than a select) are data integrity and performance. Without data integrity, you just have a database that contains garbage and is useless. Without performance, you will have no users (if they are outside clients) or unhappy users (if they are mandated to use your product - usually internal users who have no choice to go elsewhere). Neither one is good for your career. So in writing a stored proc, first make sure that data will be entered correctly into the database and that the action will fail as a whole if there is a problem in any one part of it.

If need be, write checks into the proc to ensure your final result will be correct. I'm an ETL specialist, and I always write my procs to make sure data is cleaned and normalized before I try to import it into my tables. If you are doing things from the user interface, this might not be so important to do in the proc, although I would have the user interface do checks before even running the proc to ensure the data is good for the insert (things like checking that a date field contains a real date, that all required fields have values, etc.).

If you are writing procs to put large amounts of data into tables, it is best to have a way to test the results before they are finalized. You'd be amazed at the junk you will get from clients and vendors in data imports. We write all our import procs with a test flag. That way you can return the selected data rather than perform the action, so that you can see in advance exactly what you would be affecting.
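One way such a test flag might look (the @TestMode parameter and the staging/target table names are made up for illustration):

```sql
CREATE PROCEDURE dbo.usp_ImportClientData
    @TestMode bit = 1        -- default to the safe preview mode
AS
BEGIN
    SET NOCOUNT ON;

    IF @TestMode = 1
        -- Preview: show what would be imported, change nothing
        SELECT s.ClientID, s.Name
        FROM dbo.StagingClientData s
        WHERE s.IsValid = 1;
    ELSE
        -- Live run: perform the actual import
        INSERT INTO dbo.ClientData (ClientID, Name)
        SELECT s.ClientID, s.Name
        FROM dbo.StagingClientData s
        WHERE s.IsValid = 1;
END
```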

I'm not a fan of dynamic SQL, and I prefer not to use it in stored procs. If you are stuck with dynamic SQL in existing procs, please put in a debug flag that will allow you to print the SQL rather than execute it. Then put the most typical cases you will need to run in the comments. You will find that you can maintain the proc much better if you do this.
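A sketch of that debug-flag idea (the @Debug parameter, procedure name, and query are illustrative):

```sql
CREATE PROCEDURE dbo.usp_SearchOrders
    @SortColumn sysname,
    @Debug bit = 0
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(max);

    -- QUOTENAME guards the injected identifier
    SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders ORDER BY '
             + QUOTENAME(@SortColumn);

    /*
        Typical cases:
        EXEC dbo.usp_SearchOrders @SortColumn = 'OrderDate', @Debug = 1
    */
    IF @Debug = 1
        PRINT @sql;                 -- show the SQL instead of running it
    ELSE
        EXEC sp_executesql @sql;
END
```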

Do not do things in a cursor just because you want to reuse another stored proc that only works on one record at a time. Code reuse that causes performance issues is a bad thing.

If you are using case statements or if statements, make sure you have done testing that will hit every possible branch. The one you don't test is the one that will fail.

HLGEM
+5  A: 

Here are my stored procedure error-handling guidelines.

  • Use sysmessage, sp_addmessage, and placeholders rather than hard-coded error messages.
  • When using sp_addmessage and sysmessages, always use error message number of 50001 or greater.
  • With RAISERROR, always supply a severity level <= 10 for warning messages.
  • With RAISERROR, always supply a severity level between 11 and 16 for error messages.
  • Remember that using RAISERROR doesn't always abort any batch in progress, even in trigger context.
  • Save @@error to a local variable before using it or interrogating it.
  • Save @@rowcount to a local variable before using it or interrogating it.
  • For a stored procedure, use the return value to indicate success/failure only, not any other/extra information.
  • Return value for a stored procedure should be set to 0 to indicate success, non-zero to indicate failure.
  • Set ANSI_WARNINGS ON - this detects null values in any aggregate assignment, and any assignment that exceeds the maximum length of a character or binary column.
  • Set NOCOUNT ON - among other benefits, this suppresses the "rows affected" messages and reduces network traffic.
  • Think carefully about whether you want XACT_ABORT ON or OFF. Whichever way you go, be consistent.
  • Exit on the first error - this implements the KISS model.
  • When executing a stored procedure, always check both @@error and the return value. For example:

    EXEC @err = AnyStoredProc @value
    SET  @save_error = @@error
    -- NULLIF says that if @err is 0, this is the same as null
    -- COALESCE returns the first non-null value in its arguments
    SELECT @err = COALESCE( NULLIF(@err, 0), @save_error )
    IF @err <> 0 BEGIN 
        -- Because stored proc may have started a tran it didn't commit
        ROLLBACK TRANSACTION 
        RETURN @err 
    END
    
  • When executing a local stored procedure that results in an error, do a rollback because it's possible for the procedure to have started a transaction that it didn't commit or rollback.
  • Don't assume that just because you haven't started a transaction, there isn't any active transaction - the caller may have started one.
  • Ideally, avoid doing rollback on a transaction that was started by your caller - so check @@trancount.
  • But in a trigger, always do rollback, as you don't know whether the caller initiated an active transaction (because @@trancount is always >= 1).
  • Always store and check @@error after the following statements:

    INSERT, DELETE, UPDATE
    SELECT INTO
    Invocation of stored procedures
    invocation of dynamic SQL
    COMMIT TRANSACTION
    DECLARE and OPEN CURSOR
    FETCH from cursor
    WRITETEXT and UPDATETEXT
    
  • If DECLARE CURSOR fails on a process-global cursor (the default), issue a statement to deallocate the cursor.
  • Be careful with an error in a UDF. When an error occurs in a UDF, execution of the function is aborted immediately and so is the query that invoked the UDF - but @@error is 0! You may want to run with SET XACT_ABORT ON in these circumstances.
  • If you want to use dynamic SQL, try to have only a single SELECT in each batch because @@error only holds the status of the last command executed. The most likely errors from a batch of dynamic SQL are syntax errors, and these aren't taken care of by SET XACT_ABORT ON.
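The points above about saving @@error/@@rowcount immediately and not rolling back the caller's transaction can be sketched as a fragment inside a procedure body (the table and column names are illustrative):

```sql
DECLARE @err int, @rowcount int, @starttrancount int;

SET @starttrancount = @@TRANCOUNT;   -- remember the caller's transaction state
IF @starttrancount = 0
    BEGIN TRANSACTION;

UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Capture both at once: any further statement would reset them
SELECT @err = @@ERROR, @rowcount = @@ROWCOUNT;

IF @err <> 0
BEGIN
    -- Only roll back a transaction we started ourselves
    IF @starttrancount = 0 AND @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    RETURN @err;
END

IF @starttrancount = 0
    COMMIT TRANSACTION;
RETURN 0;
```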
RoadWarrior
+1  A: 
  1. Always use SET NOCOUNT ON
  2. If you are going to perform two or more inserts/updates/deletes, please use a transaction.
  3. Never prefix your procs with 'sp_'. SQL Server will look in the master database first, not find it there, and only then look in your database. If you name your procs differently, SQL Server will look in your database first.

Bad:

SET NOCOUNT ON
BEGIN TRAN
  INSERT...
  UPDATE...
COMMIT

Better, but looks messy and a major pain to code:

SET NOCOUNT ON
BEGIN TRAN
  INSERT...
  IF @ErrorVar <> 0
  BEGIN
      RAISERROR(N'Message', 16, 1)
      GOTO QuitWithRollback
  END

  UPDATE...
  IF @ErrorVar <> 0
  BEGIN
      RAISERROR(N'Message', 16, 1)
      GOTO QuitWithRollback
  END

  EXECUTE @ReturnCode = some_proc @some_param = 123
  IF (@@ERROR <> 0 OR @ReturnCode <> 0)
       GOTO QuitWithRollback 
COMMIT
GOTO   EndSave              
QuitWithRollback:
    IF (@@TRANCOUNT > 0)
        ROLLBACK TRANSACTION 
EndSave:

Good:

SET NOCOUNT ON
SET XACT_ABORT ON
BEGIN TRY
    BEGIN TRAN
    INSERT...
    UPDATE...
    COMMIT
END TRY
BEGIN CATCH
    IF (XACT_STATE()) <> 0
        ROLLBACK
END CATCH

Best:

SET NOCOUNT ON
SET XACT_ABORT ON
BEGIN TRAN
    INSERT...
    UPDATE...
COMMIT

So where is the error handling in the 'Best' solution? You don't need any. Note the SET XACT_ABORT ON - it tells SQL Server to perform an automatic rollback if any error occurs. The code is cleaner, easier to read, easier to write, and less buggy - less buggy because there is no chance of missing an error condition, since SQL Server now does this for you.

Simon Hughes