views:

182

answers:

8

Hello Guys,

Do you think having a TRANSACTION around every SQL statement in stored procedures is good practice? I'm just about to optimise a legacy application in my company, and one thing I found is that every stored procedure has BEGIN TRANSACTION. Even a simple SELECT or UPDATE statement has one. I thought it would be nice to have BEGIN TRANSACTION when performing multiple actions (i.e. multiple INSERTs, UPDATEs or DELETEs), but not for a single INSERT, UPDATE or DELETE. I may be wrong, which is why I need someone else to advise me. Thanks for your time, guys.

+1  A: 

I don't know of any benefit over just using the implicit transactions these statements already get.

A possible disadvantage of using explicit transactions everywhere is that it adds clutter to the code, making it harder to spot the places where an explicit transaction is genuinely being used to ensure correctness across multiple statements.

Additionally, it increases the risk that a transaction is left open holding locks, unless care is taken (e.g. with SET XACT_ABORT ON).
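A minimal sketch of that safeguard, using a hypothetical procedure and table: with SET XACT_ABORT ON, most run-time errors roll back the whole transaction and abort the batch, so a failed statement cannot leave the transaction open and holding locks.

```sql
-- Illustrative only: procedure and table names are made up.
CREATE PROCEDURE dbo.usp_TransferExample
AS
BEGIN
    -- Any run-time error now rolls back the transaction automatically
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    COMMIT TRANSACTION;
END
```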

Martin Smith
+2  A: 

The only possible reason I can see for this is if you might need to roll back the transaction for a reason other than a SQL failure.

However, if the code is literally

begin transaction
statement
commit

Then I see absolutely no reason to use an explicit transaction, and it's probably being done because it's always been done that way.

Donnie
+4  A: 

It is entirely unnecessary, as each SQL statement executes atomically, i.e. as if it were already running in its own transaction. In fact, opening unnecessary transactions can lead to increased locking, even deadlocks. Forgetting to match COMMITs with BEGINs can leave a transaction open for as long as the connection to the database is open, interfering with other transactions on the same connection.
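A short sketch of that failure mode, assuming a hypothetical dbo.SomeTable: a BEGIN TRAN whose COMMIT is skipped (say, by an early RETURN on an error path) leaves the transaction open, and @@TRANCOUNT reveals it.

```sql
BEGIN TRAN;
UPDATE dbo.SomeTable SET SomeCol = 1 WHERE Id = 42;  -- hypothetical table
-- ...an error path exits here without COMMIT or ROLLBACK...

SELECT @@TRANCOUNT;  -- 1: the transaction is still open and its locks
                     -- block other sessions until the connection closes
ROLLBACK TRAN;       -- the cleanup that was forgotten
```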

Such coding almost certainly means that whoever wrote the code was not very experienced in database programming and is a sure smell that there may be other problems as well.

Panagiotis Kanavos
+1  A: 

I can only say that placing a transaction block like this in every stored procedure looks like a novice's work.

A transaction should only wrap a block that contains more than one INSERT/UPDATE statement; beyond that, there is no need for a transaction block in a stored procedure.

Gunner 4 Life
+1  A: 

When performing multiple INSERT/UPDATE/DELETE statements, it is better to use a transaction to ensure atomicity: either all the tasks of the operation are executed, or none of them are.

For a single INSERT/UPDATE/DELETE statement, it depends on what kind of operation (from a business-layer perspective) you are performing and how important it is. If you perform some calculation before the single INSERT/UPDATE/DELETE, it is better to use a transaction, because the data you read may have changed by the time you write.
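A sketch of that read-then-write case, assuming a hypothetical dbo.Stock table: a transaction plus an UPDLOCK hint keeps another session from changing the row between the read and the dependent write.

```sql
BEGIN TRANSACTION;

DECLARE @qty INT;

-- UPDLOCK holds an update lock on the row until the transaction ends,
-- so the quantity cannot change under us between SELECT and UPDATE
SELECT @qty = Quantity
FROM dbo.Stock WITH (UPDLOCK)
WHERE ProductId = 7;

IF @qty >= 1
    UPDATE dbo.Stock SET Quantity = @qty - 1 WHERE ProductId = 7;

COMMIT TRANSACTION;
```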

Sharique
+1  A: 

BEGIN TRANSACTION / COMMIT syntax shouldn't be used in every stored procedure by default unless you are trying to cover the following scenarios:

  1. You include the WITH MARK option because you want to support restoring the database from a backup to a specific point in time.

  2. You intend to port the code from SQL Server to another database platform like Oracle. Oracle does not commit transactions by default.
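A sketch of scenario 1, with made-up database and table names: a marked transaction records a named point in the transaction log, which a later log restore can stop at.

```sql
-- The mark name (MonthEndClose) is logged along with the description
BEGIN TRANSACTION MonthEndClose WITH MARK 'Month-end close, July';

UPDATE dbo.Ledger SET Closed = 1 WHERE Period = '2010-07';  -- hypothetical

COMMIT TRANSACTION;

-- Later, during point-in-time recovery (illustrative):
-- RESTORE LOG MyDb FROM DISK = '...' WITH STOPATMARK = 'MonthEndClose';
```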

Registered User
+1  A: 

One plus point is that you can add another INSERT (for example) later and it's already safe.

Then again, you also have the problem of nested transactions if one stored procedure calls another: a rollback in the inner procedure will cause error 266.

If every call is simple CRUD with no nesting then it's pointless; but if you nest, or have multiple writes per transaction, then it's good to have a consistent template.
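One possible shape for such a template, sketched with a hypothetical procedure and table: only open and commit a transaction at the outermost level, so an inner procedure called from an already-open transaction does not trip error 266 (a @@TRANCOUNT mismatch on exit).

```sql
CREATE PROCEDURE dbo.usp_InnerWrite  -- illustrative name
AS
BEGIN
    SET XACT_ABORT ON;
    DECLARE @startedTran BIT = 0;

    -- Only start a transaction if the caller hasn't already
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;
        SET @startedTran = 1;
    END

    UPDATE dbo.SomeTable SET SomeCol = 1 WHERE Id = 1;  -- hypothetical

    -- Only commit the transaction we opened ourselves, so
    -- @@TRANCOUNT is unchanged on exit when nested
    IF @startedTran = 1
        COMMIT TRANSACTION;
END
```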

gbn
+1  A: 

You mentioned that you'll be optimizing this legacy app.

One of the first, and easiest, things you can do to improve performance is remove all the BEGIN TRAN and COMMIT TRAN for the stored procedures that only do SELECTs.

Here is a simple test to demonstrate:

/* Compare basic SELECT times with and without a transaction */

DECLARE @date DATETIME2
DECLARE @noTran INT
DECLARE @withTran INT

SET @noTran = 0
SET @withTran = 0

DECLARE @t TABLE (ColA INT)
INSERT @t VALUES (1)

DECLARE 
  @count INT,
  @value INT

SET @count = 1

WHILE @count < 1000000
BEGIN

  -- SYSDATETIME() matches DATETIME2 precision; GETDATE() is only
  -- accurate to about 3 ms, too coarse for per-call timings
  SET @date = SYSDATETIME()
  SELECT @value = ColA FROM @t WHERE ColA = 1
  SET @noTran = @noTran + DATEDIFF(MICROSECOND, @date, SYSDATETIME())

  SET @date = SYSDATETIME()
  BEGIN TRAN
  SELECT @value = ColA FROM @t WHERE ColA = 1
  COMMIT TRAN
  SET @withTran = @withTran + DATEDIFF(MICROSECOND, @date, SYSDATETIME())

  SET @count = @count + 1
END

SELECT 
  @noTran / 1000000. AS Seconds_NoTransaction, 
  @withTran / 1000000. AS Seconds_WithTransaction

/** Results **/

Seconds_NoTransaction                   Seconds_WithTransaction
--------------------------------------- ---------------------------------------
14.23600000                             18.08300000

You can see there is a definite overhead associated with the transactions.

Note: this assumes these stored procedures are not using any special isolation levels or locking hints (e.g. for handling pessimistic concurrency). In that case, obviously you would want to keep the transactions.

So, to answer the question: I would leave in only the transactions that actually attempt to preserve the integrity of data modifications in case of an error in the code, SQL Server, or the hardware.

8kb
An interesting result, even though the overhead only works out at about 4 µs per call.
Martin Smith