views: 246
answers: 3

I am trying to write some integration tests for some SQL server stored procedures and functions. I'd like to have a database that has a known set of test data in it, and then wrap each test in a transaction, rolling it back when complete so that tests are effectively independent.

The stored procedures/functions do anything from fairly simple join queries, to complex filtering with many layers of joins, to insertion of data to multiple tables.

There are a couple of stored procedures that actually use transactions themselves - and these are harder to test. I'll show an example of the overall code that gets executed, but keep in mind this would normally live in two different places (the test setup/teardown, and the stored procedure itself). For this sample, I'm also using a very simple temp table:

CREATE TABLE #test (
  val nvarchar(500)
)

Example:

-- for this example, just ensuring that the table is empty
delete from #test
go


-- begin of test setup code --
begin transaction 
go
-- end of test setup code --

    -- begin of code under test --
    insert into #test values('aaaa')

    begin transaction 
    go

        insert into #test values('bbbbb')

    rollback transaction 
    go

    insert into #test values('ccccc')

    -- Example select #1:
    select * from #test

    -- end of code under test --

-- begin of test teardown --
rollback transaction 
go
-- end of test teardown

-- checking that #test is still empty, like it was before the test
-- Example select #2:
select * from #test

The problem here is that at "Example select #1", I would expect "aaaa" and "ccccc" to be in the table, but actually only "ccccc" is there, because SQL Server's inner ROLLBACK actually rolls back ALL open transactions (see http://abdulaleemkhan.blogspot.com/2006/07/nested-t-sql-transactions.html). In addition, the second rollback causes an error, and although this can be avoided with:

-- begin of test teardown --
if @@trancount > 0 
begin
    rollback transaction 
end
go
-- end of test teardown

it doesn't solve the real problem: at "Example select #2", we still get "ccccc" in the table -- it no longer gets rolled back because there is no transaction active.

Is there a way around this? Is there a better strategy for this type of testing?

Note: I'm not sure whether the codebase ever does anything after a rollback (the insert 'ccccc' part) -- but if it ever does, either intentionally or accidentally, tests could break in strange ways, since unexpected data can be left over from another test.


Somewhat similar to http://stackoverflow.com/questions/2073737/nested-stored-procedures-containing-try-catch-rollback-pattern but there is no real solution to the problem posed here.

+1  A: 

The rollbacks in your code don't nest; they roll everything back to the first BEGIN TRANSACTION.

For every BEGIN TRANSACTION, @@TRANCOUNT gets incremented by one; however, any ROLLBACK sets @@TRANCOUNT back to zero.

If you want to roll back only a portion of a transaction, you need to use transaction savepoints. You can look them up in BOL for more info than I can enter here.

http://msdn.microsoft.com/en-us/library/ms188378.aspx
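As a sketch of what that might look like applied to the example above (the savepoint name here is illustrative), the inner BEGIN/ROLLBACK pair becomes a named savepoint, so only the inner work is undone:

```sql
delete from #test

begin transaction  -- outer transaction (test setup)

    insert into #test values('aaaa')

    save transaction InnerWork  -- named savepoint instead of a nested BEGIN TRANSACTION

        insert into #test values('bbbbb')

    rollback transaction InnerWork  -- rolls back only to the savepoint; outer txn stays open

    insert into #test values('ccccc')

    select * from #test  -- now returns both 'aaaa' and 'ccccc'

rollback transaction  -- outer rollback (test teardown); #test is empty again
```

Note that ROLLBACK TRANSACTION with a savepoint name does not decrement @@TRANCOUNT, so the outer rollback still works as expected.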

KM
One problem with this is that it requires modifying the inner transaction - that is, the one inside the code being tested (the stored procedure). If someone doesn't use this pattern when writing a transaction in the codebase, it can still break the tests in unpredictable ways. It would be nicer if the outer transaction (part of the test framework) could be modified to make it work with a 'regular' transaction written in the code under test. Thanks though, otherwise this is a nice thing to know.
gregmac
@gregmac, TSQL is what it is. I have correctly explained why your code has problems, and what options you have.
KM
+1  A: 

I'd like to have a database that has a known set of test data in it, and then wrap each test in a transaction, rolling it back when complete so that tests are effectively independent.

Don't. For one, you won't actually test the functionality, because in the real world the procedures will commit. Second, and this is far more important, you will get a gazillion false failures and need to implement workarounds to read dirty data, because you never actually commit and therefore cannot do any proper verification.

Instead, keep a database backup containing the well-known data set, and quickly restore it before the tests. Group tests into suites that can all run on a single fresh restore without affecting each other, to reduce the number of restores needed.

You can also use database snapshots: take a snapshot at suite startup, then restore the database from the snapshot before each test; see How to: Revert a Database to a Database Snapshot (Transact-SQL).

Or combine the two methods: the suite setup (i.e. the unit-test class-level setup method) restores the database from the .bak file and creates a snapshot, then each test restores the database from that snapshot.
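A rough sketch of the combined approach (database names, logical file names, and paths are all illustrative and would need to match your environment):

```sql
-- Suite setup: restore the baseline data set, then create a snapshot of it.
USE master;
RESTORE DATABASE TestDb
    FROM DISK = N'C:\backups\TestDb_baseline.bak'
    WITH REPLACE;

-- The logical file name (TestDb_data) must match the source database's data file.
CREATE DATABASE TestDb_snap ON
    (NAME = TestDb_data, FILENAME = N'C:\snapshots\TestDb_data.ss')
AS SNAPSHOT OF TestDb;

-- Before each test: revert the database to the snapshot.
-- (Requires that no other snapshots exist and no other connections are open.)
USE master;
RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_snap';
```

Reverting from a snapshot is typically much faster than a full restore, which is what makes the per-test revert practical.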

Remus Rusanu
I should have been clearer about the setup: the actual test assertions would be done between the end of test, and before the teardown -- so at that point, data would be committed.
gregmac
The data cannot be both committed and later rolled back. At that point the data is not committed.
Remus Rusanu
A: 

I had similar issues with that kind of setup, and my take was to create a "SetupTest" script and a "ClearTest" script to run before and after test execution. Unless you're talking about a huge volume of data here -- which would make test execution too slow -- this should work well and make the tests repeatable, because you know that every time you run the test suite, the correct data will be waiting for it.
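A minimal sketch of that pair of scripts (the table and column names here are hypothetical stand-ins for whatever your procedures touch):

```sql
-- SetupTest.sql: put the database into the known state before each test.
DELETE FROM dbo.Orders;     -- clear dependents first to satisfy foreign keys
DELETE FROM dbo.Customers;

INSERT INTO dbo.Customers (CustomerId, Name)
VALUES (1, 'Alice'), (2, 'Bob');

-- ClearTest.sql: remove everything the test may have created or modified.
DELETE FROM dbo.Orders;
DELETE FROM dbo.Customers;
```

Since both scripts run outside any test-owned transaction, they are unaffected by whatever COMMITs or ROLLBACKs happen inside the stored procedures under test.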

Wagner Silveira