I have an application that potentially performs thousands of inserts into a SQL Server 2005 database. If an insert fails for any reason (foreign key constraint, field length, etc.), the application is designed to log the insert error and continue.

Each insert is independent of the others, so transactions aren't needed for database integrity. However, we do wish to use them for the performance gain. When I use transactions, I get the following error on about 1 out of every 100 commits.

This SqlTransaction has completed; it is no longer usable. at System.Data.SqlClient.SqlTransaction.ZombieCheck() at System.Data.SqlClient.SqlTransaction.Commit()
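
Here's roughly the pattern in use (the table, columns, and helper names - Record, Log, connectionString - are made up for illustration):

// using System.Data.SqlClient;
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();

    foreach (Record record in records)
    {
        try
        {
            using (SqlCommand command = connection.CreateCommand())
            {
                command.Transaction = transaction;
                command.CommandText =
                    "INSERT INTO ImportTarget (Name, CreatedOn) VALUES (@name, @createdOn)";
                command.Parameters.AddWithValue("@name", record.Name);
                command.Parameters.AddWithValue("@createdOn", record.CreatedOn);
                command.ExecuteNonQuery();
            }
        }
        catch (SqlException ex)
        {
            Log(ex); // log the failed insert and continue with the next record
        }
    }

    transaction.Commit(); // roughly 1 in 100 of these throws the error above
}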

To try to track down the cause, I put trace statements at every transaction operation so I could ensure the transaction wasn't being closed before calling Commit. I've confirmed that my app wasn't closing the transaction. I then ran the app again using the exact same input data, and it succeeded.

If I turn the logging off it fails again. Turn it back on and it succeeds. This on/off toggle is done via the app.config without the need to recompile.

Obviously the act of logging changes the timing and causes it to work. This would indicate a threading issue. However, my app isn't multi-threaded.

I've seen one MS KB entry indicating that a bug in the .NET 2.0 Framework could cause similar issues (http://support.microsoft.com/kb/912732). However, the fix it provides doesn't solve this issue.

+4  A: 

Difficult to help without seeing code. I assume from your description that you are using a transaction to commit after every N inserts, which will improve performance over committing each insert individually, provided N is not too big.

But the downside is: if an insert fails, any other inserts within the current batch of N will be rolled back when you roll back the transaction.

In general you should dispose a transaction before closing the connection (which will roll back the transaction if it hasn't been committed). The usual pattern looks something like the following:

using(SqlConnection connection = ...)
{
    connection.Open();
    using(SqlTransaction transaction = connection.BeginTransaction())
    {
        ... do stuff ...
        transaction.Commit(); // commit if all is successful
    } // transaction.Dispose will be called here and will rollback if not committed
} // connection.Dispose called here
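
If you're committing after every N inserts, the same pattern extends to something like this sketch (batchSize, Record, and the DoInsert helper are placeholders):

int batchSize = 100;                               // placeholder value for N
int count = 0;
SqlTransaction transaction = connection.BeginTransaction();
try
{
    foreach (Record record in records)
    {
        DoInsert(connection, transaction, record); // placeholder insert helper
        if (++count % batchSize == 0)
        {
            transaction.Commit();                  // commit the completed batch
            transaction.Dispose();
            transaction = connection.BeginTransaction();
        }
    }
    transaction.Commit();                          // commit any remaining inserts
}
finally
{
    transaction.Dispose(); // rolls back the final batch if it wasn't committed
}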

Please post code if you need more help.

Joe
I normally use this model. However, in this case I specifically don't want the other operations to be rolled back. I merely want to log the issue and resume. The transaction is only being used as a performance boost.
aef123
+1  A: 

Keep in mind that your application isn't the only participant in the transaction - SQL Server is involved as well.

The error you quote:

This SqlTransaction has completed; it is no longer usable. at System.Data.SqlClient.SqlTransaction.ZombieCheck() at System.Data.SqlClient.SqlTransaction.Commit()

doesn't indicate the transaction has committed, only that it is complete.

My first guess is that the server killed off the transaction because it either took too long (elapsed wall-clock time) or grew too large (too many changes or too many locks).

My second suggestion is to check that you're cleaning up connections and transactions appropriately. It's possible that you're running into problems because you are occasionally exhausting a pool of some resource before things get automatically recycled.

For example, DbConnection implements IDisposable, so you need to ensure you clean up appropriately - with a using statement if you can, or by calling Dispose() directly if you can't. DbCommand is similar, as it also implements IDisposable.
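
A minimal sketch of what that looks like (the query is made up):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM SomeTable", connection))
{
    connection.Open();
    int rowCount = (int)command.ExecuteScalar();
} // command and connection are both disposed here, even if an exception was thrown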

Bevan
A: 

Are you committing the same transaction twice in a row?

Sam Saffron
A: 

Well, first, IMO you shouldn't expect your app to deal with "hard" errors like these. Your application should understand what the business rules are and account for them. DON'T force your database to be the business-rule cop or the constraint cop. It should only be a data-rule cop, and even then it should tell the interface about problems gracefully, with RETURN values and other methods. Schema-level problems shouldn't be forced up that far.

To answer your question: I suspect, without seeing any of your code, that you are trying to commit when an error has already occurred and you don't know it yet. This is the reasoning behind my first statement... by trying to trap the errors the database gives without having your application understand and participate in those rules, you're giving yourself a headache.

Try this. First, insert rows that won't fail with a database constraint or any other error, and process the inserts with a commit; see what happens. Then shove in some records that will fail, process a commit, and see if you get your lovely error. Third, run the failing records again, do a forced rollback, and see if you succeed.
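
Something like this sketch (InsertRow, Log, and the record variables are placeholders):

SqlTransaction transaction = connection.BeginTransaction();
try
{
    InsertRow(connection, transaction, goodRecord); // step 1: clean data, commits fine
    InsertRow(connection, transaction, badRecord);  // step 2: violates a constraint
    transaction.Commit();                           // do you get your lovely error here?
}
catch (Exception ex)
{
    Log(ex);
    // Step 3: forced rollback. Note that Rollback itself can throw if the
    // server has already rolled the transaction back.
    transaction.Rollback();
}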

Just some ideas. As a summary, I think this comes down to expecting the front end to deal with "hard" database errors without trapping them gracefully. Again, my opinion: NOPE, don't do it. It's a mess. Your app needs to overlap in knowledge with the rules on the back end. Those constraints are there as a safety net to catch problems during testing; when the occasional bug like this surfaces, the fix is to teach the front end to handle it - for example, the forgotten lookup against a foreign key table.

Hope it helps.

SnapJag
Normally I'd agree with you about the app dealing with business rules. However, in this case the app is a data conversion tool. The end user sets up all the business rules, and it is the user's responsibility to build their conversion in a way that the DB accepts. This is why I need to log and continue.
aef123
Agreed as well. My answer is meant to explain why the problem is occurring: IMO the application hasn't accounted for a failing insert. Execute the stored proc as a statement in SQL Enterprise Manager (bypassing the app) and see what the results are, successful or not.
SnapJag
A: 

You have proved your data is OK beyond all reasonable doubt.
FWIW, I would prefer to move the insert into a stored procedure and not use the transaction at all.
If you need the UI to be responsive, use a background worker to do the database grunt work.
To me, a transaction is for interrelated activities, not a time-saving device. The insertion cost has to be paid somewhere along the line.
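
Something along these lines (RunInserts and LogError are placeholders for the database work and your logging):

// using System.ComponentModel;
BackgroundWorker worker = new BackgroundWorker();
worker.DoWork += delegate { RunInserts(); }; // the database grunt work, off the UI thread
worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Error != null)
        LogError(e.Error);                   // any failure surfaces here, back on the UI thread
};
worker.RunWorkerAsync();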

I recently used ANTS Profiler on a database application and was amazed to see intermittent SqlClient exceptions showing up in solidly performing code. The errors are deep in the framework, when opening a connection; they never make it to the surface and aren't detectable by the client code. So... the point?
It's not all rock solid out there; move the heavy work off the UI and accept the cost. HTH, Bob

+1  A: 

Thanks for all the feedback. I've been working with someone from MSFT on the MSDN forums to figure out what's going on. It turns out the issue is caused by one of the inserts failing because of a datetime conversion problem.

The strange part is that this error only shows up for a date conversion error; another error, such as a field being too long, doesn't cause the issue. In either case I would expect the transaction to still exist so I can call Rollback on it.
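
To illustrate the difference (the table is made up - Name is VARCHAR(10), CreatedOn is DATETIME - and ExecuteInsert is a placeholder that runs the SQL on the shared transaction):

SqlTransaction transaction = connection.BeginTransaction();

try
{
    // Field-too-long error: the transaction survives and can still be
    // committed or rolled back afterwards.
    ExecuteInsert(connection, transaction,
        "INSERT INTO ImportTarget (Name) VALUES ('this value is far too long')");
}
catch (SqlException ex) { Log(ex); }

try
{
    // Datetime conversion error: SQL Server rolls the transaction back on the
    // server, and the client-side SqlTransaction becomes a zombie.
    ExecuteInsert(connection, transaction,
        "INSERT INTO ImportTarget (CreatedOn) VALUES ('not a date')");
}
catch (SqlException ex) { Log(ex); }

transaction.Commit(); // throws the ZombieCheck error after the second insert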

I have a full sample program to replicate this issue. If anyone wishes to see it, or the exchange with MSFT, you can find the thread in Microsoft's newsgroup microsoft.public.dotnet.framework.adonet under the title "SqlTransaction.ZombieCheck error".

aef123
The final result was that SQL Server automatically rolls back transactions on what it considers serious errors, and a datetime conversion error is considered one of these. Currently there is no way to stop SQL Server from terminating your transaction for this kind of error.
aef123
A: 

This exception is thrown because the actual DB transaction has already been rolled back, so the .NET object representing it on the client side is a "zombie".

A more detailed explanation is here; the post explains how to write correct transaction rollback code for such scenarios.
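
The gist is a sketch like this: check whether the transaction is already a zombie before touching it, and tolerate the race where the server rolls it back underneath you:

try
{
    // SqlTransaction.Connection returns null once the transaction has been
    // committed or rolled back (i.e. once it's a zombie).
    if (transaction.Connection != null)
        transaction.Rollback();
}
catch (InvalidOperationException)
{
    // The server rolled the transaction back between the check and the call.
    // Safe to ignore: the work is already rolled back.
}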

Alex Yakunin