views: 455
answers: 6
Hello,

I have a problem with some SQL queries that are wrapped inside a transaction. Here's what the code looks like:

using (SqlTransaction dbTrans = conn.BeginTransaction())
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.Transaction = dbTrans; // the command must be enlisted in the transaction

        foreach (Parameters p in parameterList)
        {
            try
            {
                // execute insert command
            }
            catch
            {
                // log exception
                // SQL Server rolls back everything
                // even though no rollback statement is present!!!
            }
        }
    }

    dbTrans.Commit();
}

I'm trying to execute some insert statements inside a transaction, but if one fails, everything gets rolled back automatically. I know that in most situations this behavior is wanted, but in my scenario it doesn't matter if a few statements don't make it. The only reason the transaction exists is to improve speed. I know about bulk insert, but unfortunately I cannot use it here, so this is what I have to work with. Could you please tell me if it's possible to disable the behavior I described?

A: 

You could write the insert command in such a way that it never fails, for example with an INSERT ... SELECT guarded by NOT EXISTS (a plain INSERT ... VALUES cannot take a WHERE clause):

insert into table1 (id, name)
select 1, 'charles'
where not exists (select * from table1 where id = 1)
Andomar
+1  A: 

Basically you want to use transactions without them being all or nothing. This is simply not possible. My blog has a post on various ways of speeding up inserts. It might be useful to you.

RichardOD
I checked your blog, but like you said, using your class may be risky. I'm already using a parameterized command and just changing the parameters' values, but I was hoping a transaction would speed things up even more. Too bad it's designed like this.
Yes, but surely the rollback is highly unlikely?
RichardOD
A: 

This is the nature of a database transaction, which is to ensure atomicity. If an INSERT operation fails, the transaction must roll back.

David Andres
I agree that this should be the default behavior, but the developer should also have the freedom to turn it off for those rare cases where it is needed.
@ASDF: To me, turning transactions off is no different than executing each statement within its own transaction (SQL Server's autocommit transactions, more or less). I understand the need for improving performance, but there's more than one way to skin that cat.
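The per-statement alternative can be sketched in C# (a minimal illustration, not the asker's actual code; it assumes an open SqlConnection conn and a hypothetical parameterList whose elements expose Id and Name properties):

// Each ExecuteNonQuery issued outside an explicit transaction runs in its
// own autocommit transaction, so a failed insert affects only that statement.
using (SqlCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = "insert into table1 (id, name) values (@id, @name)";
    cmd.Parameters.Add("@id", SqlDbType.Int);
    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50);

    foreach (Parameters p in parameterList)
    {
        cmd.Parameters["@id"].Value = p.Id;     // hypothetical properties
        cmd.Parameters["@name"].Value = p.Name;
        try
        {
            cmd.ExecuteNonQuery();              // commits (or fails) on its own
        }
        catch (SqlException)
        {
            // log and continue; earlier inserts are already committed
        }
    }
}

The trade-off is exactly the one discussed above: each statement pays its own commit cost, which is what the explicit transaction was meant to avoid.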
David Andres
+1  A: 

I think "all or nothing" is the default behaviour of a transaction and you cannot change it; besides, choosing a transaction purely for speed is not what it is designed for.

You have to optimize your code, use optimized classes, and minimize the number of database accesses.

Second thing: why can't you use the SqlBulkCopy class? Can you give more details?

Ahmed Said
I can't use SqlBulkCopy because the application needs to be independent of the database engine and, afaik, SqlBulkCopy only works with SQL Server.
@ASDF - Just factor out that functionality by allowing different pluggable strategies. Then you don't have to go with the lowest-common-denominator approach.
RichardOD
A: 

SQL Server does not actually roll back everything. It all depends on the error being raised. Some errors do abort the current transaction; some don't. For instance, a key violation error does not abort the transaction, and you can safely continue execution. See Database Engine Error Severities for details on engine error severity.

One thing to look into is whether your application changes the default SET XACT_ABORT setting. When this setting is ON, any run-time error causes the transaction to abort. The default is OFF.
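The difference can be demonstrated directly in T-SQL (a sketch against a throwaway temp table, not production code):

CREATE TABLE #t (id int PRIMARY KEY);

SET XACT_ABORT OFF;           -- the default
BEGIN TRANSACTION;
INSERT INTO #t VALUES (1);
INSERT INTO #t VALUES (1);    -- key violation: only this statement fails
INSERT INTO #t VALUES (2);    -- still runs
COMMIT;                       -- #t now holds 1 and 2

SET XACT_ABORT ON;
BEGIN TRANSACTION;
INSERT INTO #t VALUES (3);
INSERT INTO #t VALUES (3);    -- same error now rolls back the whole transaction
INSERT INTO #t VALUES (4);    -- never executes; the batch is aborted and
                              -- the insert of 3 is rolled back as well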

Remus Rusanu
This is what I find annoying: SQL Server considers some errors more severe than others. The developer should have the freedom to decide that. XACT_ABORT is off by default, so this is not an issue.
A: 

Ok guys, thanks for all the useful input on this. I guess I'll have to leave things as they are and just wait for Microsoft to add this functionality in the future, if they ever do.