views:

667

answers:

2

I'm using the following code in an ASP.NET page to create a record, then count the records to make sure I haven't exceeded a set limit, and roll back the transaction if I have.

using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    session.Lock(mall, LockMode.None);

    var voucher = new Voucher();
    voucher.FirstName = firstName ?? string.Empty;
    voucher.LastName = lastName ?? string.Empty;
    voucher.Address = address ?? string.Empty;
    voucher.Address2 = address2 ?? string.Empty;
    voucher.City = city ?? string.Empty;
    voucher.State = state ?? string.Empty;
    voucher.Zip = zip ?? string.Empty;
    voucher.Email = email ?? string.Empty;
    voucher.Mall = mall;
    session.Save(voucher);

    var issued = session.CreateCriteria<Voucher>()
        .Add(Restrictions.Eq("Mall", mall))
        .SetProjection(Projections.Count("ID"))
        .UniqueResult<int>();

    if (issued >= mall.TotalVouchers)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }

    transaction.Commit();
    return voucher;
}

However, I'm getting a ton of deadlocks. I presume this happens because I'm trying to count the records in a table I just performed an insert on and a lock is still held on the inserted row, causing the deadlock.

  • Can anyone confirm this?
  • Can anyone suggest a fix?

I've tried calling SetLockMode(LockMode.None) on the final query, but that results in a NullReferenceException that I cannot figure out.

Edit: If I run the query before I save the object, it works, but then I'm not accomplishing the goal of verifying that my insert didn't somehow go over the limit (in the case of concurrent inserts).

Edit: I found that using IsolationLevel.ReadUncommitted in the session.BeginTransaction call solves the problem, but I'm no database expert. Is this the appropriate solution to the problem or should I adjust my logic somehow?

A: 

Two questions:

  1. How frequently are vouchers deleted?
  2. Any objections (beyond purity) to a db-level trigger?
JBland
Theoretically, vouchers would never be deleted or even updated. Also, I tend to avoid triggers for performance reasons (although purity is a good one too).
Chris
But let's say vouchers get deleted often. What would be the proper way to ensure that by inserting a voucher, I've never exceeded some arbitrary limit?
Chris
If vouchers didn't get deleted, I'd just check (mall.Vouchers.Count < mall.TotalVouchers) before attempting the insert.
JBland
JBland - what if 100 other connections are inserting rows, and (say) three inserts happen in the time between the count and the insert for our connection?
onupdatecascade
+2  A: 

That design would be deadlock prone - typically (not always) one connection is unlikely to deadlock itself, but multiple connections that do inserts and aggregates against the same table are very likely to deadlock. That's because while all activity in one transaction looks complete from the point of view of the connection doing the work -- the db won't lock a transaction out of "its own" records -- the aggregate queries from OTHER transactions would attempt to lock the whole table or large portions of it at the same time, and those would deadlock.

Read Uncommitted is not your friend in this case, because it basically says "ignore locks," which at some point will mean violating the rules you've set up around the data. That is, the count of records in the table will be inaccurate, and you'll act on that inaccurate count: your count will return 10 or 13 when the real answer is 11.

The best advice I have is to rearrange your insert logic such that you capture the idea of the count, without literally counting the rows. You could go a couple of directions. One idea I have is this: literally number the inserted vouchers with a sequence and enforce a limit on the sequence itself.

  1. Make a sequence table with columns (I am guessing) MallID, nextVoucher, maxVouchers
  2. Seed that table with the mallids, 1, and whatever the limit is for each mall
  3. Change the insert logic to this pseudo code:
Begin Transaction
Sanity check the nextVoucher for Mall in the sequence table; if too many exist, abort
If less than MaxVouchers for Mall then {
  check, fetch, lock and increment nextVoucher
  if increment was successful then use the value of nextVoucher to perform your insert. 
    Include it in the target table.
}
Error? Rollback
No Error? Commit

A sequence table like this hurts concurrency some, but I think not as much as constantly counting the rows in the table. Be sure to perf test. Also, the [check, fetch, lock and increment] step is important: you have to exclusively lock the row in the sequence table to prevent some other connection from using the same value in the split second before you increment it. I know the SQL syntax for this, but I'm afraid I am no NHibernate expert.
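For what it's worth, here is a rough, untested sketch of that pseudo code in NHibernate. The MallSequence entity and its property names (NextVoucher, MaxVouchers) are assumptions to be adapted to your own mapping; the key idea is fetching the sequence row with LockMode.Upgrade, which on SQL Server emits a SELECT with an update lock hint:

```csharp
using (var session = NhibernateHelper.OpenSession())
using (var transaction = session.BeginTransaction())
{
    // LockMode.Upgrade issues SELECT ... WITH (UPDLOCK, ROWLOCK) on
    // SQL Server, so no other connection can read this row and grab
    // the same value before we commit or roll back.
    var seq = session.Get<MallSequence>(mall.ID, LockMode.Upgrade);

    if (seq.NextVoucher > seq.MaxVouchers)
    {
        transaction.Rollback();
        throw new VoucherLimitException();
    }

    // Number is a hypothetical column storing the sequence value
    // on the voucher itself.
    var voucher = new Voucher { Mall = mall, Number = seq.NextVoucher };
    seq.NextVoucher++;   // NHibernate flushes this change on commit

    session.Save(voucher);
    transaction.Commit();
    return voucher;
}
```

Because the sequence row stays exclusively locked from the Get until the Commit, concurrent inserts for the same mall serialize on that one row instead of deadlocking over aggregate scans of the voucher table.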

For read uncommitted data errors, check this out: http://sqlblog.com/blogs/merrill_aldrich/archive/2009/07/29/transaction-isolation-dirty-reads-deadlocks-demo.aspx (disclaimer: Merrill Aldrich is me :-)

onupdatecascade
Bingo! I knew the aggregate following the insert was likely the source of the deadlock, but I wasn't sure of the best way to avoid it. Though it's somewhat clunky, I've added a sequence table from which I retrieve the mall's row with an update lock on each attempt. I then increment the sequence number, write the row back, and perform the voucher insert, all within a Read Committed transaction. Extensive performance testing shows this will not be a problem and it does indeed enforce the business rule correctly. Thanks for the input!
Chris
No Prob - glad it works for you
onupdatecascade