I have an application with a client/server/database architecture. Communication between client and server is WCF (migrated from ASMX) and the database is SQL Server 2005.

The project has a requirement that you cannot update an order which has been changed (updated) by another user after your initial read. A common requirement in most applications, I think.

An update of an order usually goes like this:

  1. Client reads an order - an initially-read copy is stored (in Session) on the server
  2. Client updates the order - returns the updated order to the server
  3. Server reads the order again from the database and compares it with the initially-read copy to check whether the order has been changed by another user - in which case the client is notified to re-read the order
  4. Server saves the changes

This way of handling data changes means that at a certain point (3), the server has 3 (different) copies of the order in memory! Does anyone know another strategy for this?

We are running WCF with ASP.NET compatibility mode enabled, because we need the Session variable to "hold" the initially-read copies - it would make my day if we could drop that.

+1  A: 

What's to prevent a concurrent change from occurring after 3 and before 4?

The usual way to handle this is to eliminate step 3 (it is incorrect anyway unless done under the repeatable read isolation level, and that is complete overkill) and apply the changes optimistically, hoping that nothing has changed (i.e. the optimistic concurrency model). To enforce that indeed nothing has changed, you either use a WHERE clause that contains all the old values, or add to the WHERE clause a special column that changes on every update, like a timestamp or a row version.

If the update was a no-op (i.e. it did not find the old value(s), which can be checked in various ways, such as checking @@ROWCOUNT or using an OUTPUT clause), then you can read the new, modified values and notify the client - only in that exception case.
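To make that concrete, here is a minimal sketch of the optimistic update from managed code, using the "all old values in the WHERE clause" variant; the table and column names (Orders, OrderId, Status) are illustrative assumptions, not your actual schema:

using System.Data.SqlClient;

// Hedged sketch: returns true if the optimistic update succeeded, false if the
// order was changed by another user since the client read it.
static bool UpdateOrderOptimistically(
    string connectionString, int orderId, string oldStatus, string newStatus)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"UPDATE Orders
             SET Status = @NewStatus
           WHERE OrderId = @OrderId
             AND Status  = @OldStatus", conn))
    {
        cmd.Parameters.AddWithValue("@OrderId", orderId);
        cmd.Parameters.AddWithValue("@NewStatus", newStatus);
        cmd.Parameters.AddWithValue("@OldStatus", oldStatus);

        conn.Open();
        // ExecuteNonQuery returns the number of rows affected - the managed-code
        // equivalent of checking @@ROWCOUNT. Zero rows means the old values were
        // no longer there, i.e. a concurrent change happened and the caller
        // should re-read the row and notify the client.
        return cmd.ExecuteNonQuery() == 1;
    }
}

With the OUTPUT-clause variant, the same statement would return a row only when the update actually happened, so an empty result set signals the conflict.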

Remus Rusanu
+1  A: 

A stateful service such as the one you have implemented is a big service antipattern. As a general principle, web services should be stateless, otherwise your scalability may suffer. For optimistic locking, use a timestamp column on the table. Return it to the client as a concurrency token, have the client send it back unchanged, and compare it with the value in the DB before updating. I am rusty on SQL Server, but Oracle has SELECT ... FOR UPDATE statements that can help you.

If the data is distributed across many tables, consider a suitable locking strategy using a stored procedure.
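As a rough illustration of the concurrency-token idea (the names OrderDto and RowVer are mine, not part of your existing contracts): the ROWVERSION value is read together with the order, carried in the data contract, and handed back unchanged on save so the server can compare it before updating.

using System.Runtime.Serialization;

// Hedged sketch of a DTO carrying a concurrency token.
[DataContract]
public class OrderDto
{
    [DataMember]
    public int OrderId { get; set; }

    [DataMember]
    public decimal Amount { get; set; }

    // Value of the table's ROWVERSION/TIMESTAMP column at read time.
    // The client never touches it; it only sends it back with the update.
    [DataMember]
    public byte[] RowVer { get; set; }
}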

Pratik
+1  A: 

One solution is to have the client supply both the initially-read values and the updated values when saving. Then you don't need a copy of the original values in session.

DataSets have the built-in capability to store both versions (DataRowVersion.Original and DataRowVersion.Current), but you'll have to provide your own method to do this, e.g. an OperationContract:

SaveMyData(MyType original, MyType updated);

You can then save to the database thus:

UPDATE MyTable
SET Col1 = @NewCol1, Col2 = @NewCol2, ...
WHERE Col1 = @OldCol1 AND Col2 = @OldCol2 AND ...
IF @@ROWCOUNT = 0 ... update failed ...

Alternatively you can have a TIMESTAMP / ROWVERSION column in your table. You roundtrip this to the client, and test it when updating:

UPDATE MyTable
SET Col1 = @NewCol1, Col2 = @NewCol2, ...
WHERE PKCol = @PK AND TimeStampCol = @OldTimeStamp
IF @@ROWCOUNT = 0 ... update failed ...

You are of course relying on the client to correctly return the original values / original timestamp when saving. But this isn't a security issue - a malicious client can't do any more damage than it could with your session-based solution.

Joe
Hi Joe, I think this is a good approach, but sending both versions (original and current) means additional data which must be sent back and forth to the server - and we are already struggling with quite an amount of data. I prefer your timestamp version.
A: 

I like Joe's approach - that's pretty much what I would recommend, too.

In your initial read, send back the value of a TIMESTAMP column from your main table to the client, either in the actual DataContract, or as a header in the WCF response message.

When you want to update the data, send that initial timestamp value from the client back to the server. The server will then first check whether that timestamp value has changed, and if so, throw a FaultException and not update the data. Only if the timestamp value is still the same as the one the client sends back in the UPDATE call would the server actually perform the update.

I would recommend using the SQL Server TIMESTAMP data type (which, by the way, doesn't really have anything to do with date and/or time - it's just a unique, ever-increasing number) because it's a lot more accurate than a DATETIME, and it gets updated automatically by SQL Server every time the row in the table is written to. It makes a perfect "marker" for checking for updates.

With this approach, all you need to pass around is an 8-byte timestamp value - no need to have three exact copies of the entire data row.
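A minimal sketch of that server-side step, folding the check and the update into one statement (as in Joe's answer) so nothing can slip in between the check and the write; the Orders table, the RowVer ROWVERSION column and the fault message are illustrative assumptions:

using System.Data.SqlClient;
using System.ServiceModel;

// Hedged sketch: the UPDATE only hits the row if the rowversion the client
// sent back is still the current one; otherwise the service faults and the
// client knows it must re-read the order.
public void SaveOrder(string connectionString, int orderId, decimal amount, byte[] rowVer)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"UPDATE Orders
             SET Amount = @Amount
           WHERE OrderId = @OrderId
             AND RowVer  = @RowVer", conn))
    {
        cmd.Parameters.AddWithValue("@Amount", amount);
        cmd.Parameters.AddWithValue("@OrderId", orderId);
        cmd.Parameters.AddWithValue("@RowVer", rowVer);

        conn.Open();
        if (cmd.ExecuteNonQuery() == 0)
        {
            // No row matched: the order was updated (or deleted) by someone else.
            throw new FaultException(
                "The order was changed by another user - please re-read it.");
        }
    }
}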

See this great article "Understanding TIMESTAMP (ROWVERSION) in SQL Server".

Marc

marc_s