views: 114
answers: 4

I've got a problem I'm not sure how best to solve.

I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions.

  • For this request the business layer first requests a set of data from the data layer.
  • The business logic layer evaluates the data from the database along with the parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created.
  • The business layer now executes the actual update command that is the purpose of the request.

This last step is the problem: the command depends on the state of the database, which might have changed since the business logic ran. Locking the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
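
To make the timing problem concrete, a simplified version of the sequence might look like this (the table and column names are invented for illustration):

-- Step 1: the business layer reads the current state
SELECT Status, Quantity FROM Orders WHERE OrderID = @OrderID;

-- Step 2: business logic runs in the application, decides the action
--         and builds the response message(s)

-- Step 3: the update runs, but by now another request may already
--         have changed the row that was read in step 1
UPDATE Orders SET Status = @NewStatus WHERE OrderID = @OrderID;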

+2  A: 

In simple terms, when you execute the update command you are concerned that the database may have changed?

Then call stored procedures that are written defensively and will only update if the data is in an acceptable state when they are called (by checking the foreign key references, data integrity etc.).
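
A minimal sketch of what such a defensive procedure might look like (SQL Server syntax; the table, column, and parameter names are hypothetical):

CREATE PROCEDURE dbo.UpdateOrderStatus
    @OrderID INT,
    @NewStatus INT
AS
BEGIN
    -- Only update when the row is still in a state the business rules allow
    UPDATE dbo.Orders
    SET Status = @NewStatus
    WHERE OrderID = @OrderID
      AND Status IN (1, 2);   -- the states from which this update is valid

    -- Report back whether the precondition still held
    IF @@ROWCOUNT = 0
        RAISERROR('Order is no longer in an updatable state.', 16, 1);
END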

Let me know if I can help in mocking up some aspect of this.

amelvin
Yes, but this puts business logic (which I'd like to be able to change easily) in a stored procedure.
Paul
@Paul, not if the stored proc is only checking to validate that data hasn't changed. I would argue that if all you are doing in the BL is checking for conditions to determine the correct table/columns to update, that should be in a stored proc anyway.
AllenG
Checking that referential integrity and foreign keys will be OK is best practice; having the db implicitly support business logic through structure and metadata (rather than explicitly through contrived procedural code) is also good practice.
amelvin
+2  A: 

You could store the original state of the modified business objects and compare the original objects to their database counterparts to check if anything has been changed.

If changes have been made, you can either merge the objects based on the original, modified, and stored (database) versions, or cancel the update and tell the client that it has failed.
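
One way to sketch that comparison in SQL (the column names are invented; the @Original* parameters hold the values read when the business objects were loaded):

UPDATE dbo.Orders
SET Status = @NewStatus,
    Quantity = @NewQuantity
WHERE OrderID = @OrderID
  AND Status = @OriginalStatus        -- values as they were when read
  AND Quantity = @OriginalQuantity;

-- Zero rows affected means the database copy has changed underneath us:
-- either merge the objects or report the failed update to the client
IF @@ROWCOUNT = 0
    RAISERROR('Update conflict: the data has been modified by someone else.', 16, 1);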

Prutswonder
Here are some Google keywords: the strategy described above is also called 'optimistic concurrency control'. In its more sophisticated incarnations we also talk about 'choreography', 'business process management' and 'compensating transactions'; see also 'sagas'. The alternative is to lock the data (the whole database, or only what you touched/saw, with a full lock or a write-only lock) when you start the transaction and have everybody wait until the operation is completed - this is the basis of what is called 'pessimistic concurrency control'.
ddimitrov
A: 

This is kind of difficult because there are not many specifics in the question, so I'll just give a simple example that you may be able to apply to your situation.

Load all the data as well as the last changed date (yyyy-mm-dd hh:mi:ss.mmm)

SELECT AAA,BBB,LastChgDate FROM YourTable WHERE ID=xxxxxx

do your business logic

save the data

UPDATE YourTable SET AAA=aaaaa,BBB=bbbbb WHERE ID=xxxxxx AND LastChgDate=zzzzzz

If the row count != 1, raise an error: someone else has changed the data. Otherwise, the data is saved.
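
Putting that check together in T-SQL (reusing the names from the example above; @@ROWCOUNT has to be read immediately after the UPDATE):

UPDATE YourTable
SET AAA = @aaaaa,
    BBB = @bbbbb,
    LastChgDate = GETDATE()   -- bump the timestamp so later writers see the change
WHERE ID = @xxxxxx
  AND LastChgDate = @zzzzzz;

IF @@ROWCOUNT != 1
    RAISERROR('The row was changed by someone else since it was read.', 16, 1);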

KM
This *pattern* is called optimistic concurrency control: http://en.wikipedia.org/wiki/Optimistic_concurrency_control
Steve Schnepp
A: 

Use a proper transaction isolation mode and do everything in a single database transaction (i.e. start the transaction in step 1 and commit after step 3).

Your question is a little bit vague, but my guess is that you need either SNAPSHOT or READ COMMITTED mode.
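
A rough sketch of that approach in SQL Server, assuming SNAPSHOT isolation has been enabled on the database (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON) and using the same hypothetical table and columns as the earlier example:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;

-- Step 1: read the data the business logic needs
SELECT AAA, BBB FROM YourTable WHERE ID = @xxxxxx;

-- Step 2: business logic runs against this consistent snapshot

-- Step 3: the update; if another transaction has modified the row in
--         the meantime, SQL Server raises an update-conflict error
--         (3960) instead of silently overwriting the other change
UPDATE YourTable SET AAA = @aaaaa, BBB = @bbbbb WHERE ID = @xxxxxx;

COMMIT TRANSACTION;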

Milan Babuškov