Hi,

We are thinking of making some architectural changes in our application, which might affect the technologies we'll be using.

The change I'm referring to in this post is as follows:

We've found that some parts of our application share common data and common services, so we extracted those into a GlobalServices service with its own master data db. This service will probably have its own cache, so that it won't have to retrieve data from the db on each call. Now, when one client makes a call to that service that updates data, other clients might be interested in that change, or might not. That depends on whether we decide to keep a cache on the clients too.

Meaning that if the clients have their own local cache, they will have to be notified somehow (and first register for those notifications). If not, they will always get the data from the GlobalServices service.
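To make the trade-off concrete, here is a minimal sketch (in Python, purely for illustration; the names NotifyingCache and the fetch callback are hypothetical, not part of any real API) of a client-side cache that falls back to the remote service on a miss and drops stale entries when a change notification arrives:

```python
import threading


class NotifyingCache:
    """Hypothetical local cache: serves hits locally, falls back to the
    service on a miss, and evicts entries on change notifications."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable that asks GlobalServices for a value
        self._store = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            if key in self._store:   # cache hit: no call to the service
                return self._store[key]
        value = self._fetch(key)     # cache miss: retrieve from the service
        with self._lock:
            self._store[key] = value
        return value

    def on_change_notification(self, key):
        # Called when the service signals "key changed" (e.g. via
        # SqlCacheDependency or a WCF callback); drop the stale entry
        # so the next get() re-fetches fresh data.
        with self._lock:
            self._store.pop(key, None)


# Usage: count how many times the "service" is actually hit.
calls = []

def fetch_from_service(key):
    calls.append(key)
    return key.upper()

cache = NotifyingCache(fetch_from_service)
cache.get("a")                   # miss: one service call
cache.get("a")                   # hit: served locally
cache.on_change_notification("a")
cache.get("a")                   # invalidated, so one more service call
```

Whether that invalidation signal travels via SqlCacheDependency or a WCF duplex callback is exactly the choice in question 2; the cache side looks the same either way.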

I need your educated advice here guys:

1) Is it a good idea to keep a local cache on the clients to begin with?
2) If we do decide to keep a local cache on the clients, would you use
   SqlCacheDependency to notify the clients, or would you use WCF for
   notifications? (Each might have its pros and cons.)

Thanks a lot folks,

Avi

A: 

I like the sound of your SqlCacheDependency idea, but I will answer this from a different perspective, as I have worked with a team on a similar scenario. We created a master database and used triggers to create XML representations of data being changed in the master, storing them in a TransactionQueue table along with some metadata about what changed, when, and who changed it. The client databases would periodically check the queue for items they were interested in, process the XML, and update their own tables as necessary.

We also did the same in reverse for the client to update the master. We set up triggers and a TransactionQueue table on the client databases to send data back to the master. This in turn would update all of the other client databases when they next poll.
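The trigger-plus-queue pattern described above can be sketched in miniature. This is an illustrative reconstruction only (the answer used SQL Server; this demo uses SQLite via Python's sqlite3 module, and the table and column names are assumptions, not the original schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (Id INTEGER PRIMARY KEY, Name TEXT);

-- Queue of changes, with a little metadata about what changed and when.
CREATE TABLE TransactionQueue (
    Id        INTEGER PRIMARY KEY AUTOINCREMENT,
    TableName TEXT,
    RowId     INTEGER,
    Payload   TEXT,                              -- XML snapshot of the change
    ChangedAt TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Trigger builds an XML representation of the updated row and queues it.
CREATE TRIGGER Customer_AfterUpdate AFTER UPDATE ON Customer
BEGIN
    INSERT INTO TransactionQueue (TableName, RowId, Payload)
    VALUES ('Customer', NEW.Id,
            '<Customer><Name>' || NEW.Name || '</Name></Customer>');
END;
""")

conn.execute("INSERT INTO Customer VALUES (1, 'Alice')")
conn.execute("UPDATE Customer SET Name = 'Bob' WHERE Id = 1")

# A client would poll like this, apply each payload to its own tables,
# then delete (or mark) the processed queue rows.
rows = list(conn.execute(
    "SELECT TableName, RowId, Payload FROM TransactionQueue ORDER BY Id"))
```

The same structure, mirrored on the client databases, gives the reverse client-to-master flow the answer describes.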

The nice thing about this is that it is fairly agnostic about client platform and client data structure, so we were able to use the method on a range of legacy and third-party systems. The other great point is that you can take any of the databases out of the loop (including the master, e.g. on connection failure) and the others will still work fine. This suited us well, as our master database was behind our corporate firewall while the simpler web databases were sitting with our ISP.

There are obviously cons to this approach, such as race conditions, so we were careful with the order of transaction processing, error handling, de-duplication, etc. We also built a management GUI to provide a human interaction layer before important data was changed in the master.

Good luck! Tim

TimS
Well, that sounds interesting and pretty complicated, a solution that only a DBA could provide :) I'm afraid we don't have that knowledge here in our fledgling company. Personally, I'm not someone who favors putting too much logic into the database. But if it worked for you, then that's great, and it's good to know of such a way of working. Thanks Tim!
Avi Shilon