I've asked about each of these technologies separately, and really haven't found a suitable answer.

We have a server in our central office running SQL Server 2005 Enterprise that hosts several large databases (large in the sense that DSL is the limiting factor) that we need local copies of at each of our locations. We currently have a few dozen locations and need to bring even more online. The total number of locations we'll need to sync these databases to will be several hundred within the next 2 years.

We are trying to overcome issues with the WAN connection at each location. These are DSL lines, and the wiring at the locations isn't always the best; we currently have some locations going down as often as every hour. While we are working to resolve these issues with rewiring and assistance from the local telcos, this mainly highlights the problem at hand: we need a two-way sync that can handle being occasionally connected.

We tried transactional replication for a while, and while it worked some of the time, it was too high-maintenance for us. It seemed to error out at random with no discernible explanation, forcing us to reinitialize subscriptions (which could take upwards of 4 hours, assuming the location stayed connected long enough to get the entire snapshot in one go). We've looked at rolling our own solution from scratch, but I don't feel that would be the best idea given the scale and reliability we need.

So far we've also looked at Sync Framework and, as suggested by someone else, Service Broker. Sync Framework seems the better fit, but I've been told that Service Broker scales better and is more reliable. I can't find any empirical data on the overhead involved with either Sync Framework or Service Broker, so it's proving impossible to compare the two in this regard. (My rough understanding of what Service Broker would actually give us is sketched below.)
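As far as I understand it, Service Broker only gives us reliable, queued messaging between databases; the change capture, conflict handling, and apply logic would still be ours to build on top. Purely as a sketch of the plumbing involved (all object names below are placeholders, and the routes/endpoints needed for cross-server delivery are omitted):

    -- Hypothetical object names; this is only the messaging plumbing.
    -- Change capture, conflict resolution, and applying changes are still on us.
    CREATE MESSAGE TYPE [//OurApp/ChangeBatch] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//OurApp/SyncContract] ([//OurApp/ChangeBatch] SENT BY ANY);
    CREATE QUEUE dbo.SyncQueue;
    CREATE SERVICE [//OurApp/SyncService] ON QUEUE dbo.SyncQueue ([//OurApp/SyncContract]);

    -- Sending a batch of changes (routes and endpoints for remote delivery not shown):
    DECLARE @dialog uniqueidentifier;
    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//OurApp/SyncService]
        TO SERVICE '//OurApp/SyncService'
        ON CONTRACT [//OurApp/SyncContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//OurApp/ChangeBatch] (N'<changes>...</changes>');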

What we really need is a two-way sync between the central office server and a remote client that can run autonomously and can report to an admin in the event of a failure that requires our intervention.

There are so many possible solutions to this problem, all involving completely different technologies, that I need a fresh eye on this.

What do you think would be the optimal solution for our situation, and why?

EDIT: Obviously, upgrading to SQL Server 2008 would solve this problem easily. However, we would like to try less expensive options first.

A: 

I don't have any hard data to offer on this, but we used the Sync Framework on a project a while ago. My experience with it was really bad. It's slow (even when synchronizing relatively small tables across a LAN), scales terribly, and requires a lot of manual work to handle error conditions (it'll happily produce larger packets than WCF can handle by default -- and it can only split updates into batches when syncing one way, not the other). It also only works with a few select databases (the client must use MS SQL Compact Edition, as I recall), unless you're willing to write your own SyncAdapter.

Overall, a lot of work just to get a fragile and inefficient solution to your problem. I wouldn't recommend it.

jalf
Thank you for the reply. I've heard of similar problems with v1 of the Sync Framework, but nothing about v2 yet. I believe v2 includes a lot more SyncAdapters, including support for standard SQL Server instances, as well as more abstracted classes for creating your own adapters.
Matt
We used v2. The only client database supported out of the box is still SQL CE. Of course, it's not rocket science to write your own adapters, so this is probably the least of the problems with the framework.
jalf
A: 

I think we'll look into the SQL Server 2008 upgrade route. It seems the native change tracking support will be the easiest way to accomplish this.
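Roughly what I have in mind, assuming a hypothetical table dbo.Orders with primary key OrderID (names are placeholders): enable change tracking, then have each client pull only the changes since the version it last synced to.

    -- Enable change tracking at the database and table level (SQL Server 2008+).
    ALTER DATABASE OurDatabase
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Orders
        ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- Each client stores the version it last synced to and asks only for newer changes.
    DECLARE @last_sync_version bigint = 0;  -- would come from the client's sync metadata

    SELECT CHANGE_TRACKING_CURRENT_VERSION() AS version_to_store_for_next_sync;

    SELECT ct.SYS_CHANGE_OPERATION, ct.OrderID, o.*
    FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
    LEFT JOIN dbo.Orders AS o ON o.OrderID = ct.OrderID;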

Matt