I would like to know how I can connect remote offices to a central database.

Here is the scenario:

An office with several users working on a Windows desktop application.

This application gets its data from the database on the office server.

Now one or more remote locations (offices, stores, whatever) need access to the same data.

How can we achieve this?

Sometimes we need real-time access, and sometimes it's not that important.

+6  A: 

For security reasons you need some kind of middle layer between the client and the database; it is very dangerous to expose the database directly to the Internet. Many applications use SOAP in this scenario because it is widely adopted: you build a SOAP service, define the protocol and functions, and the client calls that. Other options are a REST-based service/server that does the same, or even a plain TCP/IP layer. So your options are:

  1. SOAP service as the middle layer
  2. REST (HTTP-based) middle layer
  3. XML-RPC based middle layer
  4. Pure TCP/IP middle layer
  5. VPN connection for clients

If the clients make a VPN connection, they can have direct database access, but this is probably too awkward for them, especially if they are not from your company. I would personally go with 2 or 3 (a minimal sketch of option 2 follows below), just because I don't like SOAP too much; most people, if asked, would probably prefer SOAP.
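For illustration, here is a minimal sketch of what option 2 could look like, using only Python's standard library. The database path, table, and column names are made up for the example, and a real deployment would sit behind HTTPS with proper authentication:

```python
# Minimal sketch of a REST-style HTTP middle layer in front of the database.
# Clients never open a database connection; they only talk HTTP to this service.
# The file path and the orders table/columns are hypothetical.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "office.db"  # hypothetical central database file

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose a single read-only endpoint as an example.
        if self.path != "/orders":
            self.send_error(404)
            return
        conn = sqlite3.connect(DB_PATH)
        try:
            rows = conn.execute(
                "SELECT id, customer, amount FROM orders").fetchall()
        finally:
            conn.close()
        body = json.dumps(
            [{"id": r[0], "customer": r[1], "amount": r[2]} for r in rows]
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # In production this would run behind HTTPS and require authentication.
    HTTPServer(("0.0.0.0", 8080), ApiHandler).serve_forever()
```

A client at a remote office would then issue HTTP GET requests against this endpoint instead of opening a direct connection to the database.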

EDIT:

Since I see these are remote offices, you can consider a permanent VPN connection. Most routers these days are capable of that. Then your application can work remotely the same way it does locally.

Runner
+1  A: 

One criterion for choosing what to adopt is the speed, reliability, and cost of the connection between the remote offices and the central server. If the speed is adequate, reliability is OK, and costs are flat, a VPN is the way to go. It won't require changes to your application (as long as the application isn't written to download the whole database...), and it will also handle authentication and encryption of your data.

If a VPN is not an option, you can use one of the remoting techniques in Runner's answer, but they usually require redesigning (and rewriting) the application, because that model is somewhat different from a "classic" client/server application. HTTP(S)-based approaches usually have no problems with firewalls, but they typically need to be stateless and don't support callbacks, and securing them requires certificate generation and deployment. Approaches using other protocols (e.g. DCOM, dbExpress, DataSnap) are more flexible, but may be more complex to set up and/or secure properly.

If the connection is not fast and reliable enough, or is too expensive, you may need local caches synchronized with the remote server, and that may not be easy to achieve, depending on the application's needs.

ldsandon
+3  A: 

There are 3 possible solution routes:

  1. Write a normal desktop application and publish it with Terminal Services (Remote Desktop Services) or Citrix XenApp (or make it web-based)
  2. (Re)design your application to work over a (possibly slow and/or unreliable) WAN connection
  3. Implement some kind of database replication and run a separate database (server) at each office
Remko
Good answer, I forgot to mention point 3. Still, you need the connection between offices in order to replicate, so you have the same issue, but yes, you gain a lot in speed and connection reliability.
Runner
If you have no experience with 2 or 3, I really recommend 1 since it's easy to implement.
Remko
+1  A: 

You could easily use DataSnap or RemObjects to wrap your existing database and expose it as a middle-tier server, then write clients against it. For connecting to it I would strongly suggest a VPN between your remote sites and your middle tier. That way you don't have to worry as much about securing the communications link...unless the data you are transferring is low-risk (not worth anything).

You will want to become very familiar with TClientDataSet. It gives you the ability to operate your remote systems without constant access to the master database, and it reduces the amount of traffic shipped across the wire (you can filter and sort locally).

Update: Much of the performance depends on the speed of your back-end server and the type and amount of data you're transferring. It is fast enough for most purposes, and if you find yourself having to send lots of data down the pipe, I have found it worthwhile to do some client caching. By adding an update trigger that sets an updated timestamp on each record, you can refresh your cache by requesting only the records newer than the last one you received (a sketch of this pattern follows below).
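For illustration, here is a minimal sketch of that timestamp-based caching pattern, using SQLite and Python rather than DataSnap/TClientDataSet; the orders table, the updated_at column, and the trigger are assumptions made up for the example:

```python
# Sketch of timestamp-based client caching: the server stamps each changed row,
# and the client asks only for rows changed since its last sync.
# Table/column names (orders, updated_at) are hypothetical.
import sqlite3

def setup_server(conn: sqlite3.Connection) -> None:
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS orders (
            id INTEGER PRIMARY KEY,
            customer TEXT,
            amount REAL,
            updated_at TEXT DEFAULT (datetime('now'))
        );
        -- Update trigger keeps the timestamp current on every change.
        CREATE TRIGGER IF NOT EXISTS orders_touch
        AFTER UPDATE ON orders
        BEGIN
            UPDATE orders SET updated_at = datetime('now') WHERE id = NEW.id;
        END;
    """)

def fetch_changes(conn: sqlite3.Connection, last_sync: str):
    # The client remembers the newest timestamp it has seen and requests
    # only records changed after that point.
    return conn.execute(
        "SELECT id, customer, amount, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_sync,),
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    setup_server(conn)
    conn.execute("INSERT INTO orders (customer, amount) VALUES ('ACME', 10.0)")
    rows = fetch_changes(conn, "1970-01-01 00:00:00")
    print(rows)  # the local cache would be refreshed from these rows
```

The client only has to remember the newest updated_at value it has seen and pass it back on the next sync.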

skamradt
I have wanted to experiment with DataSnap for a long time; this could be a good time to do it. I already use TClientDataSet like that in small tools that have XML files as data "tables". I like this solution. What is DataSnap's speed over the Internet? VPN is not an option here.
Jlouro
DataSnap speed over the Internet is the speed of your Internet connection. If speed is important to you, you need to code the middle tier with that in mind - but if your connection is slow, has high latency, or is unreliable and doesn't match your performance requirements, you need a different approach. Before deciding which technique to use, set your performance requirements (in both directions) and see if the connection can meet them; otherwise you may end up developing a perfect application only to discover that the connection can't cope with it.
ldsandon
There are network simulators that can help you test an application in an environment that simulates the real one (speed, reliability, etc.). Don't code and test on a clean, switched 1 Gb Ethernet LAN and then deploy over a crowded ADSL line or the like.
ldsandon
I wrote network "simulators"; that should have been "emulators".
ldsandon
Why is VPN not an option? Are you aware of http://openvpn.net and similar solutions?
avra
No, I was not. Thanks.
Jlouro