views:

57

answers:

2

The in-house developed software where I work connects directly to a mysql server here in our office via our devexpress orm (XPO). Performance is great.

We're opening another office... cross country. Performance: not so great. The requirement is that the software be as responsive in both offices as it is in this office and that the data from one office be available to the other 'in real time'.

Something of this scale is totally new to me. I'm not averse to bringing in a consultant who's done something like this before, but I'd like to get a good picture of the options first. I'm sure that this is a common situation.

Is replication a good idea? Is it fast enough? Stable enough?

Are there development patterns that tackle this kind of situation if replication won't work?

Heck, I'm not even sure how to tag this, so if someone knows better... please, feel free to re-tag

EDIT > Details about the data

I guess, compared to some enterprise software, we're not moving lots of data. The software manages customer accounts, appointments etc. and each user works on about 2-5 separate accounts/minute (50 users currently, 200-400 after planned expansion), updating data each time.

The real-time aspect comes into play when someone in office A creates an appointment for someone in office B who, ideally, needs to be able to view its details near immediately (<2 mins). That said, each record usually only gets mutated a maximum of 5 times a day. But that's only what I suspect; I don't actually have any usage statistics on me.

+1  A: 

One of your last resorts is, of course, to make sure that all heavy-duty work is done in background threads so that the GUI thread is never blocked.
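A minimal sketch of that pattern, in Python for illustration (the names and the simulated delay are made up): run the slow database call on a worker thread and hand the result back through a queue, so the GUI thread can keep pumping events instead of blocking on the round trip.

```python
# Hypothetical sketch: offload a slow DB call to a background thread and
# hand the result back via a queue, keeping the "GUI" thread responsive.
import threading
import queue
import time

results = queue.Queue()

def slow_db_call(account_id):
    time.sleep(0.1)  # stand-in for a cross-country database round trip
    return {"account": account_id, "status": "loaded"}

def worker(account_id):
    # Runs off the GUI thread; posts the result for the GUI to pick up.
    results.put(slow_db_call(account_id))

t = threading.Thread(target=worker, args=(42,), daemon=True)
t.start()
# The GUI thread is free to process events here instead of blocking.
t.join()
res = results.get()
print(res)  # {'account': 42, 'status': 'loaded'}
```

In a real desktop app you would poll the queue from a UI timer (or use the toolkit's "invoke on UI thread" mechanism) rather than calling `join()`.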

Whether "real-time" is achievable depends on the data, and a detailed description is missing here. How much data are we talking about per request (i.e. how large are the objects)? How fast is your internet connection (could it be the bottleneck)? Are the mysql server, and all the infrastructure in between that you control, well configured? How static or dynamic is the data? Whether the real-time data gets mutated once a day or a zillion times a day matters a lot for the "solution".

Henri
updated to clarify the kinds and amounts of data being used, and how often they're mutated. The network connection we currently use is a 5 Mb line, which could be the slow point, but at the moment it only serves 3 users in the remote office.
SnOrfus
+1  A: 

You can't use asynchronous replication in both directions without creating replication conflicts that are impossible to resolve and that break things.

Therefore, your obvious choice is read/write splitting: have the application do noncritical reads from a (read-only) local DB and direct all writes to the master. The disadvantage is that you can't immediately read back your own writes.
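The split can live in a thin routing layer between the ORM and the connections. A minimal sketch, in Python with sqlite3 databases standing in for the remote MySQL master and the local replica (class and table names are illustrative, not from the question):

```python
# Sketch of read/write splitting: all writes go to the master connection,
# noncritical reads go to the local replica. sqlite3 in-memory databases
# stand in for the two MySQL servers.
import sqlite3

class SplitRouter:
    def __init__(self, master, replica):
        self.master = master    # remote master: all writes go here
        self.replica = replica  # local read-only replica: noncritical reads

    def write(self, sql, params=()):
        cur = self.master.execute(sql, params)
        self.master.commit()
        return cur

    def read(self, sql, params=()):
        # The replica may lag the master by the replication delay, so
        # don't use it to read back a row you just wrote.
        return self.replica.execute(sql, params).fetchall()

master = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (master, replica):
    db.execute("CREATE TABLE appointments (id INTEGER, office TEXT)")

router = SplitRouter(master, replica)
router.write("INSERT INTO appointments VALUES (?, ?)", (1, "A"))

# Until replication catches up, the local replica won't see the new row:
print(router.read("SELECT * FROM appointments"))  # []
```

This also makes the "can't read your own writes" caveat concrete: reads that must reflect a write the user just made (e.g. refreshing the appointment they just created) should be routed to the master instead.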

MySQL replication isn't perfect: it requires some effort to set up and continuous monitoring to maintain, and you must regularly check that the data on the slaves match the master. Some queries replicate incorrectly; you'll need to understand which ones and avoid them.

MarkR
Thanks for all of your input. Using your answer and the mysql docs I've decided to attempt read/write splitting. Now it's just a matter of convincing the ORM to do the same (DevExpress.XPO).
SnOrfus