My question is a lot like this one. However, I'm on MySQL, and I'm looking for the "lowest-tech" solution I can find.

The situation is that I have two databases that should contain the same data, but they are mostly updated while they are unable to contact each other. I suspect there is some sort of clustering or master/slave setup that would sync them just fine, but in my case that is major overkill, as this is just a scratch DB for my own use.

What is a good way to do this?

My current approach is to have a FEDERATED table on one of them and, every so often, stuff the data over the wire to the other with an INSERT ... SELECT. It gets a bit convoluted trying to deal with primary keys and whatnot. (INSERT IGNORE seems to not work correctly.)
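For concreteness, a minimal sketch of that setup, assuming a hypothetical `widgets` table; the columns and the connection string are invented for illustration, not taken from the question:

    -- On box A: a FEDERATED table that is really box B's copy of `widgets`.
    CREATE TABLE widgets_remote (
        id   INT NOT NULL,
        name VARCHAR(64),
        qty  INT,
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://user:pass@box-b:3306/scratch/widgets';

    -- Push rows over the wire. Since INSERT IGNORE reportedly misbehaves
    -- in this setup, filter out rows the other side already has instead.
    INSERT INTO widgets_remote (id, name, qty)
    SELECT s.id, s.name, s.qty
    FROM widgets AS s
    LEFT JOIN widgets_remote AS r ON r.id = s.id
    WHERE r.id IS NULL;

The join filter only covers brand-new rows; rows that changed on both sides still need something like the ON DUPLICATE KEY UPDATE approach in the second answer below.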

p.s. I can easily build a query that selects the rows to transfer.

+4  A: 

MySQL's built-in replication is very easy to set up and works well even when the DBs are disconnected most of the time. I'd say configuring it would be much simpler than any custom solution out there.

See http://www.howtoforge.com/mysql_database_replication for instructions; you should be up and running in 10-15 minutes and won't have to think about it again.
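In outline, the setup that tutorial walks through looks roughly like this; the host names, the repl account, and the binlog coordinates are placeholders, not values from the article:

    -- On the master, my.cnf needs roughly:
    --   [mysqld]
    --   server-id    = 1
    --   log-bin      = mysql-bin
    --   binlog-do-db = scratch
    -- then create a replication account and note the log position:
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
    FLUSH PRIVILEGES;
    SHOW MASTER STATUS;  -- gives the File and Position used below

    -- On the slave (my.cnf: server-id = 2), point it at the master:
    CHANGE MASTER TO
        MASTER_HOST     = 'master-host',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 98;
    START SLAVE;
    SHOW SLAVE STATUS\G

Once running, the slave simply catches up from the binlog whenever the two boxes can see each other again, which fits the mostly-disconnected situation in the question.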

The only downside I can see is that it is asynchronous, one-way replication: you must have one designated master that gets all the changes.

Mark
A well-done tutorial, readable, concise and whatnot. However, it's still too invasive for what I want.
BCS
Lazy programmer that I am, I would use the replication feature as well rather than having to issue a statement manually every now and then. Don't get scared off by the initial setup steps; it really pays off in the end!
Cassy
The end for this (the drop-dead deadline for the project) is some time in December, and I don't expect to do replication on this DB ever again.
BCS
A: 

My current solution is (sketched in SQL after the list):

  • set up a federated table on the source box that grabs the table on the target box
  • set up a view on the source box that selects the rows to be updated (as a join of the federated table)
  • set up another federated table on the target box that grabs the view on the source box
  • issue an INSERT ... SELECT ... ON DUPLICATE KEY UPDATE on the target box to run the pull.
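
A hypothetical SQL rendering of those four steps; every table name and connection string below is a placeholder:

    -- 1. SOURCE box: FEDERATED table pointing at the target's `widgets`.
    CREATE TABLE target_widgets (
        id INT NOT NULL, name VARCHAR(64), qty INT,
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://user:pass@target-host:3306/scratch/widgets';

    -- 2. SOURCE box: view of the rows the target lacks or has stale.
    CREATE VIEW widgets_to_push AS
    SELECT s.id, s.name, s.qty
    FROM widgets AS s
    LEFT JOIN target_widgets AS t ON t.id = s.id
    WHERE t.id IS NULL OR t.name <> s.name OR t.qty <> s.qty;

    -- 3. TARGET box: FEDERATED table pointing at that view.
    CREATE TABLE source_widgets_to_push (
        id INT NOT NULL, name VARCHAR(64), qty INT,
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://user:pass@source-host:3306/scratch/widgets_to_push';

    -- 4. TARGET box: one statement runs the whole pull. The INSERT target
    --    is a plain local table here, sidestepping the duplicate-key
    --    trouble the question hit when inserting through a FEDERATED table.
    INSERT INTO widgets (id, name, qty)
    SELECT id, name, qty FROM source_widgets_to_push
    ON DUPLICATE KEY UPDATE name = VALUES(name), qty = VALUES(qty);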

I guess I could just grab the source table and do it all in one shot, but based on the query logs I've been seeing, I'd end up with about 20K queries being run, or about 100-300 MB of data transfer, depending on how things happen. The above setup should result in about 4 queries and little more data transferred than actually needs to be.

BCS