We have two server clusters: the first is made up of typical web applications backed by SQL databases; the second consists of highly optimized multiplayer game servers that keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types, for example, reporting back and storing the results of a game (which should ultimately end up in the database).

We're considering several approaches for inter-server communication:

  • Simply share the MySQL databases between clusters (introducing SQL to the game servers)
  • Share data in a distributed key-value store such as Memcache, Redis, etc.
  • Use an RPC technology such as Google Protocol Buffers or Apache Thrift
  • Use RESTful web services (the game server would POST results back to the web servers, for example)

At the moment, we're leaning towards web services or simply sharing the database. Sharing the database seems easy, but we're concerned that it adds extra memory usage and a new dependency to the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but they add complexity, overhead, and many more ways for communication to fail.
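For concreteness, here is a rough sketch of what the "POST back to the web servers" option might look like from the game server's side. The endpoint URL and payload fields are made up for illustration; we'd obviously adapt them to our actual API.

    # Hypothetical sketch: a game server reporting a finished match to the web
    # tier over HTTP/JSON. Endpoint URL and payload fields are illustrative only.
    import requests

    def report_game_result(match_id, scores, web_api_base="http://web.internal/api"):
        """POST the final result of a game to the web application cluster.

        Returns True on success; the caller should queue and retry on failure,
        since the web tier may be temporarily unreachable.
        """
        payload = {
            "match_id": match_id,   # assumed identifier for the game session
            "scores": scores,       # e.g. {"player_1": 17, "player_2": 21}
        }
        try:
            resp = requests.post(web_api_base + "/game-results", json=payload, timeout=5)
            return resp.status_code in (200, 201)
        except requests.RequestException:
            # Network failure: the caller decides whether to buffer and retry.
            return False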

Are there any other good reasons not to use one or the other approach? Which would be easier to scale?

+1  A: 

Sharing the DB brings the obvious drawback of not having one unit in control of the data going into the DB. This can be a big hassle, which is why I would recommend building an application layer.

If this application layer is what your web applications form, then I see nothing wrong with implementing client-server communication between the game servers and the web apps. Let the game servers push data to the application layer and have them subscribe to updates. This is a good fit for a message-queueing system, but you could get away with building your own REST-based system, for instance, if that fits better with your current architecture.
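As a rough illustration of that push model (a sketch, not a prescription), something along these lines would do, here using a Redis list as a simple stand-in for the message queue; the queue name and payload shape are invented:

    # Game servers enqueue results; the application layer blocks on the queue
    # and hands each result to whatever persists it (e.g. a MySQL writer).
    import json
    import redis

    QUEUE = "game_results"

    def game_server_push(result, r=None):
        """Called by a game server when a match ends: enqueue the result."""
        r = r or redis.Redis()
        r.rpush(QUEUE, json.dumps(result))

    def application_layer_consume(handle, r=None):
        """Run inside the application layer: block until a result arrives,
        then pass it to `handle`."""
        r = r or redis.Redis()
        while True:
            _key, raw = r.blpop(QUEUE)   # blocks until an item is available
            handle(json.loads(raw))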

If the web apps do not form the application layer, I would suggest introducing such a layer by writing a small app which hides the specifics of the storage. Each side gets a handle to the app's interface and writes its data to it.
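To make that concrete, a toy sketch of such an interface (the class and method names are invented):

    # Both clusters program against this interface; only the implementation
    # knows where the data actually lives.
    from abc import ABC, abstractmethod

    class GameDataStore(ABC):
        """Handle given to both the web apps and the game servers."""

        @abstractmethod
        def save_result(self, match_id, scores):
            """Persist a finished game; the implementation decides where."""

        @abstractmethod
        def get_result(self, match_id):
            """Fetch a stored result, regardless of the backing store."""

    class MySQLGameDataStore(GameDataStore):
        """Implementation backed by the web cluster's SQL databases."""

        def save_result(self, match_id, scores):
            ...  # INSERT into MySQL

        def get_result(self, match_id):
            ...  # SELECT from MySQL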

In order to share the data between the two systems, the application layer could then use a distributed DB, like Mnesia, or implement a multi-level cache system with replication. The simplest version of this would be time-triggered replication into, for instance, the MySQL databases you mention. Other options are message queues, replicated memory (Terracotta) and/or replicated caches (memcached), although these do not provide persistent storage.
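A minimal sketch of the time-triggered replication variant, assuming a hypothetical game_results table and made-up connection details (using the mysql-connector-python client):

    # Game servers append results to an in-memory buffer; a background loop
    # periodically flushes the buffer into MySQL in one batch.
    import threading
    import time
    import mysql.connector

    _buffer = []
    _lock = threading.Lock()

    def record_result(match_id, winner, score):
        """Cheap in-memory append on the game-server side."""
        with _lock:
            _buffer.append((match_id, winner, score))

    def flush_loop(interval_seconds=30):
        """Every `interval_seconds`, batch-insert buffered results into MySQL."""
        conn = mysql.connector.connect(
            host="db.internal", user="games", password="secret", database="games"
        )
        while True:
            time.sleep(interval_seconds)
            with _lock:
                pending = _buffer[:]
                del _buffer[:]
            if pending:
                cur = conn.cursor()
                cur.executemany(
                    "INSERT INTO game_results (match_id, winner, score) "
                    "VALUES (%s, %s, %s)",
                    pending,
                )
                conn.commit()
                cur.close()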

disown
A: 

I'd also suggest looking at Redis as a data store and nodered for distributed pub-sub.

Although Redis is an in-memory K/V store, the latest version has VM support where keys are kept in memory, but values may be swapped out as memory pressure hits a configurable threshold. It also has simple master-slave replication and publish-subscribe built in.
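For example, the built-in publish-subscribe can be driven from Python with the redis-py client roughly like this; the channel name and message shape are just illustrative:

    # Publisher (e.g. a game server announcing a finished match) and
    # subscriber (e.g. a web app reacting to game events as they happen).
    # Note that pub/sub is fire-and-forget: subscribers only see messages
    # published while they are connected.
    import json
    import redis

    def publish_game_finished(match_id, scores):
        r = redis.Redis()
        r.publish("game-events", json.dumps({"match_id": match_id, "scores": scores}))

    def listen_for_game_events(callback):
        r = redis.Redis()
        pubsub = r.pubsub()
        pubsub.subscribe("game-events")
        for message in pubsub.listen():
            if message["type"] == "message":
                callback(json.loads(message["data"]))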

NodeRed is built on node.js, which is a scalable and ridiculously fast server-side JS engine.

JBland