I'm going through a bit of a re-think of large-scale multiplayer games in the age of Facebook applications and cloud computing.

Suppose I were to build something on top of existing open protocols, and I want to serve 1,000,000 simultaneous players, just to scope the problem.

Suppose each player has an incoming message queue (for chat and whatnot), and on average one more incoming message queue (guilds, zones, instances, auction, ...), so we have 2,000,000 queues. A player will listen to 1-10 queues at a time. Each queue will see on average maybe 1 message per second, but certain queues will have a much higher rate and a much larger number of listeners (say, an "entity location" queue for a level instance). Let's assume no more than 100 milliseconds of system queuing latency, which is OK for mildly action-oriented games (but not for games like Quake or Unreal Tournament).

From other systems, I know that serving 10,000 users on a single 1U or blade box is a reasonable expectation (assuming there's nothing else expensive going on, like physics simulation or whatnot).

So, with a crossbar cluster system, where clients connect to connection gateways, which in turn connect to message queue servers, we'd get 10,000 users per gateway with 100 gateway machines, and 20,000 message queues per queue server with 100 queue machines. Again, just for general scoping. The number of connections on each MQ machine would be tiny: about 100, to talk to each of the gateways. The number of connections on the gateways would be a lot higher: 10,100 for the clients plus connections to all the queue servers. (On top of this, add some connections for game world simulation servers or whatnot, but I'm trying to keep that separate for now.)
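As a sanity check, here's that scoping arithmetic spelled out (a quick sketch in Python; all the constants are just the assumptions above):

    # Scoping arithmetic for the crossbar cluster described above.
    players = 1_000_000
    total_queues = players * 2                  # ~1 personal + ~1 shared each

    users_per_gateway = 10_000
    gateways = players // users_per_gateway     # 100 gateway boxes

    queues_per_mq_server = 20_000
    mq_servers = total_queues // queues_per_mq_server   # 100 queue boxes

    # Fan-out per box:
    mq_connections = gateways                   # ~100 per queue server
    gateway_connections = users_per_gateway + mq_servers   # ~10,100 per gateway

    # Aggregate message rate at ~1 msg/queue/sec:
    total_msgs_per_sec = total_queues * 1       # ~2,000,000 messages/sec
    print(gateways, mq_servers, gateway_connections, total_msgs_per_sec)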

If I didn't want to build this from scratch, I'd have to use some messaging and/or queuing infrastructure that exists. The two open protocols I can find are AMQP and XMPP. The intended use of XMPP is a little more like what this game system would need, but the overhead is quite noticeable (XML, plus the verbose presence data, plus various other channels that have to be built on top). The actual data model of AMQP is closer to what I describe above, but all the users seem to be large, enterprise-type corporations, and the workloads seem to be workflow related, not real-time game update related.

Does anyone have any first-hand experience with these technologies, or implementations thereof, that you can share?

+2  A: 

Jon, this sounds like an ideal use case for AMQP and RabbitMQ.

I am not sure why you say that AMQP users are all large enterprise-type corporations. More than half of our customers are in the 'web' space, ranging from huge to tiny companies. Lots of games, betting systems, chat systems, Twitter-like systems, and cloud computing infrastructures have been built on RabbitMQ. There are even mobile phone applications. Workflows are just one of many use cases.

We try to keep track of what is going on here:

http://www.rabbitmq.com/how.html (make sure you click through to the lists of use cases on del.icio.us too!)

Please do take a look. We are here to help. Feel free to email us at [email protected] or hit me on twitter (@monadic).

Cheers

alexis
+1  A: 

My experience was with a non-open alternative, BizTalk. The most painful lesson we learnt is that these complex systems are NOT fast. And as you figured from the hardware requirements, that translates directly into significant costs.

For that reason, don't even go near XML for the core interfaces. Your server cluster will be parsing 2 million messages per second; that could easily be 2-20 GB/sec of XML! That said, the traffic is skewed: most messages go to a handful of queues, while most queues are in fact low-traffic.
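To make the overhead concrete, here's a rough size comparison (a sketch; both message layouts are made-up examples, not proposed wire formats):

    import struct

    # The same entity-position update, once as XML, once packed as binary.
    xml_msg = (b'<update><entity id="12345"/>'
               b'<pos x="101.5" y="33.2" z="7.0"/></update>')

    # Hypothetical binary layout: entity id (uint32) + three float32 coords.
    bin_msg = struct.pack('<Ifff', 12345, 101.5, 33.2, 7.0)

    print(len(xml_msg), len(bin_msg))   # ~70 bytes vs 16 bytes, before framing

At 2 million messages per second, that difference alone is hundreds of megabytes per second of extra parsing work.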

Therefore, design your architecture so that it's easy to start with COTS queue servers and then move each queue (type) to a custom queue server when a bottleneck is identified.

Also, for similar reasons, don't assume that a message queue architecture is the best fit for all the communication needs your application has. Take your "entity location in an instance" example. This is a classic case where you don't want guaranteed message delivery. The reason you need to share this information is that it changes all the time, so if a message is lost, you don't want to spend time recovering it; that would only get you the old location of the affected entity. Instead, you'd just send the current location again. Technology-wise, this means you want UDP, not TCP, plus a custom loss-recovery mechanism.
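As a sketch of that idea (the packet layout and names are made up for illustration): the sender stamps each update with a sequence number, and the receiver simply drops anything older than what it has already seen, so a lost packet is never recovered, only superseded:

    import socket, struct

    # Hypothetical packet layout: sequence number (uint32), entity id
    # (uint32), then x/y/z as float32. (Wraparound handling omitted.)
    FMT = '<IIfff'

    def send_update(sock, addr, seq, entity_id, x, y, z):
        # Fire and forget: no acks, no resends.
        sock.sendto(struct.pack(FMT, seq, entity_id, x, y, z), addr)

    def receive_updates(sock):
        latest = {}                      # entity_id -> highest seq seen
        while True:
            data, _ = sock.recvfrom(struct.calcsize(FMT))
            seq, eid, x, y, z = struct.unpack(FMT, data)
            if seq <= latest.get(eid, -1):
                continue                 # stale or duplicate: drop it
            latest[eid] = seq
            yield eid, (x, y, z)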

MSalters
Yes: the problem with TCP comes when you drop a packet. The stall before recovery can be significant -- and with TCP, the kernel will withhold *newer* information just because *older* information hasn't arrived yet. For gaming (such as position updates), that's not desirable. Note that the message queueing clients are all users distributed around the world; they are not within the cluster itself, so networking problems are a fact of life. (In fact, even within well-connected server rooms, you'll see some amount of packet loss, seemingly no matter how big your switches and buffers are.)
Jon Watte
+2  A: 

@MSalters

Re 'message queue':

RabbitMQ's default operation is exactly what you describe: transient pub/sub, but over TCP instead of UDP.

If you want guaranteed eventual delivery and other persistence and recovery features, then you CAN have that too - it's an option. That's the whole point of RabbitMQ and AMQP -- you can have lots of behaviours with just one message delivery system.

The model you describe is the DEFAULT behaviour: transient, "fire and forget", routing messages to wherever the recipients are. People use RabbitMQ to do multicast discovery on EC2 for just that reason. So you can get UDP-type behaviours over unicast TCP pub/sub. Neat, huh?
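For illustration, transient pub/sub looks roughly like this with the Python pika client (a sketch; the exchange name is made up, and nothing here is durable or persistent, so messages with no listeners are simply dropped):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()

    # A transient fanout exchange: every bound queue gets a copy.
    ch.exchange_declare(exchange='zone.42', exchange_type='fanout')

    # Each subscriber gets a private, auto-deleted queue; if nobody is
    # listening, published messages just vanish -- "fire and forget".
    q = ch.queue_declare(queue='', exclusive=True).method.queue
    ch.queue_bind(exchange='zone.42', queue=q)

    ch.basic_publish(exchange='zone.42', routing_key='', body=b'entity 17 moved')
    ch.basic_consume(queue=q,
                     on_message_callback=lambda c, m, p, body: print(body),
                     auto_ack=True)
    ch.start_consuming()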

Re UDP:

I am not sure if UDP would be useful here. If you turn off Nagling (Nagle's algorithm), then RabbitMQ's single-message round-trip latency (client-broker-client) has been measured at 250-300 microseconds. See here for a comparison with Windows latency (which was a bit higher): http://old.nabble.com/High%28er%29-latency-with-1.5.1--p21663105.html
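For reference, "turning off Nagling" just means setting TCP_NODELAY on the socket, so small messages go out immediately instead of being coalesced; a minimal sketch:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Disable Nagle's algorithm: send small writes immediately rather than
    # coalescing them -- trades bandwidth efficiency for latency.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # sock.connect((broker_host, broker_port)) would follow as usual.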

I cannot think of many multiplayer games that need round-trip latency lower than 300 microseconds, and you can get below that with TCP. TCP windowing is more expensive than raw UDP, but if you use UDP to go faster and then add a custom loss-recovery or seqno/ack/resend manager, that may slow you down again. It all depends on your use case. If you really, really need to use UDP and lazy acks and so on, then you could strip out RabbitMQ's TCP layer and probably pull that off.

I hope this helps clarify why I recommended RabbitMQ for Jon's use case.

Cheers

alexis
Thanks for the suggestions. An alternative to Rabbit is Qpid, which claims 6M messages per second on a single 8-core server box (!) when run on the Red Hat low-latency kernel. However, I doubt that box also had 10,000 users connected at the same time. If you have a good link comparing Rabbit vs Qpid, I'd love to see it!
Jon Watte
Jon, please can you point me at the 6M reference? I have a feeling it refers to a case where RabbitMQ and Qpid were both tested with the (financial market data) OPRA feed some time ago. This is a good case, but as I recall we both used batching and compression to get a higher rate. Note that in the case of OPRA, the use of both batching and compression is standard practice. Re comparing the two brokers in like-for-like cases recently, nothing immediately springs to mind, but Googling may reveal more. Cheers, alexis
alexis
Yes, that's probably the test case. The 6M figure was on the Red Hat site for their "low latency Linux" based Qpid implementation. And that test case has almost nothing to do with the case I'm interested in, which has the problem of having 1,000,000 connected users, each of which only gets a few messages a second...
Jon Watte
+1  A: 

FWIW, for cases where intermediate results are not important (like positioning info) Qpid has a "last-value queue" that can deliver only the most recent value to a subscriber.
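For illustration with the qpid.messaging Python client, a last-value queue is declared via a broker-side queue argument (a sketch: the argument key shown is the one used by recent C++ brokers, and it has varied across Qpid releases, so check your broker's docs):

    from qpid.messaging import Connection, Message

    conn = Connection('localhost:5672')
    conn.open()
    session = conn.session()

    # Declare a last-value queue keyed on a message property; the broker
    # keeps only the newest message per key, so subscribers see the latest
    # position rather than a backlog of stale ones.
    addr = ('entity-positions; {create: always, node: {x-declare: '
            '{arguments: {"qpid.last_value_queue_key": "entity-id"}}}}')
    sender = session.sender(addr)

    msg = Message(content='x=101.5 y=33.2 z=7.0')
    msg.properties['entity-id'] = '12345'
    sender.send(msg)
    conn.close()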

Steve Huston
That's great to know! I've actually checked out and built Qpid, and am trying it out now. What I don't particularly like is that the default configuration is capped at 300 connections. Will it do 20,000 connections per box?
Jon Watte
I answered in more depth on the [email protected] list, but to close the gap here, yes, doing 20,000 connections per box should be no problem, assuming you've got sufficient hardware horsepower.
Steve Huston
+1  A: 

I am building such a system now, actually.

I have done a fair amount of evaluation of several MQs, including RabbitMQ, Qpid, and ZeroMQ. The latency and throughput of any of those are more than adequate for this type of application. What is not good, however, is queue creation time in the midst of half a million queues or more. Qpid in particular degrades quite severely after a few thousand queues. To circumvent that problem, you will typically have to build your own routing mechanisms: a smaller number of total queues, with consumers on those queues receiving messages they have no interest in and filtering them out.
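One common shape for such a routing layer (a sketch of the idea, not of any particular broker's API; all names are made up): hash many logical channels onto a small, fixed pool of physical queues created once at startup, and have each consumer filter out the channels it doesn't care about:

    import zlib

    NUM_PHYSICAL_QUEUES = 1024    # fixed pool, created once at startup

    def physical_queue(logical_channel: str) -> int:
        # Many logical channels (guild chat, zone 17, auction...) share one
        # physical queue, so no queues ever get created at runtime.
        return zlib.crc32(logical_channel.encode()) % NUM_PHYSICAL_QUEUES

    class FilteringConsumer:
        def __init__(self, interests):
            self.interests = set(interests)    # logical channels we want

        def on_message(self, logical_channel, payload):
            # The cost of fewer queues: uninteresting messages arrive
            # anyway and must be discarded here.
            if logical_channel in self.interests:
                self.handle(payload)

        def handle(self, payload):
            print(payload)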

My current system will probably use ZeroMQ, but in a fairly limited way, inside the cluster. Connections from clients are handled by a custom simulation daemon that I built using libev; it is entirely single-threaded (and is showing very good scaling -- it should be able to handle 50,000 connections on one box without any problems; our sim tick rate is quite low, though, and there is no physics).
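Inside the cluster, ZeroMQ pub/sub with topic-prefix filtering looks roughly like this with pyzmq (a sketch; the endpoint and topic names are made up, and in practice publisher and subscriber would live in separate processes):

    import time
    import zmq

    ctx = zmq.Context()

    # Publisher side (e.g. the simulation daemon) pushes per-zone updates.
    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://*:5556')

    # Subscriber side (e.g. a gateway) asks only for the zones it wants;
    # messages are filtered by topic prefix.
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://localhost:5556')
    sub.setsockopt(zmq.SUBSCRIBE, b'zone.42')

    time.sleep(0.1)   # give the subscription time to propagate (slow joiner)
    pub.send_multipart([b'zone.42', b'entity 17 moved'])
    topic, payload = sub.recv_multipart()
    print(topic, payload)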

XML (and therefore XMPP) is very much not suited to this, as you'll peg the CPU processing XML long before you become bound on I/O, which isn't what you want. We're using Google Protocol Buffers at the moment, and those seem well suited to our particular needs. We're also using TCP for the client connections. I have had experience using both UDP and TCP for this in the past, and as pointed out by others, UDP does have some advantages, but it's slightly more difficult to work with.

Hopefully when we're a little closer to launch, I'll be able to share more details.

Tim McClarren