I'm attempting to create a C++ plugin for a realtime 3D game. Although I believe I have a firm grasp of the theory of UDP (how it works, what its strengths and weaknesses are), my primary concerns are performance, scalability and realistic performance figures. I am aware that I probably know only a drop in the ocean's worth when it comes to UDP, and even TCP.

The question:

Given a certain scenario, how many players would a typical dedicated server (or set of servers) be able to cope with at any one time?

Now for the scenario...

Let's imagine we have an MMORPG where all players can be anywhere in the "game world". Everybody sends data to and receives data from the same server / server hub, since everybody must be able to see and interact with everybody else when their paths eventually cross. It's a real-time first-person game, so player positions must be kept up to date very timeously.

Let's say we have 1000 (or even 10000) players online...

Three primary things need to happen here:

  1. Each player streams their data to the game server via UDP at, say, 14 sends per second. In a nutshell, this data includes who, where and what each player is. The data being sent has been normalized and optimized for size and speed to keep bandwidth usage minimal.

  2. The server receives, for example, up to 1000 of these packets (a realistic figure, used here for demonstration) 14 times per second, thus processing 14,000 packets per second. This processing phase typically involves updating the central in-memory data structure, where a player's old x,y,z position is overwritten with the new position and a timestamp. This data structure on the server contains ALL data for ALL players in the ENTIRE game world. (A minimal sketch of steps 1 and 2 follows this list.)

  3. The server (possibly a separate thread, maybe even a separate machine) now needs to dispatch updates to all the other players so they can update their screens to show other players on the map. This also happens 14 times per second (where 14 might typically be a dynamic figure, changing based on CPU load: a busier CPU means a lower update rate, and vice versa).
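A minimal sketch of what steps 1 and 2 might look like, assuming a packed 22-byte state packet and an in-memory hash map keyed by player id (the struct layout and all names are illustrative assumptions, not a recommendation):

    // Hypothetical wire format for one state update (step 1), packed to
    // keep the payload small; all names and field sizes are illustrative.
    #include <cstdint>
    #include <unordered_map>

    #pragma pack(push, 1)
    struct StateUpdate {
        uint32_t playerId;   // who
        uint16_t stateFlags; // what (running, shooting, ...)
        float    x, y, z;    // where
        uint32_t clientTime; // client timestamp, ms
    };                       // 22 bytes + UDP/IP headers
    #pragma pack(pop)

    // Step 2: the central structure holding ALL players, keyed by id.
    struct ServerRecord {
        StateUpdate last;
        uint64_t    serverTime; // server-side receive timestamp
    };
    std::unordered_map<uint32_t, ServerRecord> world;

    // Called ~14,000 times per second for 1000 players at 14 Hz.
    void onPacket(const StateUpdate& u, uint64_t now) {
        ServerRecord& r = world[u.playerId];
        if (u.clientTime >= r.last.clientTime) { // drop out-of-order packets
            r.last = u;
            r.serverTime = now;
        }
    }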

The important thing is this: for Player X, only the data of other players within visual range of his position is dispatched to him. So if Player Y is 2 miles away, his data needs to be sent to X, but if Player Z is on the other side of the planet, his data is not dispatched to X, to save bandwidth. This of course involves a bit more processing, as the data has to be iterated and filtered using the most effective indexing solution possible.
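One common indexing approach for that visibility filter is a uniform grid over the world, so each dispatch only scans the cells around the viewer instead of all N players. A rough sketch, where the cell size and the 2D keying are assumed:

    // Hypothetical uniform-grid index: bucket players by cell, then gather
    // candidates from the 3x3 block of cells around the viewer.
    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    constexpr float CELL = 250.f; // cell edge >= visual range (assumed)

    inline uint64_t cellKey(float x, float z) {
        int32_t cx = static_cast<int32_t>(std::floor(x / CELL));
        int32_t cz = static_cast<int32_t>(std::floor(z / CELL));
        return (static_cast<uint64_t>(static_cast<uint32_t>(cx)) << 32)
             | static_cast<uint32_t>(cz);
    }

    // Rebuilt (or incrementally updated) each tick from the central structure.
    std::unordered_map<uint64_t, std::vector<uint32_t>> grid;

    // Players potentially visible to a viewer at (x, z).
    std::vector<uint32_t> visibleTo(float x, float z) {
        std::vector<uint32_t> out;
        for (int dx = -1; dx <= 1; ++dx)
            for (int dz = -1; dz <= 1; ++dz) {
                auto it = grid.find(cellKey(x + dx * CELL, z + dz * CELL));
                if (it != grid.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out; // still needs an exact per-candidate distance check
    }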

Now my concern is this: sending a data packet from a client machine, getting it into the server's RAM, doing the tiny bit of processing to update the data, and selectively broadcasting the info to other players all takes time. This means there is a certain threshold of players that a server will be able to handle, which, yes, depends on the effectiveness of my implementation, the speed and abilities of the hardware being used, and of course other external factors like internet speed, traffic and the number of solar flares hitting the earth per second... just kidding.

I'm trying to find out from others who have gone through this process what the pitfalls are and what typical performance I can expect when creating a multiplayer plugin.

I could easily say: "I want to cater for 10000 people playing on the same server at the same time", and you might say: "100 is a more realistic and probable figure per server."

So I am aware that I might have to come up with a multi-server / cloud computing hub for dealing with my thousands of requests and dispatches, distributing the processing load over multiple machines. I might have a few machines dealing only with receiving data, a huge central box acting as an in-memory database shared somehow by all the receiving and dispatching machines, and then of course a series of dispatching machines.

Obviously, there are technical limitations, and I don't really know what to expect or what they are. Throwing extra CPUs and server boxes at the problem will not necessarily solve it either, as more intercommunication between machines will also slow the process down. I suppose that beyond some threshold, adding CPUs yields diminishing returns and may even reduce overall throughput.

Could and should I consider P2P (Peer To Peer) for multiplayer?

Am I being realistic in saying that I will be able to cater for 2500 players at any one time?

Would it be possible to scale up to 10000 players in a few years' time?

I know this question is dreadfully long, so please do accept my sincere apologies.

+2  A: 

The scaling question is entirely legitimate. The focus on UDP, however, is misplaced. It is not going to be the main problem for you.

The reason is that player-player interactions are fundamentally an O(N*N) problem. Server bandwidth, on the other hand, is an O(N) problem. Considering that modern webservers can saturate 1 Gbit Ethernet with HTTP over TCP, the lower overhead of UDP means you're probably going to be able to saturate 1 Gbit Ethernet with UDP as well, as long as your computations hold up.

MSalters
If the server does not handle any player-to-player interaction, however, and merely updates and retransmits, it should be a basic O(n) process. Each client should be handling a single O(n) slice of the overall O(n^2) problem.
S.Lott
Yup. But the problem is then that you're looking at an O(n) problem with an unspecified constant, and the question is precisely about that constant (!)
MSalters
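To put rough numbers on that constant, here is a back-of-envelope calculation using the question's figures (1000 players, 14 updates per second) and an assumed 32-byte payload; the 50-player visibility cap is likewise an assumption:

    #include <cstdio>

    int main() {
        constexpr double players = 1000, rateHz = 14, payloadBytes = 32;
        // Naive broadcast: every player hears every other player.
        constexpr double naiveBits  = players * (players - 1) * rateHz * payloadBytes * 8;
        // Interest-managed: each player hears only ~50 nearby players.
        constexpr double cappedBits = players * 50 * rateHz * payloadBytes * 8;
        std::printf("naive:  %.2f Gbit/s\n", naiveBits  / 1e9); // ~3.58, over 1 Gbit
        std::printf("capped: %.2f Gbit/s\n", cappedBits / 1e9); // ~0.18, comfortable
    }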
+1  A: 
  • Could and should I consider P2P (Peer To Peer) for multiplayer? I don't think that p2p technology is able to handle the real-time aspects of game networking. Also, in the usual p2p networks you are not connected to thousands of peers at once; you're usually connected to a few upstream nodes, so it's more a graph than a very flat tree.

  • Am I being realistic saying that I will be able to cater for 2500 players at any one time? Not on a single server. However, by distributing your users onto multiple servers you can already partition them by geographic region (e.g. by continent or country) within the game world, if it's a very large world. For low latency you would want to keep the servers near the real locations of the users anyway: you don't play on European servers if you live in the US, and vice versa.

  • Would it be possible to scale up to 10000 players in a few years' time? There are many ways to optimize how the data is encoded and transmitted: sending only deltas of the game-world state, client-side prediction of player movement, broadcasting at the network level, cloud computing on the server side, and so on, and there will be more in the next few years. Especially as the gaming industry reaches out to cloud-based computing platforms like OnLive, it becomes apparent that we need more efficient algorithms and infrastructure to cope with those volumes.
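A minimal sketch of the delta-encoding idea (the struct fields and bitmask layout are assumptions for illustration):

    // Hypothetical delta encoding: send only the fields that changed since
    // the last acknowledged state, flagged by a leading bitmask byte.
    #include <cstdint>
    #include <vector>

    struct PlayerState { float x, y, z, yaw; };

    enum FieldBit : uint8_t { POS_X = 1, POS_Y = 2, POS_Z = 4, YAW = 8 };

    std::vector<uint8_t> encodeDelta(const PlayerState& prev,
                                     const PlayerState& cur) {
        std::vector<uint8_t> out(1, 0); // byte 0 = change mask
        auto put = [&](float v, uint8_t bit) {
            out[0] |= bit;
            const auto* p = reinterpret_cast<const uint8_t*>(&v);
            out.insert(out.end(), p, p + sizeof v);
        };
        if (cur.x   != prev.x)   put(cur.x,   POS_X);
        if (cur.y   != prev.y)   put(cur.y,   POS_Y);
        if (cur.z   != prev.z)   put(cur.z,   POS_Z);
        if (cur.yaw != prev.yaw) put(cur.yaw, YAW);
        return out; // a single byte if nothing moved
    }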

mhaller
+1  A: 

The problem with P2P is ultimately the end users' connection. ISPs typically don't give you a lot of upload bandwidth, in a lot of cases less than 1/10 of your download speed. A lot of users are behind NAT, so you are going to need to set up some form of proxy for clients to initiate connections. You will need to handle user disconnects and packet loss (for the inevitable node that is on crappy wireless and drops half its packets). And you will need a good way to group clients by ISP/location so they don't have 200ms+ pings between each other.

IMO it sounds like a disaster waiting to happen. You are probably better off going with a well-known networking library (and a traditional client/server architecture) than trying to invent a square wheel. Only transmit what needs to be updated (notice how most MMOs contain large static worlds with few dynamic objects).

envalid
+1  A: 

The scaling issue is one of the most difficult challenges for MMOs, and one that has only been partially solved. There are many examples of how to track and update user info.

One point I'll mention, though, is that historically games are a social thing, and as such there is a pattern where the majority of people tend to cluster together in a central or single area. So you really have to design for this worst case.

Some games are really going for a huge epic feeling, and having all the users allowed to group and bunch together is a core design requirement. For this type of game, plan on all the users being in the exact same spot. For other games, you should be able to break them into smaller groups and divide and conquer.

tooleb
+1  A: 

Could and should I consider P2P (Peer To Peer) for multiplayer? - no, that opens you up to cheating and all sorts of reliability issues at best. It's a can of worms best left unopened. It might help you out with content distribution, however, if that's a concern you have.

Am I being realistic saying that I will be able to cater for 2500 players at any one time? - definitely, but the emphasis is on how you implement it. In the mid-90s, text games like Realms of Despair or Medievia were handling hundreds of players online simultaneously. They didn't send data out to everybody 14 times a second, but they did update those players several times a second. Computing power has increased by a factor of about 250 since then. Food for thought.

Would it be possible to scale up to 10000 players in a few years' time? - it's possible to do it now, if you relax your bandwidth requirements so that you're not always sending 14 updates a second, or relax the requirement that everybody is handled by one server. The 'C10K problem' was addressed over 10 years ago. Obviously an FTP server is not a real-time game server, but on the other hand its throughput requirements are higher. If you can tolerate a little extra latency in return for higher bandwidth then you're onto a winner.
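One simple way to relax the always-14-updates-per-second requirement is to scale the send rate with distance; a sketch where the distance bands and rates are assumed figures:

    // Hypothetical distance-based rate scaling: nearby players get the full
    // 14 Hz, distant ones progressively fewer updates.
    int updatesPerSecond(float distance) {
        if (distance <  50.f) return 14; // close range: full rate
        if (distance < 200.f) return  7;
        if (distance < 800.f) return  2;
        return 0;                        // beyond interest range: none
    }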

Kylotan