There isn't really anything off the shelf for low-latency, high-throughput in-memory applications such as real-time online games, at least not as a piece of general-purpose middleware.
Project Darkstar made an admirable attempt at this on the usability and complexity side, but found (unsurprisingly) that it didn't scale.
Ultimately it's a difficult (though not intractable) problem with no solution that comes close to being universally applicable. In particular you are likely to face a tradeoff between acting on stale game data on the one hand and constantly exchanging shared data on the other. Lack of correctness vs. exponential growth in complexity... pick your poison.
It's worth noting - especially if your application domain isn't real-time games - that you often don't mind working with stale data, as long as it becomes correct soon enough. In such cases simple caching systems like memcache are great. Similarly, if you need more correctness but don't have to worry about throughput so much, something like Hazelcast (mentioned in another answer) might be great, but for most online games that are big enough to require load balancing, "thousands of operations/sec" is just not good enough.
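To make that concrete, here's a minimal sketch of the Hazelcast-style shared map approach. It assumes Hazelcast 4+/5 package names, and the map name and PlayerState type are invented for the example; it just shows why it's convenient and why every read/write paying cluster cost matters for a game loop.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.io.Serializable;

public class SharedStateExample {
    // Hypothetical value type for the example; any serializable object works.
    static class PlayerState implements Serializable {
        int x, y, health;
        PlayerState(int x, int y, int health) { this.x = x; this.y = y; this.health = health; }
    }

    public static void main(String[] args) {
        // Each process that runs this joins the same cluster (default discovery).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across cluster members,
        // so every node sees the same data, but each get/put may cross the network.
        IMap<String, PlayerState> players = hz.getMap("players");

        players.put("player-42", new PlayerState(10, 20, 100));
        PlayerState p = players.get("player-42");
        System.out.println("health = " + p.health);

        hz.shutdown();
    }
}
```

Convenient, but every operation in the hot path of a game simulation can become a remote call, which is where the throughput ceiling comes from.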
Some MMO technology attempts to distribute the application by partitioning it geographically, which means there isn't really much shared state at all, but it only works if that scheme makes sense in the game world and its fiction.
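A rough sketch of that geographical partitioning, with all names invented for illustration: each zone of the world is owned by exactly one server, and a front end simply routes a player to the node that owns their current zone, so that node never has to share hot state with the others.

```java
import java.util.Map;

// Hypothetical zone-to-node routing for geographic sharding; not taken from
// any particular engine.
public class ZoneRouter {
    // Static ownership table: each world zone lives entirely on one server.
    private final Map<String, String> zoneToNode = Map.of(
            "northern-plains", "game-node-1:7777",
            "capital-city",    "game-node-2:7777",
            "southern-isles",  "game-node-3:7777");

    /** Returns the address of the node that owns the given zone. */
    public String routeFor(String zoneId) {
        String node = zoneToNode.get(zoneId);
        if (node == null) {
            throw new IllegalArgumentException("Unknown zone: " + zoneId);
        }
        return node;
    }

    public static void main(String[] args) {
        ZoneRouter router = new ZoneRouter();
        // A player standing in the capital is handled entirely by game-node-2.
        System.out.println(router.routeFor("capital-city"));
    }
}
```

The hard part isn't the routing, it's designing the game so that cross-zone interaction is rare enough for this to hold.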
Another approach is to partition it by service, and implement most services with your favourite off-the-shelf RPC approach. This lets you scale quite easily if your services are independent, but any dependencies between services put you right back at square one.
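A sketch of the partition-by-service shape, again with invented names and methods: each service hides behind a plain interface, and the implementations would be bound to whatever RPC framework you prefer (gRPC, Thrift, plain HTTP, ...). The trouble starts the moment one service has to call another to finish its work.

```java
// Hypothetical service interfaces for a partition-by-service layout; the names
// are made up for illustration, and each would be exposed through your RPC
// framework of choice.
interface AuthService {
    String login(String user, String password);      // returns a session token
}

interface InventoryService {
    void grantItem(String playerId, String itemId);
}

interface CombatService {
    // This is where independence breaks down: resolving a hit needs weapon
    // stats from InventoryService, so combat can no longer be scaled in
    // isolation and every attack pays an extra network round trip.
    int resolveAttack(String attackerId, String defenderId);
}
```

Login and inventory scale nicely on their own; combat drags inventory along with it, which is the "back at square one" part.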