We're looking into transport/protocol solutions and were about to do various performance tests, so I thought I'd check with the community if they've already done this:

Has anyone done server performance tests for simple echo services as well as serialization/deserialization for various messages sizes comparing EJB3, Thrift, and Protocol Buffers on Linux?

The languages will primarily be Java, C/C++, Python, and PHP.
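For concreteness, the kind of echo-service round trip I'd be measuring can be sketched in plain Java (the class name, message size, and iteration count here are my own arbitrary choices, not from any existing benchmark):

```java
import java.io.*;
import java.net.*;

public class EchoBench {
    // Time 'rounds' request/reply round trips of 'msgSize' bytes against a
    // local echo server; returns average nanoseconds per round trip.
    static long benchmark(int msgSize, int rounds) throws IOException {
        ServerSocket server = new ServerSocket(0);  // ephemeral port
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            } catch (IOException ignored) { }
        });
        echo.start();

        byte[] msg = new byte[msgSize];
        byte[] reply = new byte[msgSize];
        long start;
        try (Socket s = new Socket("localhost", server.getLocalPort())) {
            s.setTcpNoDelay(true);  // don't let Nagle's algorithm skew latency
            InputStream in = s.getInputStream();
            OutputStream out = s.getOutputStream();
            start = System.nanoTime();
            for (int i = 0; i < rounds; i++) {
                out.write(msg);
                out.flush();
                int off = 0;
                while (off < msgSize) {  // read until the full echo is back
                    int n = in.read(reply, off, msgSize - off);
                    if (n == -1) throw new EOFException();
                    off += n;
                }
            }
        }
        long avg = (System.nanoTime() - start) / rounds;
        server.close();
        return avg;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("avg round trip: " + benchmark(512, 5_000) + " ns");
    }
}
```

The real tests would swap the raw byte payload for serialized EJB3/Thrift/protobuf messages of various sizes.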

Update: I'm still very interested in this; if anyone has done any further benchmarks, please let me know. I also found a very interesting benchmark showing compressed JSON performing similarly to, or better than, Thrift/Protocol Buffers, so I'm throwing JSON into this question as well.
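The "compressed JSON" variant is easy to measure on the size axis at least. A minimal sketch, assuming gzip as the compressor and a hand-built JSON string (the JDK has no JSON library; class and field names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class JsonGzipSize {
    // Gzip a byte array and return the compressed length.
    static int gzipSize(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray().length;
    }

    public static void main(String[] args) throws IOException {
        // A batch of similar records: the repeated field names are exactly
        // what makes JSON compress so well.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 100; i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"id\":").append(i)
              .append(",\"name\":\"user").append(i).append("\"}");
        }
        sb.append(']');
        byte[] json = sb.toString().getBytes("UTF-8");
        System.out.println("raw: " + json.length
                + " bytes, gzipped: " + gzipSize(json) + " bytes");
    }
}
```

Throughput is a separate question, of course, since the compressor itself costs CPU.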

+6  A: 

You may be interested in this question: "Biggest differences of Thrift vs Protocol Buffers?"

+1  A: 

One of the things near the top of my "to-do" list for PBs is to port Google's internal Protocol Buffer performance benchmark - it's mostly a case of taking confidential message formats and turning them into entirely bland ones, and then doing the same for the data.

When that's been done, I'd imagine you could build the same messages in Thrift and then compare the performance.

In other words, I don't have the data for you yet - but hopefully in the next couple of weeks...

Jon Skeet
The thrift-protobuf-compare project ( would be a good home for this, if you have done something. It'd be great to see different use cases -- the current one deals with very small messages, which is just one area.
I have a benchmarking framework now, but it's *mostly* aimed at benchmarking different implementations of Protocol Buffers and different messages. See
Jon Skeet
+1  A: 

If raw network performance is the target, then nothing beats IIOP (see RMI/IIOP). Smallest possible footprint -- only binary data, no markup at all. Serialization/deserialization is very fast too.

Since it's IIOP (that is CORBA), almost all languages have bindings.

But I presume the performance is not the only requirement, right?

Vladimir Dyuzhev
Performance is definitely not the only requirement. The other requirements we have a handle on or can evaluate fairly easily; performance is the one I was looking for feedback on.
"Only binary data" doesn't mean it's necessarily the smallest possible footprint. For instance, you can transmit an Int32 as either "just 4 bytes" or with an encoding which reduces the transmission size of small values at the cost of using more data for large values.
Jon Skeet
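To make the trade-off concrete: the variable-length scheme Protocol Buffers uses for `int32`/`uint32` fields is a base-128 "varint" (7 payload bits per byte, high bit set on every byte but the last). A sketch of the encoder (class name is mine):

```java
public class Varint {
    // Encode an unsigned 32-bit value as a protobuf-style base-128 varint.
    // Small values fit in one byte; the full 32-bit range can take five.
    static byte[] encode(int value) {
        byte[] buf = new byte[5];
        int pos = 0;
        while ((value & ~0x7F) != 0) {        // more than 7 bits remain
            buf[pos++] = (byte) ((value & 0x7F) | 0x80);  // continuation bit
            value >>>= 7;                      // unsigned shift
        }
        buf[pos++] = (byte) value;             // final byte, high bit clear
        byte[] out = new byte[pos];
        System.arraycopy(buf, 0, out, 0, pos);
        return out;
    }

    public static void main(String[] args) {
        System.out.println("1   -> " + encode(1).length + " byte(s)");   // 1
        System.out.println("300 -> " + encode(300).length + " byte(s)"); // 2
        System.out.println("-1  -> " + encode(-1).length + " byte(s)");  // 5
    }
}
```

So a value like 1 costs a single byte instead of four, while a value with the high bit set costs five -- exactly the "smaller for small values, bigger for large ones" trade Jon describes.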
In my experience, it's cheaper to not worry about tight bit-packing protocols and just zlib-stream your data. Those 0's from bits you don't need compress great (assuming you zero-init the bufs). This usually beats manual bit-packing and is a ton easier to debug. Assuming zlib is an option, anyway.
Scott Bilas
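The "zeros compress great" point is easy to demonstrate with the JDK's built-in zlib binding, `java.util.zip.Deflater`. A sketch (buffer sizes are arbitrary):

```java
import java.util.zip.Deflater;

public class ZeroCompress {
    // Deflate a buffer and return the compressed size. Assumes the result
    // fits in one output buffer, which holds easily for zero-heavy input.
    static int compressedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length + 64];
        int n = deflater.deflate(out);
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        // Zero-initialized 64 KB buffer with a little real data up front,
        // standing in for a loosely packed struct full of unused fields.
        byte[] buf = new byte[64 * 1024];
        for (int i = 0; i < 100; i++) buf[i] = (byte) i;
        System.out.println(buf.length + " -> " + compressedSize(buf) + " bytes");
    }
}
```

The long runs of zeros collapse to almost nothing, which is why the naive layout plus a zlib stream can beat hand-tuned bit-packing on the wire.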
+7  A: 

I'm in the process of writing some code in an open source project named thrift-protobuf-compare, comparing protobuf and Thrift. For now it covers a few serialization aspects, but I intend to cover more. The results (for Thrift and protobuf) are discussed on my blog; I'll add more when I get to it. You may look at the code to compare the APIs, description languages and generated code. I'd be happy to have contributions toward a more rounded comparison.

I've just added an issue to that - you're using the default options for protocol buffers, which mean "optimise for small code size". This has a *huge* impact on performance (but does lead to much smaller code). You should do a comparison with optimize_for = SPEED turned on.
Jon Skeet
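For anyone re-running the comparison, the option Jon mentions is a file-level setting in the `.proto` definition (the message here is illustrative, not from the project):

```proto
option optimize_for = SPEED;   // larger generated code, much faster ser/deser

message Ping {                 // illustrative message
  required int32 id = 1;
  optional string payload = 2;
}
```

With `CODE_SIZE` (the default Jon refers to), the generated classes fall back to shared reflection-based serialization, which is what hurts the numbers.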
+3  A: 

I did test the performance of PB against a number of other data formats (XML, JSON, default object serialization, Hessian, one proprietary one) and libraries (JAXB, Fast Infoset, hand-written) for data binding tasks (both reading and writing), but Thrift's format(s) were not included. Performance for formats with multiple converters (like XML) had very high variance, from very slow to pretty darn fast. The correlation between the authors' claims and perceived performance was rather weak -- especially so for the packages that made the wildest claims.

For what it is worth, I found PB performance to be a bit overhyped (usually not by its authors, but by others who only know who wrote it). With default settings it did not beat the fastest textual XML alternative. In optimized mode (why is that not the default?) it was a bit faster, comparable with the fastest JSON package. Hessian was rather fast, and so was textual JSON. The proprietary binary format (no name here, it was company-internal) was the slowest. Java object serialization was fast for larger messages, less so for small objects (i.e. high fixed per-operation overhead). With PB the message size was compact, but given all the trade-offs you have to make (the data is not self-descriptive: if you lose the schema, you lose the data; there are indexes and value types, of course, but from those you'd have to reverse-engineer your way back to field names), I personally would only choose it for specific use cases: a size-sensitive, closely coupled system where the interface/format never (or very, very rarely) changes.

My opinion on this is that (a) the implementation often matters more than the specification (of the data format), and (b) end-to-end, the differences between best-of-breed implementations (for different formats) are usually not big enough to dictate the choice. That is, you may be better off choosing the format+API/lib/framework you like using most (or that has the best tool support), finding the best implementation, and seeing whether it works fast enough. If (and only if!) it doesn't, consider the next best alternative.

ps. Not sure what EJB3 would mean here. Maybe just plain old Java serialization?
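If it is plain Java serialization, the fixed per-operation overhead mentioned above is easy to see directly: even a trivial object carries a stream header and a full class descriptor. A sketch (class names are mine):

```java
import java.io.*;

public class SerSize {
    // Serialize one object with java.io serialization and return the byte
    // count, to expose the fixed stream/class-descriptor overhead.
    static int serializedSize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray().length;
    }

    static class Point implements Serializable {  // 8 bytes of actual payload
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Integer: " + serializedSize(42) + " bytes");
        System.out.println("Point:   " + serializedSize(new Point(1, 2)) + " bytes");
    }
}
```

The payload is a handful of bytes; the rest is metadata that gets amortized over large messages but dominates small ones.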

Perhaps you could post the results in a blog post? I'd certainly be interested in seeing the details, particularly around the XML testing.
Ok. The core of the thing lives under the "StaxBind" module in the Woodstox ( repository at Codehaus; that's just for convenience -- nothing Woodstox-specific. I will try to get the results published; it's frustrating if no one can reproduce them.
+14  A: 

Latest comparison available here at the thrift-protobuf-compare project wiki. It includes many other serialization libraries.

Eishay Smith