views:

92

answers:

6

I'm about to design a client application; the server side hasn't been designed yet either.

I need to decide on the communication protocol.

The requirements are:

  • fast, compact
  • supports binary file transfer both ways
  • server is probably PHP, client .NET

So far I have considered these:

  • custom XML over HTTP - I've done this in the past, but it's not very suitable for file transfer, otherwise OK
  • SOAP - no experience, I read it's very verbose and complicated
  • Google protobuf - read a lot of good things about this
  • pure HTTP - using GET and POST - this may not extend well.

I'm open to suggestions. So far I'm leaning towards protobuf.

Edit: More info

  • The server will be data heavy, with a thin application layer, possibly only the database itself. Millions to a billion records, search-intensive (fulltext and custom searches).
  • Expected client application count is in the hundreds, but may grow.
  • 2 types of messages from server to client: small (under 100KB) but very common, and large (file downloads, roughly under 10MB).
  • The client sends back only the smaller messages, but with more information.
  • I'd like the information structured, to provide meta information both ways.
  • I'd like it extensible for future changes.
  • Encryption is mandatory (considering HTTPS as the transport layer).
  • Latency is crucial. I'd like to achieve "standard" web latencies (under 200ms would be good) for the small messages. This really depends on many things.
A: 

I would use simple HTTP, TCP (with sockets) or FTP, unless you really need some more sophisticated functionality.

Hbas
I sort of need some meta information, so FTP is out of the question. TCP is a transport protocol; I would need to roll my own message definition on top of it.
Kugel
I accepted this, because I decided to start as simple as possible. I use HTTPS GET and POST, with XML where needed.
Kugel
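The accepted approach, small structured messages as XML over HTTPS POST, could be sketched like this with only the Python standard library. The endpoint URL and message fields are hypothetical placeholders, not anything from the actual application:

```python
import urllib.request
import xml.etree.ElementTree as ET

def build_request_xml(query: str, max_results: int) -> bytes:
    """Build a small structured request message as XML (hypothetical fields)."""
    root = ET.Element("request")
    ET.SubElement(root, "query").text = query
    ET.SubElement(root, "maxResults").text = str(max_results)
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

body = build_request_xml("fulltext search terms", 50)

# POSTing over HTTPS gives transport-level encryption for free.
req = urllib.request.Request(
    "https://example.com/api/search",   # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/xml"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # run this against a real server
```

The same shape works for the reply: the server returns an XML body, and both sides stay extensible because unknown elements can simply be ignored.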
A: 

Well protocol buffers certainly work well for us :)

You may well want to layer them over HTTP though. Obviously you'll need some sort of transport layer between TCP/IP and protocol buffers themselves - protobufs don't define anything other than serialized messages. HTTP is generally well understood, goes through firewalls easily, and has both client and server support on multiple platforms.
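As a rough illustration of that layering, the client can POST the serialized message as an opaque HTTP body. The payload bytes below merely stand in for the output of a real `SerializeToString()` call, and the URL is a placeholder:

```python
import urllib.request

# Stand-in for message.SerializeToString(); HTTP treats it as opaque bytes.
payload = b"\x08\x96\x01"

req = urllib.request.Request(
    "https://example.com/api/rpc",               # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/x-protobuf"},
    method="POST",
)
# The response body would then be fed to ResponseMessage.ParseFromString().
# resp_bytes = urllib.request.urlopen(req).read()
```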

One concern: I'm not sure what sort of protocol buffer support there is in PHP. There's a beta library here, but that's all I could see listed in the 3rd party add-ons page.

Jon Skeet
A: 

Keep in mind that when using HTTP, the server cannot send information to the client without a request, so you'll probably have to use a technique similar to long-polling.
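A minimal long-polling loop might look like this sketch (standard-library Python, hypothetical URL, no retry or backoff handling):

```python
import urllib.error
import urllib.request

def poll_once(url: str, timeout: float = 30.0):
    """Issue one long-poll request; return the payload, or None on timeout.

    The server is expected to hold the request open until it has data,
    answering 204 (No Content) if its own timeout elapses first.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 204:
                return None
            return resp.read()
    except (TimeoutError, urllib.error.URLError):
        return None

# Client loop: re-poll immediately after each response.
# while True:
#     data = poll_once("https://example.com/api/events")  # hypothetical URL
#     if data is not None:
#         handle(data)                                     # hypothetical handler
```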

I wanted to add this as a comment but I'm not able to do it.

b2238488
That is fine, that's why I mentioned server-client.
Kugel
A: 

I think that Protocol Buffers sounds like a great choice. That is pretty much what it was designed for.

The .NET port is written by none other than Jon Skeet:

http://code.google.com/p/protobuf-csharp-port/

I am not sure how great the support in PHP is though. That could be a problem.

Justin
A: 

I'd also recommend protocol buffers, over TCP. HTTP should be avoided unless you go with a higher-level abstraction that implicitly uses HTTP.

The .NET port of protocol buffers AFAIK does not support asynchronous reading of the protobufs, so I'd recommend using asynchronous sockets with length-prefixed protobufs.
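Length-prefixed framing can be sketched in a few lines: each message is preceded by a 4-byte big-endian length, so the receiver knows where one serialized message ends and the next begins on the TCP stream. The payload bytes below are stand-ins for real serialized protobuf output:

```python
import io
import struct

def write_frame(stream, payload: bytes) -> None:
    """Write one length-prefixed message to a binary stream."""
    stream.write(struct.pack(">I", len(payload)))  # 4-byte big-endian length
    stream.write(payload)

def read_frame(stream) -> bytes:
    """Read one length-prefixed message back off the stream."""
    header = stream.read(4)
    if len(header) < 4:
        raise EOFError("stream closed mid-header")
    (length,) = struct.unpack(">I", header)
    payload = stream.read(length)
    if len(payload) < length:
        raise EOFError("stream closed mid-message")
    return payload

# Round-trip over an in-memory stream; a real client would use a socket.
buf = io.BytesIO()
write_frame(buf, b"\x08\x96\x01")   # stand-in serialized messages
write_frame(buf, b"\x12\x03abc")
buf.seek(0)
assert read_frame(buf) == b"\x08\x96\x01"
assert read_frame(buf) == b"\x12\x03abc"
```

The prefix is what makes asynchronous reading practical: the receiver always knows exactly how many more bytes to wait for.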

I've written several recommendations for protocol design on my .NET TCP/IP FAQ, including XML over TCP/IP (I do agree that XML is not a good fit for your needs, though).

Stephen Cleary
A: 

I wouldn't use HTTP for one of your common message types: the "large" (but <10MB) binaries. It might prove too slow for your application, though testing will show whether anything HTTP-based is acceptable for your use cases. So maybe use FTP for the large binary messages, and something else for the small messages. Yes, I am recommending that you use two communication protocols here.

Chris O