+1  A: 

It generally isn't done, because:

a.) The client may need to know which version of the protocol the server uses.

b.) You won't even know whether you are really talking to a server that supports the protocol.

In short, it often makes sense to know what you're talking to before spewing data at it.
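As a rough sketch of that idea, here is a client that reads a versioned greeting (loosely modeled on SMTP's "220" banner) before sending anything. The `DITP 1.1 READY` banner format and the version set are illustrative assumptions, not part of any real protocol:

```python
import socket
import threading

# Versions this client is willing to talk to (hypothetical values).
SUPPORTED_VERSIONS = {"1.0", "1.1"}

def server(listener):
    conn, _ = listener.accept()
    with conn:
        # Announce protocol name and version before the client sends anything.
        conn.sendall(b"DITP 1.1 READY\r\n")
        request = conn.recv(1024)
        conn.sendall(b"OK\r\n" if request else b"ERR\r\n")

def client(addr):
    with socket.create_connection(addr) as s:
        banner = s.makefile("rb").readline().decode().split()
        # Refuse to "spew data" until we know what we are talking to.
        if len(banner) < 2 or banner[0] != "DITP" or banner[1] not in SUPPORTED_VERSIONS:
            raise RuntimeError("not a server we can talk to")
        s.sendall(b"GET resource\r\n")
        return s.recv(1024)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
t = threading.Thread(target=server, args=(listener,))
t.start()
reply = client(listener.getsockname())
t.join()
listener.close()
print(reply)  # b'OK\r\n'
```

The cost of this safety is exactly the extra round trip the question is about: the client waits for the banner before its first useful byte goes out.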

SpliFF
chmike
Thinking about it again: if the service locator holds bogus information pointing at an innocent victim, the client will "spew" data at it before noticing the deception. That could break the victim service, which is not expecting this type of data.
chmike
I decided to accept SpliFF's answer because it is the most explicit. Unwind's answer would deserve it as well, and the other answers were enlightening too, but I can accept only one. Thanks for your help.
chmike
+1  A: 

I wonder if this design might be considered a violation of Postel's Law, since it assumes things about the receiver, and thereby about what is legal to send, before knowing.

I would at least expect this principle to be the reason most protocols are designed to spend a round trip learning more about the other end before sending data that might not be understood at all.

unwind
You are right. Thanks for the reference to this good Wikipedia page. As explained in my comment to SpliFF, what may justify the difference from common protocols is that the client will "know" what it is expected to find on the other end of the connection.
chmike
A: 

If delay is your main concern, you may want to look at LPT, a protocol that is specifically designed for connections with extremely long round-trip times.

When designing a new transport protocol, you should pay attention to congestion control and to what firewalls will do when they encounter packets of an unknown protocol.

jgre
The DITP protocol is not a transport protocol; the T stands for transfer protocol, as in FTP or SMTP. It uses an underlying connection-oriented transport layer, which can be TCP, LPT, or any other protocol best suited to the usage context. Congestion aspects are indeed important, but only when multiplexing transactions over the same connection.
chmike
A: 

The design goals of protocols like HTTP and SMTP were not speed, but rather reliability under flaky physical network conditions and meagre bandwidth. These conditions have largely changed now, with better hardware.

Your design should be looked at in light of the network conditions you are bound to encounter, and the reliability, latency, and bandwidth requirements of your intended application.

Indeera
A: 
  1. In theory, this is correct.
  2. Common protocols don't use this, because it's inefficient. The client would have to split the data streams, so they would have to be distinguishable. The server would have to take care of this, for example by packing each data piece in a container (XML, JSON, BitTorrent-like, you name it), and the container is just unnecessary overhead that slows down the transfer.

Why wouldn't one just open several TCP sockets and send separate requests over those multiple connections? No overhead there! In fact, this is already being done, e.g. by some modern web browsers. Use Wireshark or tcpdump to inspect the packets and see for yourself.
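To make the "container overhead" point concrete, here is a minimal length-prefixed framing scheme for multiplexing two logical streams over one byte stream. The 5-byte header (stream id plus payload length) is the per-chunk overhead being described; the field sizes are illustrative, not taken from any real protocol:

```python
import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    # 1-byte stream id + 4-byte big-endian length, then the payload.
    return struct.pack("!BI", stream_id, len(payload)) + payload

def deframe(buf: bytes):
    # Split a concatenation of frames back into (stream_id, payload) pairs.
    chunks = []
    while buf:
        stream_id, length = struct.unpack("!BI", buf[:5])
        chunks.append((stream_id, buf[5:5 + length]))
        buf = buf[5 + length:]
    return chunks

wire = frame(1, b"first response") + frame(2, b"second response")
print(deframe(wire))  # [(1, b'first response'), (2, b'second response')]
```

With separate TCP connections per request, none of this framing is needed; the trade-off is the per-connection setup cost discussed next.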

There's more to it. A TCP socket takes time to set up (SYN, some time, SYN+ACK, some time, ACK...). Someone thought it was a waste to tear down the connection after each request, so some modern HTTP servers and clients use Connection: keep-alive to indicate that they wish to reuse the connection.
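Connection reuse can be demonstrated with Python's standard library alone: HTTP/1.1 keeps the connection alive by default, so two requests below share a single TCP connection (one three-way handshake). The tiny local server exists only to make the sketch self-contained:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables keep-alive by default
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # Content-Length lets the client know where the body ends,
        # so the connection can be reused for the next request.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests over one HTTPConnection: only one TCP handshake.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()   # drain the body before reusing the socket
    statuses.append(resp.status)
conn.close()
server.shutdown()
print(statuses)  # [200, 200]
```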

I am sorry, but while I think your ideas are great, you can already find them in RFCs. Keep thinking though; I am sure one day you'll invent something brilliant. See, e.g., here for an optimized BitTorrent client.

Reef
Sorry, I don't understand your second point. The protocol is for a distributed information system, so the client would connect to many different servers across the world, like the World Wide Web. Multiple requests can be sent through the same connection. My goal is not to invent something brilliant; it is to provide a new tool with new properties, to support new types of applications as well as to optimize and extend current applications.
chmike