- In theory, this is correct.
- Common protocols don't use this because it's inefficient. The client would have to split the data streams apart, so they would have to be distinguishable; the server would have to take care of that, for example by packing each piece of data in a container (XML, JSON, BitTorrent-like, you name it). And the container is just unnecessary data overhead, slowing down the transfer.
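To make the overhead concrete, here is a minimal sketch of that idea: two streams multiplexed over one connection by wrapping each chunk in a container. The JSON envelope format (`stream` id plus `data` field) is purely hypothetical, just to show that the container bytes are wasted bandwidth.

```python
import json

def frame(stream_id, payload):
    """Wrap a payload chunk in a hypothetical JSON envelope so the
    receiver can tell which stream it belongs to."""
    return json.dumps({"stream": stream_id, "data": payload}).encode()

chunk = "x" * 100                      # 100 bytes of actual data
framed = frame(1, chunk)               # same data, plus container bytes
overhead = len(framed) - len(chunk)
print(f"payload: {len(chunk)} bytes, on the wire: {len(framed)} bytes "
      f"(+{overhead} bytes of pure container overhead)")
```

Every chunk pays that envelope tax, on top of the CPU cost of serializing and parsing it on both ends.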
Why wouldn't one just open several TCP sockets and send separate requests over those multiple connections? No container overhead there! Oh, this is already being done, e.g. by some modern web browsers. Use Wireshark or tcpdump to inspect the packets and see for yourself.
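The multiple-connections approach can be sketched locally with Python's standard library. This is a toy, not what a browser actually does: a tiny one-shot server on loopback, and two client threads each opening their own TCP socket and sending their own "request" in parallel.

```python
import socket
import threading

def serve(listener):
    # Toy server: accept two connections, send a canned response on each.
    for _ in range(2):
        conn, _ = listener.accept()
        conn.recv(1024)                          # read the "request"
        conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                  # let the OS pick a free port
listener.listen(2)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

def fetch(results, i):
    # Each request gets its very own TCP connection.
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"GET / HTTP/1.1\r\n\r\n")
    results[i] = s.recv(1024)
    s.close()

results = [None, None]
threads = [threading.Thread(target=fetch, args=(results, i)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Run it under tcpdump on loopback and you will see two separate three-way handshakes, one per connection.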
There's more to it. A TCP socket takes time to set up (SYN, some time, SYN+ACK, some time, ACK...). Someone decided it was a waste to tear the connection down after each request, so modern HTTP servers and clients use Connection: keep-alive to indicate that they wish to reuse the connection.
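You can see keep-alive in action with Python's standard library alone (a local toy server, not anything from the question): `http.server` with `protocol_version = "HTTP/1.1"` keeps connections open by default, and `http.client.HTTPConnection` reuses its socket, so both requests below ride over a single TCP connection with a single handshake.

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # HTTP/1.1 => keep-alive by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
for _ in range(2):                       # two requests, one TCP connection
    conn.request("GET", "/")
    resp = conn.getresponse()
    data = resp.read()                   # must drain the body before reuse
    print(resp.status, data)
conn.close()
server.shutdown()
```

Capture this with Wireshark and you'll find one SYN/SYN+ACK/ACK exchange followed by two request/response pairs, instead of two full handshakes.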
I am sorry, but while I think your ideas are great, you can already find them in RFCs. Keep thinking though; I am sure one day you'll invent something brilliant. See e.g. here for an optimized BitTorrent client.