For the first time, I'm trying to implement a network protocol over TCP/IP. I've designed one, but I'm not sure whether it's efficient.
Design
So here is my idea: after the client opens a TCP/IP connection to the server, every time it wants to make a request, it first sends the size of the request, followed by a separator character (a newline or a space), and after that the actual request (the same principle is used in HTTP, and I think this idea is used in most cases).
For example, if the client wants to send GET ASD, it will actually send 7 GET ASD (assuming that a space is the separator).
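To make the framing concrete, here is a minimal C sketch of the client side, assuming a newline as the separator and an already-connected socket descriptor `fd` (the names are illustrative, not part of the design above):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Build "<length><separator><payload>" and send it in one write.
 * A real client would loop on write() to handle partial writes. */
static int send_request(int fd, const char *payload)
{
    char frame[1024];
    int n = snprintf(frame, sizeof(frame), "%zu\n%s", strlen(payload), payload);
    if (n < 0 || (size_t)n >= sizeof(frame))
        return -1;                              /* payload too big for this sketch */
    return write(fd, frame, (size_t)n) == (ssize_t)n ? 0 : -1;
}
```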
On the server side, the server keeps a buffer for every client in which it saves incoming requests. Whenever it receives a new chunk of characters from a client, the server appends it to the corresponding client's buffer. It then tries to read the content length of the request (in this example, 7) and checks whether the rest of the buffer is at least that long. If it is, the server extracts the actual content of the request, processes it, and removes it from the buffer.
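A rough sketch of that extraction step might look like the following; the names (`try_extract`, `handle`) are illustrative, and a real parser would also distinguish a malformed length prefix from an incomplete one instead of silently waiting:

```c
#include <stddef.h>

/* Try to pull one complete request off the front of the buffer.
 * Returns how many bytes the caller should remove, or 0 if no
 * complete request is available yet. */
static size_t try_extract(const char *buf, size_t len,
                          void (*handle)(const char *req, size_t n))
{
    size_t i = 0, body = 0;
    while (i < len && buf[i] != '\n') {         /* parse the decimal length prefix */
        if (buf[i] < '0' || buf[i] > '9')
            return 0;                           /* malformed prefix (treated as "wait" here) */
        body = body * 10 + (size_t)(buf[i] - '0');
        i++;
    }
    if (i == len)
        return 0;                               /* separator not received yet */

    size_t header = i + 1;                      /* prefix plus separator */
    if (len - header < body)
        return 0;                               /* body not fully received yet */

    handle(buf + header, body);                 /* process the complete request */
    return header + body;                       /* caller removes this many bytes */
}
```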
Implementation
That was all about the protocol design; now some notes about the actual implementation. I think the main problem here is implementing and managing the buffers effectively.
I think a buffer of size 2 * MAX_SIZE_OF_ONE_REQUEST will be enough to serve one client, because a chunk received by the server can simultaneously contain the end of the first request and the beginning of the second one. This is my assumption; if I'm wrong and we need more or less space, please let me know why.
I think there are two ways of storing requests in the buffer until they are served:
Whenever the server receives a new chunk of characters, it appends it to the right side of the buffer. As soon as the buffer contains a complete request, the server processes it and moves the rest of the data back to the beginning of the buffer space (a sketch of this follows the list).
A cyclic (ring) buffer, which doesn't move the remaining data back to the beginning after processing a request.
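For option 1, a minimal sketch of the compacting step could look like this, assuming an illustrative per-client struct and a made-up MAX_REQUEST constant standing in for MAX_SIZE_OF_ONE_REQUEST:

```c
#include <string.h>

#define MAX_REQUEST 4096                 /* assumed per-request size limit */

struct client_buf {
    char   data[2 * MAX_REQUEST];        /* the 2x sizing discussed above */
    size_t used;                         /* bytes currently held */
};

/* Drop the first `consumed` bytes and shift the remainder to the front. */
static void compact(struct client_buf *b, size_t consumed)
{
    memmove(b->data, b->data + consumed, b->used - consumed);
    b->used -= consumed;
}
```

The memmove-based version is simpler to get right; the ring-buffer version avoids copying but makes requests that wrap around the end of the array harder to parse.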
These are my thoughts about implementing the buffers with async I/O in mind (the server will use epoll/kqueue/select to receive requests from clients). I think if the server didn't use async I/O for communication with clients, implementing the buffer would be much, much simpler.
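Under those assumptions, the read path with epoll could be sketched roughly as below (Linux-specific, building on the try_extract and compact sketches above; handle_request is an assumed application callback, and the accept path and error handling are omitted):

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Assumed application callback for one complete request. */
void handle_request(const char *req, size_t n);

/* Called when epoll reports the client's fd as readable. */
static void on_readable(int epfd, int fd, struct client_buf *cb)
{
    ssize_t n = read(fd, cb->data + cb->used, sizeof(cb->data) - cb->used);
    if (n <= 0) {                        /* EOF or error: drop the client */
        epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
        close(fd);
        return;
    }
    cb->used += (size_t)n;

    size_t consumed;
    while ((consumed = try_extract(cb->data, cb->used, handle_request)) > 0)
        compact(cb, consumed);           /* drain every complete request in the buffer */
}
```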
Also, I haven't decided how the server should behave when it receives a malformed request. Should it close the connection with the client?
Maybe I've written too much, but I'm really interested in this topic and want to learn as much as possible. I think there are many people like me, so any real-world problems about this topic and best practices for solving them would be very helpful.