Hi,

The .NET method Socket.AcceptAsync() has a neat feature: the ability to specify an initial buffer that will receive some amount of data immediately after the new connection has been accepted. This could be useful for exchanging a fixed-length signature, protocol header, session ID or similar handshake data (and in fact, that is what the MSDN article suggests).

See the MSDN article.

Now, this page says that if this feature is used, the runtime will use part of the user-supplied buffer for internal purposes. The amount used in this way is said to depend on the address family, with a minimum of 288 bytes consumed.

If I understand this correctly, at least 288 bytes will always be consumed, but under certain circumstances (perhaps for IPv6?) this number might increase, which means that 1) my buffer may not actually be sufficient if it is only a couple of bytes above the minimum, and 2) I can't know in advance how much buffer space will be left for my user data.
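To make this concrete, here is roughly how I am trying to use it, assuming the internally consumed bytes come out of the buffer passed to SetBuffer and budgeting only for the documented 288-byte minimum (the port and header size are just placeholders):

    using System;
    using System.Net;
    using System.Net.Sockets;

    class Listener
    {
        // Minimum number of bytes the runtime is documented to consume internally.
        const int InternalReserve = 288;
        // Hypothetical fixed-length header the client is expected to send first.
        const int HeaderSize = 16;

        static void Main()
        {
            var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
            listener.Listen(10);

            var args = new SocketAsyncEventArgs();
            // Budget for the header plus whatever the runtime takes for itself.
            args.SetBuffer(new byte[HeaderSize + InternalReserve], 0, HeaderSize + InternalReserve);
            args.Completed += OnAccept;

            if (!listener.AcceptAsync(args))
                OnAccept(listener, args); // completed synchronously, event is not raised

            Console.ReadLine();
        }

        static void OnAccept(object sender, SocketAsyncEventArgs e)
        {
            // BytesTransferred reports the initial data received into the buffer.
            Console.WriteLine("Accepted {0}, received {1} initial bytes",
                e.AcceptSocket.RemoteEndPoint, e.BytesTransferred);
        }
    }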

My question is thus: Is there a way to determine this number at runtime? And does anybody know whether it is OS-specific or implementation-specific (say, if I'm using Mono on Unix instead of Microsoft .NET on Windows)?

If not, this seems like a feature to stay away from.

Thanks a bunch in advance,
Siberion

Edit: I thought about this some more, and it occurred to me that the stated minimum buffer size of 288 bytes might actually be the maximum amount consumed in all scenarios. However, my understanding of the MSDN article is that the asynchronous call with a user buffer only completes successfully once the buffer has been filled, and otherwise times out and fails. So in order to receive a fixed-length sequence of bytes (after which the connecting peer will wait for a response), you would still need to know exactly how much of the buffer the runtime will consume internally.
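Unless someone can clarify the exact amount, the workaround I am leaning towards is to not pass an initial buffer at all and instead read the fixed-length header with an ordinary receive loop once the accept has completed. A rough sketch of what I mean (the helper name and header size are mine):

    using System.Net.Sockets;

    static class HeaderReader
    {
        // Reads exactly headerSize bytes from an already-accepted socket,
        // so no internal buffer consumption has to be accounted for.
        public static byte[] ReceiveHeader(Socket client, int headerSize)
        {
            var header = new byte[headerSize];
            int received = 0;
            while (received < headerSize)
            {
                int n = client.Receive(header, received, headerSize - received, SocketFlags.None);
                if (n == 0)
                    throw new SocketException((int)SocketError.ConnectionReset); // peer closed early
                received += n;
            }
            return header;
        }
    }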