I have stumbled on a peculiar difference between Solaris 10 sockets and other Linux/*NIX sockets. Example:
int temp1, rc;
temp1 = 16*1024*1024; /* from config: a value greater than the system limit */
rc = setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &temp1, sizeof(temp1));
The code above yields rc == 0 on all systems - Linux, HP-UX and AIX - except Solaris 10. The other systems silently truncate the supplied value to the allowed maximum; Solaris 10 rightfully fails with errno == ENOBUFS, indicating the configuration error.
After some deliberation, it was decided that, since this particular application is critical, instead of failing it should continue working as gracefully as possible:
- Produce a warning in the log file about the configuration mismatch (easy: add a check with getsockopt()), and
- Try to set the maximum buffer size, to get whatever performance is possible.
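The warning part can be sketched roughly as below. This is a minimal illustration, not production code; the helper name set_rcvbuf_with_warning is hypothetical, and note that on Linux getsockopt() reports twice the requested value because the kernel accounts for bookkeeping overhead, so the comparison is only approximate:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical helper: request a receive buffer of `wanted` bytes and
 * warn when the kernel granted less than was asked for.
 * Returns the granted size, or -1 on error. */
static int set_rcvbuf_with_warning(int sd, int wanted)
{
    int granted = 0;
    socklen_t len = sizeof(granted);

    /* Ignore failure here: on Solaris 10 an oversized value fails with
     * ENOBUFS, on Linux/HP-UX/AIX it is silently capped. */
    (void)setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted));

    if (getsockopt(sd, SOL_SOCKET, SO_RCVBUF, &granted, &len) == -1)
        return -1;

    /* Caveat: Linux reports roughly twice the requested value, so this
     * comparison is conservative rather than exact. */
    if (granted < wanted)
        fprintf(stderr, "warning: SO_RCVBUF capped at %d (wanted %d)\n",
                granted, wanted);
    return granted;
}
```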
Item #2 is where I am stuck. On all non-Solaris systems I do not need to do anything: the sockets already handle it for me.
But on Solaris I am at a loss as to what to do. I have implemented a trivial binary search around the (setsockopt(...) == -1 && errno == ENOBUFS) condition to find the maximum buffer size, but it looks ugly. (Nor do I have a context in which to cache the result of the lookup: the search would have to be repeated for every connection with such a bad configuration. Global variables are problematic, since the code lives inside a shared library that is used from a multi-threaded application.)
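For reference, the binary search I mean looks roughly like this. It is a sketch under the assumptions that setsockopt() fails with ENOBUFS when (and only when) the value is too large, as Solaris 10 does, and that `lo` is a size known to be acceptable; the function name find_max_rcvbuf is my own:

```c
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch: find the largest SO_RCVBUF value in [lo, hi] the OS accepts,
 * assuming setsockopt() fails with ENOBUFS for oversized values
 * (Solaris 10 behaviour) and that `lo` itself is acceptable.
 * Returns the largest accepted value, or -1 on an unrelated error. */
static int find_max_rcvbuf(int sd, int lo, int hi)
{
    while (lo < hi) {
        /* round up so the loop terminates when hi == lo + 1 */
        int mid = lo + (hi - lo + 1) / 2;

        if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &mid, sizeof(mid)) == 0)
            lo = mid;        /* mid was accepted: search upward */
        else if (errno == ENOBUFS)
            hi = mid - 1;    /* mid too large: search downward */
        else
            return -1;       /* some unrelated failure */
    }
    return lo;
}
```

On the truncating systems setsockopt() never fails, so this degenerates into a handful of successful calls and simply returns `hi`; the cost only matters on Solaris, and it is paid again for every connection, which is exactly why it feels so ugly.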
Is there any better way on Solaris 10 to detect the maximum allowed buffer size through the sockets API?
Is there any way to tell Solaris' sockets API to truncate the value the way the other systems do?