If you have vsnprintf available to you, I would make use of that. It prevents buffer overflow since you provide the buffer size, and it returns the size actually needed.
So allocate your 1K buffer, then attempt to use vsnprintf to write into that buffer, limiting the size. If the size returned was less than or equal to your buffer size, then it's worked and you can just use the buffer.
If the size returned was greater than the buffer size, then call realloc to get a bigger buffer and try again. Provided the data hasn't changed in the meantime (e.g., threading issues), the second attempt will work fine since you already know how big it needs to be.
This is relatively efficient provided you choose your default buffer size carefully. If the vast majority of your outputs fit within that limit, very few reallocations will need to take place (see below for a possible optimisation).
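As a rough sketch of that allocate/check/retry sequence (assuming C99 vsnprintf semantics, where the return value is the full length that would have been written; the function name make_message and the 1K starting size are just illustrative):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

char *make_message(const char *format, ...)
{
    size_t size = 1024;                 // Default buffer size.
    char *buffer = malloc(size);
    if (buffer == NULL)
        return NULL;

    va_list args;
    va_start(args, format);
    int need = vsnprintf(buffer, size, format, args);
    va_end(args);

    if (need < 0) {                     // Encoding error from vsnprintf.
        free(buffer);
        return NULL;
    }

    if ((size_t)need >= size) {         // Output was truncated: grow and retry.
        char *bigger = realloc(buffer, (size_t)need + 1);
        if (bigger == NULL) {
            free(buffer);
            return NULL;
        }
        buffer = bigger;
        va_start(args, format);
        vsnprintf(buffer, (size_t)need + 1, format, args);
        va_end(args);
    }

    return buffer;                      // Caller frees.
}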
If you don't have a vsnprintf-type function, a trick we've used before is to open a file handle to /dev/null and use that for the same purpose (checking the size before outputting to a buffer). Use vfprintf to that file handle to get the size (the output goes to the bit bucket), then allocate enough space based on the return value, and vsprintf to that buffer. Again, the buffer will be large enough since you've already figured out the needed size.
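A rough sketch of that fallback, assuming a POSIX-ish system where /dev/null can be opened (the function name format_message is just illustrative, and in real code you'd probably open the handle once and reuse it):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

char *format_message(const char *format, ...)
{
    FILE *devnull = fopen("/dev/null", "w");
    if (devnull == NULL)
        return NULL;

    va_list args;
    va_start(args, format);
    int need = vfprintf(devnull, format, args);   // Output is discarded; we only want the length.
    va_end(args);
    fclose(devnull);

    if (need < 0)
        return NULL;

    char *buffer = malloc((size_t)need + 1);      // +1 for the terminating NUL.
    if (buffer == NULL)
        return NULL;

    va_start(args, format);
    vsprintf(buffer, format, args);               // Known to fit, since we measured it.
    va_end(args);

    return buffer;                                 // Caller frees.
}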
An optimisation to the methods above would be to use a local buffer, rather than an allocated buffer, for the 1K chunk. This avoids having to use malloc in those situations where it's unnecessary, assuming your stack can handle it.
In other words, use something like:
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

int test(const char *format, ...)
{
    char buff1k[1024];
    char *buffer = buff1k;      // Default to the local buffer, no malloc.
    va_list arguments;

    va_start(arguments, format);
    int need = 1 + vsnprintf(buffer, sizeof(buff1k), format, arguments);
    va_end(arguments);

    if (need > (int)sizeof(buff1k)) {
        buffer = malloc(need);  // Now you have a big-enough buffer...
        if (buffer == NULL)
            return -1;
        va_start(arguments, format);
        vsnprintf(buffer, need, format, arguments);  // ...so format into there.
        va_end(arguments);
    }

    // Use the string at buffer for whatever you want.

    // Only free the buffer if it was allocated.
    if (buffer != buff1k)
        free(buffer);
    return need - 1;            // Length of the formatted string, like printf.
}