
I'm working on some code that generates a lot of

ignoring return value of ‘size_t fwrite(const void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result

warnings when compiled with g++, and I'm wondering about the best programming pattern to actually record and handle the return values of a large number of separate, sequential fwrite calls (i.e. not the same fwrite in a loop).

Let's say that the code looks like this at the moment:

fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...

I'm currently thinking about something like this, but I may have difficulty cleaning up the file pointer:

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) return someerrorcode;
// ... more code ...
if (fwrite (&foo, sizeof (foo), 1, fp) != 1) return someerrorcode;
// ... more code ...

I think that approach is clearly better than nesting, which would get too crazy too quickly:

if (fwrite (&blah, sizeof (blah), 1, fp) == 1) {
   // ... more code ...
   if (fwrite (&foo, sizeof (foo), 1, fp) == 1) {
      // ... more code ...
   }
}

Surely there is already an established best-practice pattern for this sort of thing, though?

Of course, as I am mainly looking into this to get rid of the compiler warnings, I could just assign the return value to a dummy variable and ignore it, but I'd like to try doing it the right way first.

dummy = fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
dummy = fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...

Update: I've removed the C++ tag as this code is really just C being compiled with g++, so C-based solutions are needed to fit with the rest of the code base.

A: 

You can remove the warnings like this:

(void) fwrite (&blah, sizeof (blah), 1, fp);

Addressing your main question, if any of the fwrite() calls fail, I'd guess that it doesn't make sense to continue, as the output is presumably corrupt. In that case, and as you tagged this C++, I'd throw an exception.

anon
A: 

Nesting is bad, and multiple returns are not good either.

I used to use the following pattern:

#define SUCCESS (0)
#define FAIL    (-1)
int ret = SUCCESS;

if (!fwrite(...))
    ret = FAIL;
if (SUCCESS == ret) {
    do_something;
    do_something_more;
    if (!fwrite(...))
        ret = FAIL;
}
if (SUCCESS == ret)
    do_something;

return ret;

I know it looks ugly, but it has a single return point, no excessive nesting, and it is very easy to maintain.

qrdl
“Single return point” is a _very_ subjective rule to code by. It should not be mentioned here.
Bombe
I agree with Bombe. Stating that multiple returns aren't good, as a general fact, isn't kosher.
Greg D
Also, the fwrite return could be non-zero and still indicate failure; you need to check it against the count of items to be written.
David Dean
A single return point allows you to do the cleanup just once. I know cleanup isn't always needed, but it makes sense to develop a good habit. It pays off.
qrdl
@David Dean - it is just an example to illustrate the concept. It is very easy to make proper error handling this way.
qrdl
@David Dean: if you always write 1 item of the size specified, fwrite() returns either 0 or 1.
Jonathan Leffler
A: 

Well... You could create a wrapper function that retries the write if it fails, perhaps up to some maximum number of retries, and returns success/failure:

int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries);
void print_and_exit(const char *message);

Then your main code could be written as

#define RETRIES 5
if(!safe_fwrite(fp, &blah, sizeof blah, RETRIES))
  print_and_exit("Blah writing failed, aborting");
if(!safe_fwrite(fp, &foo, sizeof foo, RETRIES))
  print_and_exit("Foo writing failed, aborting");
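A minimal sketch of what those two helpers might look like. The answer only gives the prototypes, so the retry strategy here (writing byte-wise so a partial write can be resumed on the next attempt) is an assumption, not part of the original suggestion:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical implementation of the prototype above: writes byte-wise so a
 * partial write can be resumed, retrying up to `retries` times.
 * Returns 1 on success, 0 on failure. */
int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries)
{
    const unsigned char *p = data;
    size_t remaining = nbytes;

    while (remaining > 0 && retries-- > 0) {
        size_t written = fwrite(p, 1, remaining, file);
        p += written;
        remaining -= written;
        if (remaining > 0)
            clearerr(file); /* clear the error flag before the next attempt */
    }
    return remaining == 0;
}

void print_and_exit(const char *message)
{
    fprintf(stderr, "%s\n", message);
    exit(EXIT_FAILURE);
}
```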
unwind
A: 

Your first solution looks OK. Usually a goto err; comes in handy, as you may need some common cleanup (such as rewinding to a known position).
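For illustration, that single-cleanup goto err variant might look like the sketch below. The rewind-to-a-saved-position cleanup is only an assumption based on the suggestion above, and write_record and the saved start offset are hypothetical names:

```c
#include <stdio.h>

/* Sketch of the single-cleanup `goto err` pattern: on any failed write,
 * jump to one common cleanup block that rewinds to a known-good position.
 * `write_record` and `start` are illustrative names, not from the answer. */
int write_record(FILE *fp, const void *blah, size_t blah_size,
                 const void *foo, size_t foo_size)
{
    long start = ftell(fp); /* remember a known-good position */
    if (start < 0)
        return -1;

    if (fwrite(blah, blah_size, 1, fp) != 1)
        goto err;
    /* ... more code ... */
    if (fwrite(foo, foo_size, 1, fp) != 1)
        goto err;
    return 0;

err:
    /* common cleanup: discard the partial record */
    fseek(fp, start, SEEK_SET);
    return -1;
}
```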

To make GCC quiet just do:

(void)fwrite (&blah, sizeof (blah), 1, fp);
(void)fwrite (&foo, sizeof (foo), 1, fp);
kmkaplan
A: 

Something like this would work

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) throw SomeException;

If you're worried about pointers being cleaned up you can wrap the pointer in some form of smart pointer before carrying out your fwrite's.

If you don't want to use smart pointers then this will work, but it's messy, so I'd try the smart pointer route first:

if (fwrite (&blah, sizeof (blah), 1, fp) != 1) {
    //cleanup here
    throw SomeException;
}
Glen
A: 

Why don’t you wrap the fwrite into a Writer object of some kind and throw an exception if fwrite() returns an error code? Easy to code, easy to use, easy to manage. IMHO, of course. :)

Bombe
+5  A: 

The poor man's C exception handling based on goto (in fact, the one and only instance of goto NOT being harmful):

int foo() {
    FILE * fp = fopen(...);
    ....

    /* Note: fwrite returns the number of elements written, not bytes! */
    if (fwrite (&blah, sizeof (blah), 1, fp) != 1) goto error1;

    ...

    if (fwrite (&foo, sizeof (foo), 1, fp) != 1) goto error2;

    ...

    /* ok: everything went fine */
    fclose(fp);
    return 0;

error1:
    /* Error case 1 */
    fclose(fp);
    return -1;

error2:
    /* Error case 2 */
    fclose(fp);
    return -2;
}

You get the idea. Restructure as you wish (single/multiple returns, single cleanup, custom error messages, etc.). In my experience this is the most common C error handling pattern out there. The crucial point is: NEVER, EVER ignore stdlib return codes, and no reason for doing so (e.g. readability) is good enough.

fbonnet
+2  A: 

You could write a wrapper function

void new_fwrite(const void *a, size_t b, size_t c, FILE *d) {
    if (fwrite (a, b, c, d) != c)
       throw exception;
}

and then replace all calls to fwrite with new_fwrite

Patrick McDonald
I like this idea, but I want have to keep it c-only, so no exceptions.
David Dean
+2  A: 

Ignoring errors is a bad idea. It's much better to do something nasty like crash the program so that at least you know something went wrong, rather than silently proceeding. Even better is nice error checking and recovery.

If you're using C++, you can create an RAII wrapper for the FILE* so that it'll always get closed. Look at std::auto_ptr for ideas. You can then return a useful error code from the function whenever you wish, or throw an exception, and not have to worry about forgotten cleanup items.

Mr Fooz
Right, and (as you imply) use this in combination with the series of tests each of which returns an error if there is a problem.
Sol
@Sol: more explicitness added
Mr Fooz
A: 

Maybe something like this? You catch the errors without making the code too unreadable, and you can do cleanup after the end of the faux loop.

#define WRITE_ERROR 100
#define WRITE_OK 0

int do_fwrite(const void *ptr, size_t bytes, FILE *fp) {
  if ( fwrite(ptr, bytes, 1, fp) != 1 ) return WRITE_ERROR;
  return WRITE_OK;
}

int my_func() {
   int errcode = 0;

   ...
   do {
     if ( (errcode = do_fwrite(&blah, sizeof(blah), fp)) ) break;
     ....
     if ( (errcode = do_fwrite(&foo, sizeof(foo), fp)) ) break;
     ....
     etc
   } while( false );

   fclose(fp);
   return errcode;
}
Eric Petroelje
+6  A: 

I'd do something along these lines:

FILE * file = fopen("foo", "wb");
if(!file) return FAILURE;

// assume failure by default
_Bool success = 0;

do
{
    if(!fwrite(&bar, sizeof(bar), 1, file))
        break;

    // [...]

    if(!fwrite(&baz, sizeof(baz), 1, file))
        break;

    // [...]

    success = 1;
} while(0);

fclose(file);

return success ? SUCCESS : FAILURE;

With a little C99 macro magic

#define with(SUBJECT, FINALIZE, ...) do { \
    if(SUBJECT) do { __VA_ARGS__ } while(0); if(SUBJECT) FINALIZE; \
} while(0)

and using ferror() instead of our own error flag as suggested by Jonathan Leffler, this can be written as

FILE * file = fopen("foo", "wb");
with(file, fclose(file),
{
    if(!fwrite(&bar, sizeof(bar), 1, file))
        break;

    // [...]

    if(!fwrite(&baz, sizeof(baz), 1, file))
        break;

    // [...]
});

return file && !ferror(file) ? SUCCESS : FAILURE;

If there are other error conditions aside from io errors, you'll still have to track them with one or more error variables, though.

Also, your check against sizeof(blah) is wrong: fwrite() returns the count of objects written!

Christoph
thanks for pointing out the sizeof bug!
David Dean
Note that ferror() tells you whether there was an error on the stream.
Jonathan Leffler
A: 

Ok, given that I'm looking for a C solution (no exceptions), how about:

void safe_fwrite(const void *data, size_t size, size_t count, FILE *fp) {
   if (fwrite(data, size, count, fp) != count) {
      printf("[ERROR] fwrite failed!\n");
      fclose(fp);
      exit(4);
   }
}

And then in my code I have:

safe_fwrite (&blah, sizeof (blah), 1, fp);
// ... more code ...
safe_fwrite (&foo, sizeof (foo), 1, fp);
// ... more code ...
David Dean
Exiting on error is a bit hardcore, but if it fits your app's needs, why not. In this case no need to worry about cleanup, though, so this doesn't really answer your original question, and wrapping only shifts the burden. See my answer for a widely used 'best' practice.
fbonnet
whoops, I forgot the fclose - thanks for pointing that out
David Dean
A: 
Timo Geusch
the only problem is that the .. do other stuff .. is still executed, when the program should really just terminate (as the other stuff is basically just processing in order to do the next fwrite)
David Dean
You're correct - I've updated the code and put a little caveat in the answer.
Timo Geusch