Very often you have a function which, for some arguments, can't produce a valid result or can't perform its task. Setting aside exceptions, which are not so commonly used in the C/C++ world, there are basically two schools of reporting invalid results.

The first approach mixes valid returns with a value that does not belong to the function's codomain (very often -1) to indicate an error:

int foo(int arg) {
    if (arg > 0)            // everything fine
        return arg * 2;     // some valid value
    return -1;              // on failure
}

The second approach is to return a function status and pass the result back through a reference:

bool foo(int arg, int & result) {
    if (arg > 0) {          // everything fine
        result = arg * 2;   // some valid value
        return true;
    }
    return false;           // on failure
}

Which way do you prefer, and why? Does the additional parameter in the second method bring notable performance overhead?

A: 

I think there is no right answer to this. It depends on your needs, the overall application design, etc. I personally use the first approach, though.

PeterK
A: 

I think a good compiler would generate almost the same code, with the same speed. It's a personal preference. I would go with the first.

pcent
+1  A: 

There shouldn't be much, if any, performance difference between the two. The choice depends on the particular use. You cannot use the first if there is no appropriate invalid value.

If using C++, there are many more possibilities than these two, including exceptions and using something like boost::optional as a return value.

KeithB
A: 

You missed a method: Returning a failure indication and requiring an additional call to get the details of the error.

There's a lot to be said for this.

Example:

int count;
if (!TryParse("12x3", &count))
  DisplayError(GetLastError());

edit

This answer has generated quite a bit of controversy and downvoting. To be frank, I am entirely unconvinced by the dissenting arguments. Separating whether a call succeeded from why it failed has proven to be a really good idea. Combining the two forces you into the following pattern:

HKEY key;
long errcode = RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key);
if (errcode != ERROR_SUCCESS)
  return DisplayError(errcode);

Contrast this with:

HKEY key;
if (!RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key))
  return DisplayError(GetLastError());

(The GetLastError version is consistent with how the Windows API generally works, but the version that returns the code directly is how it actually works, due to the registry API not following that standard.)

In any case, I would suggest that the error-returning pattern makes it all too easy to forget about why the function failed, leading to code such as:

HKEY key;
if (RegOpenKey(HKEY_CLASSES_ROOT, NULL, &key) != ERROR_SUCCESS)
  return DisplayGenericError();

edit

Looking at R.'s request, I've found a scenario where it can actually be satisfied.

For a general-purpose C-style API, such as the Windows SDK functions I've used in my examples, there is no non-global context for error codes to rest in, so we have no good alternative to using a global TLV (thread-local variable) that can be checked after failure.

However, if we expand the topic to include methods on a class, the situation is different. It's perfectly reasonable, given a variable reg that is an instance of the RegistryKey class, for a call to reg.Open to return false, requiring us to then call reg.ErrorCode to retrieve the details.
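
A minimal sketch of what such a wrapper might look like (the RegistryKey class, Open, and ErrorCode are the hypothetical names from this discussion, not an existing API; RegOpenKeyExW and RegCloseKey are the real Windows calls):

#include <windows.h>

// Hypothetical wrapper: the instance itself is the context that holds the error code.
class RegistryKey {
public:
    bool Open(HKEY root, const wchar_t* subkey) {
        m_error = RegOpenKeyExW(root, subkey, 0, KEY_READ, &m_key);
        return m_error == ERROR_SUCCESS;        // "whether" it worked
    }
    long ErrorCode() const { return m_error; }  // "why" it failed
    ~RegistryKey() { if (m_key) RegCloseKey(m_key); }
private:
    HKEY m_key = nullptr;
    long m_error = ERROR_SUCCESS;
};

// Usage, in the style of the earlier examples:
//   RegistryKey reg;
//   if (!reg.Open(HKEY_CLASSES_ROOT, L".txt"))
//       DisplayError(reg.ErrorCode());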

I believe this satisfies R.'s request that the error code be part of a context, since the instance provides the context. If, instead of a RegistryKey instance, we called a static Open method on RegistryKeyHelper, then the retrieval of the error code on failure would likewise have to be static, which means it would have to be a TLV, albeit not an entirely global one. The class, as opposed to an instance, would be the context.

In both of these cases, object orientation provides a natural context for storing error codes. Having said that, if there is no natural context, I would still insist on a global, as opposed to trying to force the caller to pass in an output parameter or some other artificial context, or returning the error code directly.

Steven Sudit
This approach is extremely bad unless you're already passing a context structure in which the error can be kept! Otherwise, it requires global variables, and is not thread-safe unless you resort to nasty (either slow or non-portable, or both) measures to keep per-thread error status.
R..
POSIX traditionally goes this way for many of its functions, and has just one magic pseudo-variable `errno` in which error codes are stored and that is guaranteed to be thread safe. So if you don't need more than that, this is definitively an option.
Jens Gustedt
For what it's worth, I'm not the one who voted this down. I'd actually consider this a good answer if it were revised to reflect that global state is **bad** and recommended keeping the state in a context structure.
R..
@R.: GetLastError is the name of an actual Windows API method that returns the *thread-local* error code. The usage pattern is to check the error code on failure, immediately after the call. As such, being global is an acceptable compromise.
Steven Sudit
@Steven Sudit: interesting, it would be good to have a reference. Maybe it states that `errno` might resolve to a macro or function, but it would surprise me if it made a statement about thread safety.
Jens Gustedt
Major problem with this approach: Who clears the error variable and when? Anyway, you're already using the return value to indicate success, this is clearly a variant of the second method presented in the question.
Ben Voigt
@Ben: The variable is changed after each call to the API. I do agree that it's a variant, but it's an important (yet apparently controversial) one.
Steven Sudit
@Jens: Here's your reference: http://stackoverflow.com/questions/1694164/is-errno-thread-safe
Steven Sudit
Ben Voigt
@Ben: That's functionally identical to the second code block, but is even more cryptic because it combines assignment and testing in the predicate. I know this sort of coding is often seen as acceptable in C, but I'm not exactly a fan. I did something similar but cleaner in C++, using a helper class called `ComResult`. Its constructor takes an `HRESULT` and stores it in a TLV. The instance can be implicitly cast to `bool` for error detection. The default constructor retrieves the value from the TLV. It also had `DidFail`, `DidSuccess` and `ThrowOnFail` methods, among others.
Steven Sudit
-1 for continuing to advocate for global variables/global state. The fact that Windows (and POSIX) both have ways of maintaining this internally in a thread-safe way doesn't change the fact that it's a very bad practice for designing your own APIs, especially since it's impossible to make such a thread-local state in a way that's both fast and portable.
R..
@R.: The choice is up to you: you can take away imaginary points from me or you can offer a convincing argument. So far, you've only managed the former. For example, your most recent argument has the minor problem of being factually incorrect. In C#, all you have to do is slap a ThreadStatic attribute onto a static field and you're done. It's as portable as .NET and performance is quite good, although that hardly matters since you only check after a failure has been indicated. In C/C++, you'd use pthreads `pthread_getspecific` to get the same effect.
Steven Sudit
I should clarify that, although I suggested pthreads for maximum portability, the reality is that I probably wouldn't bother with it. Instead, I'd use a #define that maps to `__declspec(thread)` under MSCPP and `__thread` under GCC. Again, speed is a non-issue, although expected performance is actually pretty good.
Steven Sudit
Steven said "that hardly matters since you only check after a failure has been indicated". Actually, the TLS variable has to be assigned on both failure and success, so you pay the TLS lookup penalty on the fast path as well.
Ben Voigt
@Ben: That's a good point. If we knew for a fact that we were stuck with a slow implementation (pthreads!), we could optimize for this by defining the interface so that the errno equivalent is only set on failure, so checking it on success yields an undefined value (presumably the most recent error). In practice, I think VC/GCC provides sufficient portability with good speed, so we don't have to do this sort of thing.
Steven Sudit
@R.: I would appreciate your feedback to the addendum.
Steven Sudit
Having a class maintain error state may be a good approach if the class is by nature not going to be thread-safe. It does effectively foreclose the possibility of the class ever being thread-safe, but that may not be a killer if one can have multiple non-thread-safe "command" objects which act upon a common thread-safe database.
supercat
Addendum: In C, maintaining an error state may be useful if one provides that an attempt to perform a function on some "object" (file, device, or whatever) will early-exit if an error has occurred that has not yet been polled. This may avoid the need to error-check every operation in the mainline code; one can attempt a bunch of operations and then use one error-code check to see if they all worked.
supercat
@supercat: I don't think anyone would expect that hypothetical RegistryKey wrapper class to be thread-safe, so that's not really a concern. If it did have to be thread-safe, then a TLV would be needed, as discussed at all too much length above.
Steven Sudit
@supercat: I've been giving your early-exit scenario a lot of thought, and I think I don't like it. If we want to avoid explicit error checks and force the caller to stop, we can throw an exception. The early-exit scenario would require the object to enter a failed state that might only be reset by a retrieval of the error code, which means that if we only care that it did fail but not why, it will never reset. The more I look at this, the more I'd rather just return a bool.
Steven Sudit
@Steven: It's better but I guess I'm pretty much an absolutist about global state. Also, this question is about C and C++ too, not C#. If you must make a global per-thread state, `pthread_getspecific` is the way to go (as opposed to non-portable hacks) but performance can vary a lot. Sacrificing lots of performance for a "pretty" error reporting system (which to many, myself included, is not at all "pretty" but extremely ugly) is not a good choice in my book. When there's no state, just pass an extra `int *errcode` argument and be done with it.
R..
@supercat: If the class needs to be "thread safe" in the sense of allowing simultaneous write access from multiple threads, you can still use the approach of thread-local data. The class just needs to keep its own per-thread state **inside the object**.
R..
@R.: Thanks for your input. I'm clearly not an absolutist, nor do I view compiler-specific optimizations as hacks. Given this, the issue for me does not involve sacrificing lots of performance, so the benefits of separating the "whether" from the "why" easily outweigh the negatives, based on robustness, not prettiness. In short, we still disagree, but now we can see where and why.
Steven Sudit
@Steven Sudit: The early-exit case is helpful (at least I use it) in C, where exceptions aren't available.
supercat
@supercat: Good point, although my C background is in the Windows environment, where even C can use a language-neutral OS feature called structured exceptions. IIRC, C++ under Windows uses structured exceptions to implement C++ exceptions. http://msdn.microsoft.com/en-us/library/ms680657(VS.85).aspx
Steven Sudit
@Steven Sudit: Much of my C background is with embedded systems. Although I have sometimes used stack-pointer hacks for things like cooperative multi-tasking (can be VERY handy) code space is often at a premium. Coding a sequence of I/O operations and then checking for an error is more compact in source and compiled form than checking for an error after each operation.
supercat
@supercat: Sounds like another case of the right answer depending on the specifics of the question, such as the platform. :-)
Steven Sudit
+9  A: 

Don't ignore exceptions, for exceptional and unexpected errors.

However, just answering your points, the question is ultimately subjective. The key issue is to consider what will be easier for your consumers to work with, whilst quietly nudging them to remember to check error conditions. In my opinion, this is nearly always the "Return a status code, and put the value in a separate reference", but this is entirely one man's personal view. My arguments for doing this...

  1. If you choose to return a mixed value, then you've overloaded the concept of return to mean "Either a useful value or an error code". Overloading a single semantic concept can lead to confusion as to the right thing to do with it.
  2. You often cannot easily find values in the function's codomain to co-opt as error codes, and so need to mix and match the two styles of error reporting within a single API.
  3. There's almost no chance that, if they forget to check the error status, they'll use an error code as if it were actually a useful result. One can return an error code, and stick some null-like concept in the return reference that will explode easily when used (a sketch follows this list). If one uses the error/value mixed return model, it's very easy to pass it into another function in which the error part of the codomain is valid input (but meaningless in the context).
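
A minimal sketch of that third point, with hypothetical names (find_user and User are purely illustrative):

#include <string>

struct User { std::string name; };

// On failure, park a null-like value in the out-reference so any unchecked use
// fails fast instead of flowing onward as a plausible-looking result.
bool find_user(int id, const User*& out) {
    static const User admin{"admin"};
    out = nullptr;                 // null-like sentinel
    if (id != 0)
        return false;              // failure reported separately from the value
    out = &admin;
    return true;
}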

Arguments for returning the mixed error code/value model might be simplicity - no extra variables floating around, for one. But to me, the dangers are worse than the limited gains - one can easily forget to check the error codes. This is one argument for exceptions - you literally can't forget to handle them (your program will flame out if you don't).

Adam Wright
+1 - so long as you only use exceptions for truly exceptional cases. Their unwind and clean-up is an expensive operation (relatively). The clue is in the name.
Ragster
If there is a well-defined notion of an invalid return value, then overloading this way isn't a big deal. For example `CreateFile` returns a NULL handle on failure, where NULL is always understood to be invalid. When there is no such notion, such as `atoi` returning 0 on failure, it's not so great.
Steven Sudit
In addition, some API's return a success/failure code with each call. The problem is that this combines *whether* a function worked with the details of how it worked. For example, ODBC returns a code where positive values are different types of success and negatives are different types of errors. This is not good, either.
Steven Sudit
@Steven: I guess there's another problem with this design -- using the wrong magic value. `CreateFile` returns `INVALID_HANDLE_VALUE (~0)` not `NULL (0)` on failure. Usually `NULL` is used for initialized-but-unassigned handle variables, while `INVALID_HANDLE_VALUE` is used for failure.
Ben Voigt
@Ben: Well, the function is defined to return a handle, not a pointer, so it would have to be an invalid handle as opposed to an invalid pointer. I was certainly wrong in detail when I said it returned NULL, but I think the idea was correct.
Steven Sudit
@Steven: I wasn't trying to argue your point, I was making a new one and using you as an illustration. With magic values there's not only the risk that the caller forgets to test for them, but the risk of coding the test incorrectly. Another example would be comparing COM `HRESULT` values against `S_OK` instead of using the `SUCCEEDED` macro.
Ben Voigt
@Ben: Sorry if I misunderstood. Yes, you're right that all of these magic values are risky because we're effectively taking a primitive type and *pretending* it has rich semantics. Of course, holding an OS handle in what is effectively (if not actually) a void* is a bad way to go, regardless. The right answer is to wrap these returns into classes that implement value-appropriate destructors and validity checkers. So, for example, the `ComResult` class mentioned elsewhere does use the SUCCEEDED macro inside its `DidFail` function.
Steven Sudit
@Adam Wright: If there's a "null like concept" that you can stick in the return reference in case of error, then you could just as easily have used that value as the error return value. If there *isn't* a suitable "null like concept" that will explode when used, then you have exactly the same problem - the caller can ignore the error and try to use the value you stuck in the reference.
caf
+1  A: 

C traditionally used the first approach of coding magic values in valid results - which is why you get fun stuff like strcmp() returning false (=0) on a match.

Newer safe versions of a lot of the standard library functions use the second approach - explicitly returning a status.

And no, exceptions aren't an alternative here. Exceptions are for exceptional circumstances which the code might not be able to deal with - you don't raise an exception for a string not matching in strcmp().
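
For instance, where an implementation provides the C11 Annex K "_s" functions (MSVC does), the status is returned explicitly and the result travels through a parameter; a sketch, not portable to every C library:

#include <stdio.h>

// Assumes an implementation that provides fopen_s (C11 Annex K / MSVC).
void open_log(void) {
    FILE *fp = NULL;
    errno_t err = fopen_s(&fp, "log.txt", "r");  // status comes back directly
    if (err != 0) {
        // handle the reported error code
        return;
    }
    fclose(fp);
}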

Martin Beckett
`strcmp` does not return "false" on match. Like any comparison function, it returns a positive, negative, or zero result reflecting whether the first item is greater than, less than, or equal to the second item in the given ordering. There's nothing magic about this. It's plain common sense. Would you expect `(a-b)` (the comparison function for numbers `a` and `b`) to return "true" when they match and "false" otherwise?!?
R..
None of the possible return values from `strcmp` indicate errors, so that's a poor example for both the magic values and use of exceptions. Perhaps `fgetc` (which returns the magic `EOF` value) would be a better example.
Ben Voigt
@R: strcmp() returning <,0,> is a convenient hack if you are implementing a sort function. The downside is all the bugs introduced by people writing "if ( strcmp() )"; yes, it's the programmer's fault for not memorizing the stdlib - but the name doesn't help.
Martin Beckett
That's the reason why it is considered good style to be explicit when the tested value is not a boolean: `if (strcmp()==0) (pointer!=NULL) (number>0)`. Sadly, the programmers who don't read the function definitions in the standard library are the same who don't read recommendations about good and bad style... ;)
Secure
@Martin: They are different things. He is talking about raising an error, not functions where the question is an inherent part of the operation of the function. You're right, there are some functions where a failure is not an exceptional circumstance- but then, they don't return error codes, either. We're talking about errors, which are not failures.
DeadMG
@Secure: This is why C# and Java don't automatically convert int to bool. :-)
Steven Sudit
For what it's worth, it's less a hack than an example of how low-level constructs made their way into C. At the level of assembly language, you get instructions such as `CMP EAX,0x1234`, which perform a subtraction and set the LT, EQ and GT flags appropriately, so that you can follow up with a branch instruction. There's even a three-way IF construct in FORTRAN
Steven Sudit
A: 

It's not always possible, but regardless of which error reporting method you use, the best practice is to, whenever possible, design a function so that it does not have failure cases, and when that's not possible, minimize the possible error conditions. Some examples:

  • Instead of passing a filename deep down through many function calls, you could design your program so that the caller opens the file and passes the FILE * or file descriptor. This eliminates having to check for "failed to open file" and report it to the caller at each step.

  • If there's an inexpensive way to check (or find an upper bound) for the amount of memory a function will need to allocate for the data structures it will build and return, provide a function to return that amount and have the caller allocate the memory (a sketch follows this list). In some cases this may allow the caller to simply use the stack, greatly reducing memory fragmentation and avoiding locks in malloc.

  • When a function is performing a task for which your implementation may require large working space, ask if there's an alternate (possibly slower) algorithm with O(1) space requirements. If performance is non-critical, simply use the O(1) space algorithm. Otherwise, implement a fallback case to use it if allocation fails.
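
A familiar instance of the second point is the two-call snprintf idiom: the first call only reports the size needed, so the caller owns the allocation and the formatting step itself has no out-of-space case left to report (make_entry is just an illustrative wrapper):

#include <stdio.h>
#include <stdlib.h>

/* Ask for the required size first; the caller-visible allocation is then the
   only failure case left. */
char *make_entry(const char *key, int value) {
    int needed = snprintf(NULL, 0, "%s=%d", key, value);         /* size only, nothing written */
    if (needed < 0)
        return NULL;
    char *buf = (char *)malloc((size_t)needed + 1);              /* the one remaining error case */
    if (buf)
        snprintf(buf, (size_t)needed + 1, "%s=%d", key, value);  /* size known: cannot truncate */
    return buf;                                                  /* caller frees */
}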

These are just a few ideas, but applying the same sort of principle all over can really reduce the number of error conditions you have to deal with and propagate up through multiple call levels.

R..
I don't find this realistic or helpful.
Steven Sudit
Then rethink. For point 1, it's better design anyway because it's more flexible. What if you already have the file open and no longer have its name? (Or maybe its name no longer exists, or it's a pipe/socket/etc.) Point 2 is very open to debate. It imposes some tighter implementation constraints but could give really big boosts in performance at the same time. Point 3 is plain common sense...
R..
I assure you that mine was a not a knee-jerk reaction. Point 1 would forbid utility, such as an XML DOM that lets you load from a filename. Point 2 is a niche answer at best, with little applicability in general. Point 3 isn't so much false as irrelevant. Ultimately, that's my biggest complaint here: the question is how to report the errors that *will* happen, not how to shrink that set. While I'm all for reducing errors, it doesn't address the question at hand, nor is it always worth the price.
Steven Sudit
I addressed right from the beginning that you can't eliminate all errors (and thus need for error reporting). When the set of possible errors is overwhelming, most coders using your functions will just ignore errors or have one poorly-thought catch-all error handler. If the set of possible errors is very limited, it's much easier for the caller to handle them well.
R..
Some added thoughts.. For point 1 (filenames vs open files) it may be worthwhile to support both, and also preloaded in-memory buffers. As an example, I recall that it used to be painful to use embedded fonts because FreeType required a file to open (thus tempfile hell). Modern versions support use of fonts in memory. A good example for point 2 is a regex compiler that builds a finite automaton. You can bound the size of the FA structures by a constant multiple of the regex string length and just preallocate that, avoiding the possibility of allocation errors mid-parse.
R..
It doesn't matter how large the set is, just whether it's empty. If the caller doesn't know what to do about the particular error, its responsibility is to log and fail. Exceptions make this automatic.
Steven Sudit
For point 1, if we agree that we do want to support filenames, then we agree that we can have file-not-found errors and the like. I do agree that, at least in some cases, it makes perfect sense to allow passing in either an in-memory buffer or some form of streaming input. For point 2, this is a non-starter because the initial allocation can still fail, and you may need intermediate allocations.
Steven Sudit
I'm about sick of arguing. Since this question was C/C++, I'm interested in covering the C case where you **do not have exceptions**. Even if you do have exceptions, callers tend to handle them poorly or not at all, so reducing the number of error cases is worthwhile. BTW for my specific example of point 2 (regex compiler) you can easily bound the total compiled size and working space needed. The only failure case then is if the entire initial allocation fails, in which case the caller is responsible before even making the call. Believe it or not cases like this are the norm not the exception.
R..
In C, where exceptions aren't available, I sometimes have functions which return immediately if an error occurred on the last "similar" function call but has not been polled. If an attempt at communication times out, repeated attempts to communicate with the same port without acknowledging the error will instantly abort. Thus one can safely have several communications routines without error-checks between them, and check afterward whether everything worked. If something failed, one might not know what failed, but one might not care.
supercat
+4  A: 

Quite a few books, etc., strongly advise the second, so you're not mixing roles and forcing the return value to carry two entirely unrelated pieces of information.

While I sympathize with that notion, I find that the first typically works out better in practice. For one obvious point, in the first case you can chain the assignment to an arbitrary number of recipients, but in the second if you need/want to assign the result to more than one recipient, you have to do the call, then separately do a second assignment. I.e.,

 account1.rate = account2.rate = current_rate();

vs.:

set_current_rate(account1.rate);
account2.rate = account1.rate;

or:

set_current_rate(account1.rate);
set_current_rate(account2.rate);

The proof of the pudding is in the eating thereof. Microsoft's COM functions (for one example) chose the latter form exclusively. IMO, it is due largely to this decision alone that essentially all code that uses the native COM API directly is ugly and nearly unreadable. The concepts involved aren't particularly difficult, but the style of the interface turns what should be simple code into an almost unreadable mess in virtually every case.
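
For illustration, a raw COM call typically looks something like the sketch below (CreateFileMoniker is a real COM function; the surrounding helper is illustrative):

#include <windows.h>
#include <objbase.h>

// The HRESULT carries the status, the real result comes back through an out
// parameter, and this check-and-bail step repeats after every single call.
HRESULT get_moniker(IMoniker **out) {
    *out = NULL;
    HRESULT hr = CreateFileMoniker(L"C:\\data\\report.dat", out);
    if (FAILED(hr))
        return hr;   // the call can't be chained or nested like current_rate() above
    return S_OK;
}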

Exception handling is usually a better way to handle things than either one though. It has three specific effects, all of which are very good. First, it keeps the mainstream logic from being polluted with error handling, so the real intent of the code is much more clear. Second, it decouples error handling from error detection. Code that detects a problem is often in a poor position to handle that error very well. Third, unlike either form of returning an error, it is essentially impossible to simply ignore an exception being thrown. With return codes, there's a nearly constant temptation (to which programmers succumb all too often) to simply assume success, and make no attempt at even catching a problem -- especially since the programmer doesn't really know how to handle the error at that part of the code anyway, and is well aware that even if he catches it and returns an error code from his function, chances are good that it will be ignored anyway.

Jerry Coffin
It's not only COM, but the entire Win32 API that uses separate failure and result returns, with a couple of exceptions to the rule (but no C++ exceptions): `HANDLE`-returning functions have `INVALID_HANDLE_VALUE`, and `IUnknown::AddRef` and `IUnknown::Release` don't return an `HRESULT`. There's a good reason too: those APIs are called by consumers written in a variety of languages.
Ben Voigt
+1 for recognizing the need to decouple detection from handling.
Steven Sudit
You're right in that COM returns an HRESULT with each call, while the natural return value is an output parameter that's flagged in IDL, which results in unreadable C code. However, when you use a C++ COM wrapper, such as ATL, all this ugliness gets hidden away. Instead, it turns into a call that has a regular return value and throws exceptions containing an HRESULT.
Steven Sudit
@Ben: The rest of Win32 is a lot less dependable about it than you imply. Enough functions return handles for that to be a *huge* exception by itself. There are also quite a few that return NULL to indicate failure, including `HeapAlloc`, `CreateWindow`, `CreateDC`, and `CreateIC`. `RegisterClass` and `RegisterClassEx` return an ATOM, or 0 to indicate failure. `RegisterWindowMessage` returns a message identifier, or 0 for failure. The list goes on and on...
Jerry Coffin
@Steven: yes, it's entirely possible (and almost necessary) to hide how bad the design is -- but that doesn't change the fact that it's a bad design...
Jerry Coffin
@Jerry: Agreed. If anything, the need to hide away the ugliness just demonstrates how bad it is.
Steven Sudit
@Steven: quite true. In some ways, the ugliness is even a good thing -- it's *so* bad that there's no real question that hiding it is necessary. If the design was less awful, people might put up with it instead...
Jerry Coffin
@Jerry: I may be dating myself, but people *did* put up with it, at least for a while. We didn't like it, though. And, as you said, it was so obviously bad that MS provided ATL (after first fumbling with MFC), so all's well that ends well. I don't think that ugliness at the low level is really avoidable; what matters is whether it's technically correct and can be cleanly wrapped to preserve that correctness. Given the thread models COM supports, a thread-local errno/GetLastError would not have been viable at the IDL level.
Steven Sudit
@Steven: No, in COM the `GetLastError` equivalent isn't a thread-local global variable, it's per-object (possibly thread-local as well under some threading schemes, I don't remember). See the `ISupportErrorInfo` interface. But even that is less ugly than the data structures used to implement C++ exceptions. Which is a VERY good thing, since it has to be implemented correctly in different languages or the whole model collapses.
Ben Voigt
@Ben: I think you misunderstood me. POSIX uses errno and Windows uses GetLastError, both of which are thread-local error codes, but COM cannot do this because of its threading model, so it returns the HRESULT with each call. You're right that ISupportErrorInfo is similar to GetLastError in that it's an additional call we can make after an error to find out more, but it's not really the same thing because an HRESULT is a lot more than just its highest bit.
Steven Sudit
+5  A: 

The idea of special return values completely falls apart when you start using templates. Consider:

template <typename T>
T f( const T & t ) {
   if ( SomeFunc( t ) ) {
      return t;
   }
   else {         // error path
     return ???;  // what can we return?
   }
}

There is no obvious special value we can return in this case, so throwing an exception is really the only way. Returning boolean types which must be checked and passing the really interesting values back by reference leads to a horrendous coding style.
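
A sketch of the same template with the error path expressed as an exception (SomeFunc is the same assumed predicate as in the snippet above):

#include <stdexcept>

template <typename T>
T f( const T & t ) {
   if ( SomeFunc( t ) ) {
      return t;
   }
   // no special value needed: the error path leaves the codomain entirely
   throw std::invalid_argument( "f: no valid result for this input" );
}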

anon
Interesting point. It's probably no coincidence that the error-handling I've seen in templates is usually through exceptions.
Steven Sudit
http://stackoverflow.com/questions/3157098/whats-the-right-approach-to-return-error-codes-in-c/3157182#3157182
JUST MY correct OPINION
I marked this down because it is misleading. Exceptions are not the only way. boost::optional can separate the value from the error. You can think of optional as a container with zero or one elements which you may think you should be able to iterate over. Actually, I also marked the above comment up for his Maybe monad, which is basically just boost::optional plus some extra candy. I have an implementation proof of concept of this idea here: http://xtargets.heroku.com/2010/06/03/using-boostoptional-as-a-range/
bradgonesurfing
@bradgonesurfing Well, you could also return a pair, for example. But I think if there actually is an error, as opposed to a status, then an exception is best. And its normally not the thing to do on SO to vote down because of a simple technical disagreement, when it's not obvious that either side is correct.
anon
Sorry for voting down. I'm new here and just got the 100 points available for voting down. I'll be sparing with the new power ;) Actually I've seen somewhere a combination of the boost::optional and exception technique in a single package. It was very cool but I can't remember it right now. The basic class was just like boost::optional. However, if the return *value* is destructed before it is checked, then an exception is thrown. If it is checked, then no exception is thrown even on error. It gives you the choice in a single API.
bradgonesurfing
@bradgonesurfing, I did something similar back in 2002. I've been meaning to blog about the experience, but I've been negligent in getting my blog set up. I'd be curious to see the package you're referring to.
Mark Ransom
I can't find it, though I've looked. However good the idea, it has one fatal flaw. Throwing an exception in a destructor is bad form. If that destructor is called in the scope of another exception, then C++ will automatically terminate. It's a perfect case of how C++ is totally broken with respect to RAII. It's good in theory until you find yourself writing more and more complex destructors and being unable to figure out if they throw or not.
bradgonesurfing
@bradgonesurfing, I got around that problem by using a Microsoft extension that you could call to determine if you were already inside exception processing.
Mark Ransom
@Mark: there's no need for a Microsoft extension to do that; `std::uncaught_exception()` (from `<exception>`) will tell you whether an exception is currently being thrown.
Mike Seymour
@Mike Probably only an option to check, not to use. Even Herb Sutter says no. http://gotw.ca/gotw/047.htm
DumbCoder
A: 

If you have references and the bool type, you must be using C++. In which case, throw an exception. That's what they're for. For a general desktop environment, there's no reason to use error codes. I have seen arguments against exceptions in some environments, like dodgy language/process interop or tight embedded environment. Assuming neither of those, always, always throw an exception.

DeadMG
A: 

Well, the first one will compile in both C and C++, so it's fine for portable code. With the second one, although it's more "human readable", you never know truthfully which value the program is returning; specifying it as in the first case gives you more control, that's what I think.

aitorkun
+1  A: 

For C++ I favour a templated solution that prevents the fugliness of out parameters and the fugliness of "magic numbers" in combined answers/return codes. I've expounded upon this while answering another question. Take a look.

For C, I find the fugly out parameters less offensive than fugly "magic numbers".

JUST MY correct OPINION
+6  A: 

boost::optional is a brilliant technique. An example will assist.

Say you have a function that returns a double and you want to signify an error when it cannot be calculated.

double divide(double a, double b){
    return a / b;
}

What to do in the case where b is 0?

boost::optional<double> divide(double a, double b){
    if ( b != 0){
        return a / b;
    }else{
        return boost::none;
    }
}

Use it like below:

boost::optional<double> v = divide(a, b);
if(v){
    // Note the dereference operator
    cout << *v << endl;
}else{
    cout << "divide by zero" << endl;
}
bradgonesurfing
+1, that was my gut feeling when I saw the question, I'm glad I am not the only one. This corresponds exactly to `Maybe` construct in Haskell and is certainly very natural.
Matthieu M.
Actually it is not a proper monad, according to my understanding. http://xtargets.heroku.com/2010/06/03/using-boostoptional-as-a-range/ adds iteration support to boost::optional, which might bring it closer.
bradgonesurfing
A: 

I prefer using the return code to indicate the type of error that occurred. This helps the caller of the API to take appropriate error handling steps.

Consider the GLib APIs, which most often return an error code and error message (through a GError out parameter) along with the boolean return value.

Thus when a call returns FALSE, you can check the details in the GError variable.

A failure in the second approach you describe will not help the caller take the correct actions. It's a different case when your documentation is very clear. But in other cases it will be a headache to figure out how to use the API call.
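
A short sketch of that GLib pattern (g_file_get_contents and GError are real GLib facilities; the snippet just shows the usual idiom):

#include <glib.h>

// The boolean return says *whether* it worked; the GError says *why* it failed.
void read_settings(void) {
    gchar  *contents = NULL;
    gsize   length   = 0;
    GError *error    = NULL;

    if (!g_file_get_contents("settings.ini", &contents, &length, &error)) {
        g_printerr("read failed: %s\n", error->message);
        g_error_free(error);
        return;
    }
    /* use contents ... */
    g_free(contents);
}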

Praveen S
A: 

Apart from doing it the correct way, which of these two stupid ways do you prefer?

I prefer to use exceptions when I'm using C++ and need to throw an error, and in general, when I don't want to force all calling functions to detect and handle the error. I prefer to use stupid special values when there is only one possible error condition, and that condition means there is no way the caller can proceed, and every conceivable caller will be able to handle it.. which is rare. I prefer to use stupid out parameters when modifying old code and for some reason I can change the number of parameters but not change the return type or identify a special value or throw an exception, which so far has been never.

Does the additional parameter in the second method bring notable performance overhead?

Yes! Additional parameters cause your 'puter to slow down by at least 0 nanoseconds. Best to use the "no-overhead" keyword on that parameter. It's a GCC extension __attribute__((no-overhead)), so YMMV.

John
Thanks for your stupid answer, which I stupidly down voted.
doc
Oh, don't worry. It's not stupid to down vote a sensible answer on this site... it's expected!
John
A: 

For a "try" function, where some "normal" type of failure is reasonably expected, how about accepting either a default return value or a pointer to a function which accepts certain parameters related to the failure and returns such a value of the expected type?

supercat
+1  A: 

In C, one of the more common techniques I have seen is that a function returns zero on success, non-zero (typically an error code) on error. If the function needs to pass data back to the caller, it does so through a pointer passed as a function argument. This can also make functions that return multiple pieces of data back to the user more straightforward to use (vs. returning some data through a return value and some through a pointer).
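
A minimal sketch of that convention (parse_config and its fields are hypothetical names):

#include <errno.h>
#include <stddef.h>

struct config { int retries; };

/* Zero means success, any other value is the error code, and the actual data
   travels back to the caller through the pointer argument. */
int parse_config(const char *path, struct config *out) {
    if (path == NULL || out == NULL)
        return EINVAL;       /* report the error, leave *out untouched */
    out->retries = 3;        /* pretend we parsed something useful */
    return 0;                /* success */
}

The caller then just writes something like `if (parse_config(path, &cfg) != 0)` and branches on the returned code.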

Another C technique I see is to return 0 on success and on error, -1 is returned and errno is set to indicate the error.

The techniques you presented each have pros and cons, so deciding which one is "best" will always be (at least partially) subjective. However, I can say this without reservations: the technique that is best is the technique that is consistent throughout your entire program. Using different styles of error reporting code in different parts of a program can quickly become a maintenance and debugging nightmare.

bta