views:

149

answers:

7

One of the main things I wanted to achieve in my experimental programming language was this: when errors occur (syntax, name, type, etc.), keep the program running, no matter how serious or devastating they are. I know this is probably a very bad idea, but I wanted something that doesn't kill itself on every error; I find it interesting to see what happens when a serious error occurs but the program continues anyway.

  • Does this "paradigm" have a name? I mean, except for
  • How bad is it to do the above?
  • Are there programs in use out there that just follow: "Hey, this is a fatal, unexpected error - but you know what? I don't care!"?
A: 

I don't know about a name.

How bad is it? I don't know; what are you using it for? Usually, it's better to have a crash than a silent error. Crashes are obvious, and can usually be corrected, but silent errors are likely to be unnoticed, and so it doesn't matter how easy they are to correct.

There are situations in which this may be the right thing to do. I've read of a Navy fire control computer with "battle bars" that disabled the exception facilities, on the grounds that as long as it was running it might be giving out good information, whereas if it wasn't running you knew very well it wasn't. In battle, it's frequently the case that doing nothing is bad, even if you can't do anything particularly useful.

David Thornley
I thought the "battle bars" were bypasses for the circuit breakers, with the logic being that you don't want to lose weapons control even if the control room is on fire. Of course, I suppose you could classify circuit breakers as "exception facilities", that just wasn't the first thing that came to mind...
TMN
@TMN: In the account I read (which may be erroneous, and I've never been in the Navy), the "battle bars" were to cut out the exception facilities, likely on the principle that you want to keep fire control computations if they might be right. The term may just have been reused. (The bypasses for the circuit breakers were probably inspired by a battle in the fall of 1942, when USS South Dakota found herself without power for ten minutes, while under fire from a Japanese battleship and two heavy cruisers.)
David Thornley
+2  A: 

In order for your program to proceed, you'd have to have a basic state that you know is good, and then each request is processed independently. For example:

  • A client/server application. New requests coming in to the server are each processed independently, and if one request fails catastrophically, the server catches it and either pretends it didn't happen, or lets the client know that it failed.
  • A local application. There's some base form, and everything the user tries to do is instantiated from there. If a process fails catastrophically, the instance of this process (maybe an MDI form) is killed, and the user is left with their initial application shell form, free to try again or do something else.
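The client/server case above can be sketched in a few lines. This is a minimal Python illustration, not anything from the answer itself; `handle_request` and the request format are invented placeholders:

```python
# Sketch of per-request isolation: each request is handled independently,
# and a catastrophic failure in one handler does not take down the server.

def handle_request(request):
    # A deliberately fragile handler for illustration.
    return 100 // request["value"]

def serve(requests):
    results = []
    for req in requests:
        try:
            results.append(("ok", handle_request(req)))
        except Exception as exc:
            # Catch everything: report the failure back to the client
            # instead of letting it kill the server loop.
            results.append(("error", repr(exc)))
    return results

results = serve([{"value": 4}, {"value": 0}, {"value": 2}])
# The ZeroDivisionError in the middle request is reported, not fatal.
```

The key design choice is that the catch sits at the request boundary: everything inside one request may fail, but the loop itself never does.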

The biggest thing to be careful of if you're doing this is that your application must have some irreducible core that is bug-free and will handle any unplanned exceptions (not an easy task). Besides the fact that swallowing unexpected exceptions can make troubleshooting/debugging miserable, that irreducible core can't fail, or else the entire application will fail with it.

rwmnau
+1 - "swallowing unexpected exceptions can make troubleshooting/debugging miserable". Miserable is too soft a word for such pain.
Konamiman
+4  A: 

On the naming, you could say the language exhibits "pig-headedness".

Crashing is normally preferred, because programs should not return unpredictable and unrepeatable results. No result is generally better than an unreliable one, especially if you are doing something business-critical. For example, it's better that a customer's order on Amazon is not processed (they can always re-submit it) than for the customer to be shipped a random product. Some errors are truly unrecoverable, for example if the instruction pointer is corrupted.

You can implement similar behaviour in most modern languages with catch-all exception handlers.
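As a rough sketch of what such a catch-all handler looks like in Python (the step names and the wrapper are made up for illustration):

```python
# A "never die" wrapper: run each step under a catch-all handler and
# carry on regardless of what fails.

def run_forgiving(steps):
    completed, failed = [], []
    for name, fn in steps:
        try:
            fn()
            completed.append(name)
        except Exception as exc:
            # The "I don't care" policy: note the error and keep going.
            failed.append((name, type(exc).__name__))
    return completed, failed

steps = [
    ("parse", lambda: None),
    ("lookup", lambda: {}["missing"]),  # raises KeyError
    ("emit", lambda: None),
]
done, errors = run_forgiving(steps)
```

Note that this catches `Exception`, not `BaseException`, so things like `KeyboardInterrupt` still terminate the program; even a "pig-headed" runtime usually wants that escape hatch.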

MichaelB76
Maybe we should keep a backup of the instruction pointer in another register...
Jimbo
+1  A: 

I went with the 'military topology' method of handling errors.

Errors should be classified by severity, then either dealt with or passed up to a superior for resolution.

A private is ordered to go clean the parade ground with a toothbrush. He runs out of soap. That's a problem he should figure out himself and not bother his superior.

If the parade ground has a platoon of enemy soldiers landing on it it's probably something he should tell his superiors about.
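The chain-of-command model maps naturally onto exception propagation: handle minor errors locally, and let severe ones escalate. A small Python sketch, with the error classes and thresholds invented to match the analogy:

```python
# "Military topology" error handling: minor problems are dealt with
# locally; severe ones propagate up to a superior handler.

class MinorError(Exception):
    pass    # ran out of soap

class SevereError(Exception):
    pass    # enemy platoon landing on the parade ground

def private(task):
    try:
        task()
        return "done"
    except MinorError:
        return "improvised"    # solve it yourself, don't bother a superior
    # SevereError is deliberately NOT caught here: it propagates up.

def sergeant(task):
    try:
        return private(task)
    except SevereError:
        return "escalated to HQ"

def soap_shortage():
    raise MinorError()

def enemy_landing():
    raise SevereError()
```

The private's `except` clause is the severity classification: everything it doesn't list is, by definition, above his pay grade.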

Please credit me in the 'snark' section of your book when you're rich and famous.

Jay
A: 

You're assuming that it's possible to determine that all errors of type X are show-stoppers and that all errors of type Y are not. This sounds fine on the surface, but the devil is in the details. In practice, I doubt you could name a single error that is always benign.

You mention "Syntax, Name, Type". If you know common syntax errors that can objectively be fixed without causing problems, build them into the spec and let the compiler handle them (at which point they would no longer be syntax errors). I don't know what kind of trivial error "Name" refers to. Avoiding type errors is a matter of having a dynamic typing system. Again, this is part of the spec, not error handling.

What I'm reading is that you want a certain spec, a versatile compiler that allows syntax to be written in a variety of ways, and a dynamic typing system. This is great, and I wish you the best of luck. But don't confuse it with interpreting the coder's intent and thinking you can tell a benign error from a detrimental one.

Dinah
A: 

It seems to me that you're talking about the difference between correctness and robustness. A highly robust program keeps running no matter what. A highly correct program gives only correct outputs. These two qualities are often diametrically opposed and must be balanced against each other. There's no general way to decide on a balance between the two. It depends on the purpose of the program. Thus, there's really no way for a compiler to intelligently decide which it should favour.

Programming for robustness is not necessarily a bad thing. Device drivers, for example, tend to be highly robust (at least the good ones). Generally speaking, it's not ok for a device driver to blow up in the face of unexpected errors since this could take down the entire system. Device drivers often interpret "fatal" errors in the "I don't care" way you describe. For example, if the hardware sends corrupted data, it's often sufficient to simply discard it and wait for a new sample.
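The discard-and-continue behaviour described for device drivers can be sketched simply. The frame format (payload plus claimed checksum) and the checksum function here are made-up examples, not any real driver interface:

```python
# Robustness over correctness: drop samples that fail a checksum and
# keep reading, rather than aborting the whole stream.

def checksum(payload):
    return sum(payload) % 256

def read_stream(frames):
    good = []
    for payload, claimed in frames:
        if checksum(payload) != claimed:
            continue    # corrupted frame: discard it and wait for the next
        good.append(payload)
    return good

frames = [
    ([1, 2, 3], 6),    # valid
    ([4, 5, 6], 99),   # corrupted: checksum mismatch
    ([7, 8, 9], 24),   # valid
]
```

This trades correctness (a corrupted-but-recoverable frame is silently lost) for robustness (the stream never stops), which is exactly the balance the answer describes.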

Peter Ruderman
A: 

The Erlang approach seems quite balanced: if a process fails, it won't (ideally) affect the others.
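Erlang pairs "let it crash" with supervisors that restart failed processes. A loose single-process Python analogy (this is only an analogy, not real OTP supervision, and all names are invented):

```python
# Erlang-flavoured supervision sketch: a crashed worker is restarted
# up to a retry limit instead of taking the whole system down.

def supervise(worker, max_restarts=3):
    restarts = 0
    while True:
        try:
            return worker(), restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise    # give up and escalate past this supervisor

attempts = {"n": 0}

def flaky_worker():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("worker crashed")
    return "result"
```

In real Erlang the isolation is stronger: each worker is a separate process with its own heap, so a crash can't corrupt its siblings' state, which is what makes the restart safe.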

SK-logic