I always felt that expecting exceptions to be thrown on a regular basis and using them as control flow was a bad thing. Exceptions feel like they should be, well, the "exception". If you're expecting and planning for an exception, that seems to indicate that your code should be refactored, at least in .NET...
However. A recent scenario gave me pause. I posted this on msdn a while ago, but I'd like to generate more discussion about it and this is the perfect place!

So, say you've got a database table that several other tables reference through foreign keys (in the case that originally prompted the debate, there were 4 foreign keys pointing to it). You want to allow the user to delete a row, but only if there are NO foreign key references; you DON'T want to cascade delete.
I normally just do a check to see if there are any references, and if there are, I inform the user instead of doing the delete. It's very easy and relaxing to write that in LINQ, since related tables are members on the object, so Section.Projects, Section.Categories, and so on are nice to type with IntelliSense and all...
But the fact is that LINQ then has to hit potentially all 4 tables to see if there are any result rows pointing to that record, and hitting the database is obviously always a relatively expensive operation.

The lead on this project asked me to change it to just catch a SqlException with a code of 547 (foreign key constraint) and deal with it that way.
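For concreteness, here is a minimal sketch of the approach the lead suggested. The question is about .NET and SqlException 547, but the same pattern can be shown with Python's `sqlite3` module, where a foreign-key violation surfaces as `sqlite3.IntegrityError` instead; the table and column names are made up for illustration:

```python
import sqlite3

# In-memory stand-in for the real schema (hypothetical names).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default
conn.execute("CREATE TABLE section (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE project (
    id INTEGER PRIMARY KEY,
    section_id INTEGER NOT NULL REFERENCES section(id))""")
conn.execute("INSERT INTO section (id) VALUES (1)")
conn.execute("INSERT INTO project (id, section_id) VALUES (10, 1)")

def delete_section(section_id):
    """Attempt the delete and let the database enforce referential
    integrity; a child row shows up as an exception, not a pre-check."""
    try:
        conn.execute("DELETE FROM section WHERE id = ?", (section_id,))
        return True
    except sqlite3.IntegrityError:
        # The rough equivalent of catching SqlException error 547 in .NET.
        return False

print(delete_section(1))  # False: a project row still references section 1
```

The database does the referential-integrity check either way; the only question is whether the application also queries the child tables first or just reacts to the failure.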

I was...
resistant.

But in this case, it's probably a lot more efficient to swallow the exception-related overhead than to swallow the 4 table hits... Especially since we have to do the check in every case, but we're spared the exception in the case when there are no children...
Plus the database really should be the one responsible for handling referential integrity, that's its job and it does it well...
So they won and I changed it.

On some level it still feels wrong to me though.

What do you guys think about expecting and intentionally handling exceptions? Is it okay when it looks like it'll be more efficient than checking beforehand? Is it more confusing to the next developer looking at your code, or less confusing? Is it safer, since the database might know about new foreign key constraints that the developer might not think to add a check for? Or is it a matter of perspective on what exactly you think best practice is?

+4  A: 

Your lead is absolutely right. Exceptions are not just for once-in-a-blue-moon situations, but specifically for reporting other-than-expected outcomes.

In this case the foreign key check would still take place, and exceptions are the mechanism by which you can be notified.

What you should NOT do is catch and suppress exceptions with a blanket catch-all statement. Fine-grained exception handling is exactly what exceptions were designed for in the first place.

levik
Well, it is the expected outcome, that's the problem. Far more often than not, there will be children.
Grank
I very much agree with your edit. Specifically, only SqlException 547 is being caught.
Grank
Totally agree with you on this one levik, people that are replying with this "exceptional circumstances" are missing the point (see my post).
Rob Cooper
+4  A: 

Wow,

First off, can you please distill the question down a bit? While it was nice to read a well-thought-out and explained question, that was quite a lot to digest.

The short answer is "yes", but it can depend.

  • We have some applications where we have lots of business logic tied up in the SQL queries (not my design, Gov!). If this is how it is structured, management can be difficult to convince otherwise, since it "already works".
  • In this situation, does it really make a big difference? It's still one trip across the wire and back. Does the server do much work before it realises that it cannot continue (i.e. if there is a sequence of transactions leading up to your action, does it fall over halfway through, wasting time)?
  • Does it make sense to do the check in the UI first? Does it help your application by providing a nicer user experience? (e.g. I have seen cases where you step through several steps in a wizard, it starts, then falls over, when it had all the info it needed to fall over after step 1.)
  • Is concurrency an issue? Is it possible that the record may be removed or edited before your commit takes place (as in the classic File.Exists boo-boo)?

In my opinion:

I would do both. If I can fail fast and provide a better user experience, great. Any expected SQL (or any other) exceptions should be getting caught and fed back appropriately anyway.

I know there is a consensus that exceptions should not be used for other than exceptional circumstances, but remember, we are crossing application boundaries here; expect nothing. Like I said, this is like File.Exists: there is no point checking, because the file can be deleted before you access it anyway.
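The File.Exists point can be made concrete: a check-then-act sequence leaves a window in which another process can delete the file, so the exception handler is needed whether or not you pre-check. A Python sketch of the same race, with `os.path.exists` standing in for File.Exists and a hypothetical file name:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "report.txt")

# Look-before-you-leap: the check can pass and then go stale
# before open() actually runs (the File.Exists boo-boo).
def read_lbyl(p):
    if os.path.exists(p):
        # <-- another process may delete p right here
        with open(p) as f:
            return f.read()
    return None

# Easier-to-ask-forgiveness: the failure is handled no matter when
# the file disappears, so the pre-check buys nothing for safety.
def read_eafp(p):
    try:
        with open(p) as f:
            return f.read()
    except FileNotFoundError:
        return None

print(read_eafp(path))  # None: the file never existed
```

The foreign-key check before a delete has exactly the same shape: the child row can appear between the check and the delete, so only the exception path is actually reliable.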

Rob Cooper
It's a very small-scale application; in the client's current environment, most of the insertion will be done by only one person. At max there are I think 25 users. So concurrency isn't an issue, performance isn't an issue, it's a perfect time to debate standards ;)
Grank
But it's not just about performance or concurrency. It's the assumption that is the problem. Like I said, do both if it provides a better user experience, but you should be handling exceptions in either case.
Rob Cooper
+1  A: 

There's nothing wrong with what you are doing. Exceptions aren't necessarily "exceptional". They exist to allow calling code to have fine-grained error handling depending on its needs.

Charles Graham
+2  A: 

I think you are right: exceptions should only be used to handle unexpected outcomes, and here you are using an exception to deal with a possibly expected outcome. You should deal with this case explicitly, but still catch the exception to surface a possible error.

Unless this is the way this case is handled throughout the code, I would side with you. The performance issue should only be brought up if it is actually an issue, i.e. it depends on the size of those tables and the number of times this function is used.

Harald Scheirich
+1  A: 

Catching the specific SqlException is the right thing to do. This is the mechanism by which SQL Server communicates the foreign key condition. Even if you might favor a different usage of the exception mechanism, this is how SQL Server does it.

Also, while you are checking the four tables, some other user might add a related record after your check reads a table but before the delete runs.

MusiGenesis
A: 

I'd recommend calling a stored procedure that checks whether there are any dependencies, then deletes only if there aren't. That way, the integrity check and the delete happen in the same round trip.

Of course, you'd probably want different stored procedures for singleton deletes vs. batch deletes... The batch delete could look for child rows and return the set of records that were not eligible for the bulk delete (i.e. had child rows).
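That batch shape can be sketched as a single transaction that checks and deletes server-side. This is shown in Python/sqlite3 rather than as a T-SQL stored procedure, and the table names are hypothetical, but the structure is the same: one round trip, and the caller gets back the ids that were skipped because they still had children:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE section (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE project (id INTEGER PRIMARY KEY, "
             "section_id INTEGER REFERENCES section(id))")
conn.executemany("INSERT INTO section (id) VALUES (?)", [(1,), (2,)])
conn.execute("INSERT INTO project (id, section_id) VALUES (10, 1)")

def try_delete_sections(ids):
    """Batch delete: delete each section with no child rows, and
    report the ids that were skipped -- check and delete together,
    inside one transaction, like the body of a stored procedure."""
    skipped = []
    with conn:  # commits on success, rolls back on error
        for sid in ids:
            has_child = conn.execute(
                "SELECT 1 FROM project WHERE section_id = ? LIMIT 1",
                (sid,)).fetchone()
            if has_child:
                skipped.append(sid)  # ineligible: still referenced
            else:
                conn.execute("DELETE FROM section WHERE id = ?", (sid,))
    return skipped

print(try_delete_sections([1, 2]))  # [1]: section 1 still has a project
```

Keeping the check and the delete in the same transaction also narrows the race window the other answers mention, since nothing happens between them on the application side.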

Jeff
+2  A: 

Nice question. However, I find the answers ... scary!
An exception is a kind of GOTO.
I don't like using exceptions in this way because that leads to spaghetti code.
Simple as that.

AJ
+1 for the GOTO! that's sooo true
Peter Gfader