So I ran into a situation today where some production code was failing precisely because a method performed exactly as documented in MSDN. Shame on me for not reading the documentation. However, I'm still scratching my head as to why it behaves this way, even if it's "by design", since this behavior is exactly the opposite of what I would have expected (and of other, known behaviors), and therefore seems to violate the principle of least surprise.
The `All()` method allows you to supply a predicate (such as a lambda expression) to test an `IQueryable`, returning a Boolean value that indicates whether all collection members match the test. So far so good. Here's where it gets weird: `All()` also returns `true` if the collection is empty. This seems completely backwards to me, for the following reasons:
- If the collection is empty, a test like this is, at best, undefined. If my driveway is empty, I cannot assert that all cars parked there are red. With this behavior, on an empty driveway all cars parked there are red AND blue AND checkerboard - all of these expressions would return true.
- For anyone familiar with the SQL notion that NULL != NULL, this is unexpected behavior.
- The `Any()` method behaves as expected, and (correctly) returns `false` because the collection does not have any members that match the predicate.
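To make the surprise concrete, here's a minimal sketch of what I mean (the `Car` class and its `Color` property are just stand-ins for the driveway analogy, not my actual production code):

```csharp
using System;
using System.Linq;

class Car
{
    public string Color { get; set; }
}

class Driveway
{
    static void Main()
    {
        // An empty "driveway" -- no cars parked at all.
        var cars = Enumerable.Empty<Car>().AsQueryable();

        // All() is true on the empty sequence, whatever the predicate says...
        Console.WriteLine(cars.All(c => c.Color == "Red"));   // True
        Console.WriteLine(cars.All(c => c.Color == "Blue"));  // True

        // ...while Any() reports that nothing matches.
        Console.WriteLine(cars.Any(c => c.Color == "Red"));   // False
    }
}
```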
So my question is: why does `All()` behave this way? What problem does it solve? Does this violate the principle of least surprise?
I tagged this question as .NET 3.5, though the behavior applies to .NET 4.0 as well.
EDIT: Ok, so I grasp the logic behind this, as so excellently laid out by Jason and the rest of you. Admittedly, an empty collection is something of an edge case. I guess my question is rooted in the fact that just because something is logical doesn't mean it necessarily makes sense if you're not in the correct frame of mind.
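For anyone else trying to get into that frame of mind, one way to see the logic is that "every element satisfies p" is the same claim as "no element violates p", and an empty sequence has no violators. A small sketch of that equivalence (the names here are purely illustrative):

```csharp
using System;
using System.Linq;

class VacuousTruth
{
    static void Main()
    {
        var empty = Enumerable.Empty<int>();

        // "All elements are positive" == "no element fails to be positive".
        // With no elements at all, there are no counterexamples, so both are true.
        bool all       = empty.All(n => n > 0);          // True
        bool noFailure = !empty.Any(n => !(n > 0));      // True

        Console.WriteLine(all == noFailure);             // True
    }
}
```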