I’m going to play devil’s advocate for a moment. I have always wondered why browser detection (as opposed to feature detection) is considered flat-out bad practice. If I test a certain version of a certain browser and confirm that a certain piece of functionality behaves in a predictable way, then it seems OK to special-case it. The reasoning is that this will be future-proof, because that particular browser version is not going to change. On the other hand, if I detect that a DOM element has a function X, it does not necessarily mean that:
- this function works the same way in all browsers, and
- more crucially, that it will work the same way in all future browsers (a short sketch contrasting the two approaches follows this list).
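To make the contrast concrete, here is a minimal sketch of the two approaches in plain JavaScript. The IE8 user-agent check and the `attachEvent` fallback are purely illustrative examples of my own, not a recommendation of how to handle events.

```js
var button = document.createElement("button");

// Browser detection: special-case a browser version whose behaviour I have
// verified by hand (IE8 here is purely illustrative).
var isIE8 = /MSIE 8\.0/.test(navigator.userAgent);

function onClick() {
  /* ... */
}

if (typeof button.addEventListener === "function") {
  // Feature detection: this only proves the method exists, not that it
  // behaves identically in every browser (present or future) that exposes it.
  button.addEventListener("click", onClick, false);
} else if (isIE8) {
  // Browser-specific fallback, based on behaviour confirmed against IE8.
  button.attachEvent("onclick", onClick);
}
```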
I just peeked into the jQuery source, and they do feature detection by inserting a carefully constructed snippet of HTML into the DOM and then checking it for certain features. It’s a sensible and solid approach, but it would be a bit too heavy if I just did something like that in a little piece of personal JavaScript (without jQuery). They also have the advantage of practically infinite QA resources. On the other hand, what you often see people do is check for the existence of a function X and then, based on that, assume the function will behave a certain way in every browser that has it.
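For what it’s worth, here is a rough sketch of what that kind of behaviour-based detection can look like; the specific checks (leading whitespace, opacity) are my own illustrative examples and are not lifted from the jQuery source.

```js
// A minimal sketch of behaviour-based feature detection: insert a crafted
// fragment and inspect what the browser actually did with it.
var support = (function () {
  var div = document.createElement("div");

  // Note the leading whitespace and the inline style.
  div.innerHTML = "  <a href='/a' style='opacity:.5;'>a</a>";

  var a = div.getElementsByTagName("a")[0];

  return {
    // Does innerHTML preserve leading whitespace (i.e. a text node comes first)?
    leadingWhitespace: div.firstChild.nodeType === 3,
    // Does an inline opacity style survive the round trip?
    opacity: /^0\.5/.test(a.style.opacity)
  };
})();

if (!support.leadingWhitespace) {
  // Work around the quirk instead of asking "is this browser X?"
}
```

The point is that this tests what the browser actually does rather than what objects it happens to expose, which is exactly the part that a bare existence check skips.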
I’m not saying feature detection isn’t a good thing (if used correctly), but I wonder why browser detection is usually dismissed out of hand even when it sounds logical. I wonder whether it’s just another trendy thing to say.