It's the difference between "black box" testing (where you know what the code is supposed to do, but not how it works) and "white box" testing (where knowing how it works drives how you test it). "Black box" testing is what most people think of when you mention Quality Assurance.
I work for a company where the QA team are also software developers. (That narrows the field a lot if you care to guess the company.) I know Joel's opinion, and my experience leads me to partially disagree: for the same reason that a "white hat" hacker is more effective at finding security holes, certain kinds of errors are more effectively found by white box testers who know how to write code (and therefore know what the common mistakes are - for example, resource management issues like memory leaks).
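To make that concrete, here's the kind of test I have in mind. This is only a sketch with made-up names (load_config and the test class are hypothetical), but it shows the white-box pattern: a tester who has read the code knows which error path is likely to leak a resource, and writes the test to hit exactly that path - something a black-box tester would rarely think to do.

    import builtins
    import os
    import tempfile
    import unittest

    def load_config(path):
        """Hypothetical function under test: parses key=value lines into a dict."""
        f = open(path)
        try:
            return dict(line.strip().split("=", 1) for line in f if line.strip())
        finally:
            f.close()  # drop this finally and a malformed line leaks the handle

    class LoadConfigWhiteBoxTest(unittest.TestCase):
        def test_handle_closed_on_malformed_input(self):
            # Write a config file whose bad line forces the error path.
            fd, path = tempfile.mkstemp()
            with os.fdopen(fd, "w") as f:
                f.write("line_without_equals\n")

            # Spy on open() so the test can inspect the handle afterwards.
            captured = {}
            real_open = builtins.open
            def spying_open(*args, **kwargs):
                handle = real_open(*args, **kwargs)
                captured["handle"] = handle
                return handle

            builtins.open = spying_open
            try:
                with self.assertRaises(ValueError):
                    load_config(path)
            finally:
                builtins.open = real_open
                os.remove(path)

            # The white-box check: the error path must not leak the file handle.
            self.assertTrue(captured["handle"].closed)

    if __name__ == "__main__":
        unittest.main()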
Also, since QA-oriented developers are part of the process from the initial design phase, they can theoretically help to drive higher-quality code throughout the process. Ideally, for each developer working on the project with a mental focus on functionality, you have an opposing developer with a mental focus on breaking the code (and thus making it better).
Seen in that light, it's less a matter of using developers as testers than it is a kind of disconnected pair programming, where one developer has an emphasis on controlling quality.
On the other hand, a lot of testing (such as basic UI functionality) frankly doesn't need that kind of skill. That's where Joel has a point.
For many businesses, I could see a system where programming teams trade off code review and testing duties for each other's code. Members of the Business Logic team, for example, could spend an occasional tour testing and reviewing code for the UI team, and vice versa. That way you're not "wasting" developer talent on full-time testing, but you are gaining the advantages of exposing the code to (hopefully) expert scrutiny and punishment. Then, a more traditional QA team can take up the "black box" testing.