I've recently worked for companies scoring 8-9 on a strict interpretation of the test, but I would argue that they were actually better than many companies scoring 12.
This statement is almost true: "The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question."
#3 - Daily builds are not appropriate for every project. I have worked on projects where we did continuous builds (on every checkin) that included automated unit and regression testing. In the spirit, if not the literal text, of the question, I agree with Franci Penov's comment that the real point is regular verification of the code base.
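To make the "regular verification" point concrete, here's a minimal sketch of the kind of script a per-checkin build might run; the make target, test directories, and pytest are assumptions for illustration only, not what any of those projects actually used:

    #!/usr/bin/env python3
    """Hypothetical per-checkin verification: build, then unit and regression tests.
    The make target, test paths, and pytest are illustrative assumptions."""
    import subprocess
    import sys

    def run_step(name, cmd):
        """Run one verification step; return True if it exited cleanly."""
        print("== " + name + ": " + " ".join(cmd))
        return subprocess.run(cmd).returncode == 0

    def main():
        steps = [
            ("build", ["make", "all"]),                             # full build from source
            ("unit tests", ["pytest", "tests/unit"]),               # fast per-checkin checks
            ("regression tests", ["pytest", "tests/regression"]),   # guard against old bugs
        ]
        for name, cmd in steps:
            if not run_step(name, cmd):
                print("Verification failed at: " + name)
                return 1
        print("All checks passed for this checkin.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Whether something like this runs on every checkin or once a night matters less than that it runs automatically and flags breakage immediately.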
#5 - At a couple of companies I've worked for, some engineers were tasked with fixing bugs while others implemented new features. It required careful branch management in source control, but it worked: we didn't stop work on the next release just to get the bugs out of the current one, yet every "must-fix" bug was fixed for the release, usually on time, and all fixes were propagated to the next release branch with minimal disruption. Speaking of which, not every bug is "must-fix," and that's a business decision, not an engineering decision. The issue gets even muddier when you track enhancement requests in the same database as bugs.
#12 - At a previous company, we regularly did formal usability testing--not the "hallway" method Joel describes elsewhere, but bringing in customers and having them use the UI and/or various prototypes. IMO it was far superior, because we got feedback from real users instead of random programmers.
The neat thing about the Joel Test is that it's easy to make glib pronouncements about software development organizations, without paying attention to details. :-)