views: 574

answers: 11

We all know the Joel test. Joel wrote it eight years ago, in August 2000. He noted that most software organizations ran with a score of 2 or 3, which should have been quite scary for them.

Eight years is a long time in the software industry :).

Two years ago a friend of mine asked his friends about the Joel test [the linked text is in Polish] and published some results.

In short:

  • the two worst companies scored 4 points,
  • the three best scored 8 points,
  • not many companies ask candidates to write code during the interview,
  • almost no one does usability testing of the UI.

I would like to ask: how many points does your company score on the Joel test today?

I am sure the results look much better than 8 years ago. But I am wondering how much better, and how much Joel's post influenced this progress...

+1  A: 

My company scores a 9. #2 & #3 aren't so relevant, as we do web development, and unless I'm missing something we don't need to build anything :P

thesmallprint
So you never deploy from a test server to a production server?
MarkJ
A: 

The two software companies I have worked for both score 5 points.

Grzegorz Gierlik
+9  A: 

Our team scores a 12. Really, we do. Now for the serious part of my post. :-)

Joel's 12 points are relevant no matter the type of project. I am tired of people trying to tell me that some of them do not apply because they are "just creating a website". Let's say we are talking about a static set of HTML documents:

  1. Where do you keep the history of the changes? How do you merge conflicts on shared files like CSS? Where do people get the latest official version of all the files?
  2. How do you package the styles, the documents, and anything else you need into a final package that gets the product to your clients, starting from zero? (See the sketch after this list.)
  3. How do you verify that the changes that came in during the day did not break everything? How do you run regression checks?
  4. How do you track issues and make sure they don't get lost in somebody's inbox? How do you search them, or get a list of the currently open ones? How do you triage and prioritize?
  5. How do you ensure that issues with old content will ever be addressed, if you always prioritize new content over them?
  6. How do you know when and what is available for your customers?
  7. How do you know what needs to be done?
  8. When are your authors actually writing the content?
  9. Notepad is NOT enough for writing HTML. :-)
  10. You do run all content by the editors, right?
  11. You wouldn't hire a chef based on how well he dances salsa, right? So why would you hire an author without seeing a sample of her writing?
  12. "Hey, can you read this page quickly and let me know if I got everything right?"
Franci Penov
good point, good post.
Brian R. Bondy
I slightly disagree about #3. We do not do daily builds, but we build and run a battery of tests on every bug fix. The end effect is the same in my opinion - there's nothing to build if nothing has changed. Maybe we could tweak that definition a little bit.
Mostlyharmless
@Mostlyharmless "daily build" should really be called "daily product verification"
Franci Penov
A: 

I score an 11 on my personal projects and a 10 for my company, but 2 of the points are irrelevant, so either way I get top scores.

Unkwntech
What are the irrelevant points? I am curious what software project doesn't need something from that list.
Franci Penov
A: 

My last job was a 3, and that only because we developers insisted on source control and also ran the interviews. We didn't have control over, or have, much else.

My current job is a 7-ish, with a pretty easy path to 11 if we wanted it. We've talked about daily builds, but we don't necessarily generate that much progress in a day. Besides, the builds are one step. Still, we'll probably add them whenever we upgrade our build server. We get specs, but they might not be complete. We have a schedule, but it might not be up-to-date. We mostly work on bugs first, but not always.

The only one we can't easily do much about is the quiet work environment, especially right now, as the company is building a new addition and the expansion is right next to our spot in the building. Once that is done, things will be better, but there will still be no offices.

Caleb Huitt - cjhuitt
A: 

My job scores only 2 points, which is scary. I am trying to implement a better development environment at work, with TDD and Agile, but I am still learning, so it will take some time before that happens.

Forser
+1  A: 

We are an 8, I would say, but not all of these criteria fit. I work on projects that are 8-40 hours long, so a schedule or daily builds seem like overkill, and there is never concurrent development on one project.

Brian G
+1  A: 

We score 9.5.

Our projects usually last between two weeks and two months, with outliers that run 6 months. Given this, our schedules are usually kept up-to-date until about 75% of the way through a project's calendar time; after that, it takes an hour a day to keep the Gantt chart up-to-date, and that's unacceptable when your team is 2-4 people. So I guess that's a "no" for #6.

New candidates don't write code during the interview, but they DO perform a mini code review. Still, that's a "no" on #11.

The half-point comes from #5 - given the nature of our project cycle, we're always fixing bugs, and we're always adding new features. Keeping the defect list at zero would be a waste of time; we just aim to keep a consistently low count.

Ben Straub
+1  A: 

We score 11; our office doesn't have corridors :-)

I don't think it's a particularly good test, though. It covers some good points, but it could cover some areas in more detail, such as:

  • Testing - developer unit tests (see the sketch after this list), peer/code review of the tests to ensure decent coverage, test plans for the test teams, testing in multiple environments, etc.
  • Code Review
  • Work packages - Are you sure the developers understand exactly what they are meant to be doing? Even the tightest plain-English spec will be open to interpretation due to the ambiguity of the language; do you verify that a developer's technical approach will actually produce the correct result?
  • Branching / tagging / labeling strategies
  • Do your developers KNOW how to leverage the tools you have... even basic things like understanding the branching/merging concepts for the source control you use? It's no use paying for the best tools in the world if the people using them don't know how to use them.
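
To illustrate the first bullet above: the cheapest developer test is one small enough that a reviewer can see its coverage (and its gaps) at a glance. Here is a toy sketch using Python's standard unittest module; the slugify function is invented purely for the example.

```python
import unittest

def slugify(title):
    """Turn a page title into a URL slug (function invented for this example)."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # A reviewer can see at a glance which cases are covered and which
    # (punctuation? non-ASCII titles?) are not; that is the review value.
    def test_basic(self):
        self.assertEqual(slugify("The Joel Test"), "the-joel-test")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("daily   builds"), "daily-builds")

if __name__ == "__main__":
    unittest.main()
```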

I'm not sure point 12 is always relevant if you have a separate design team putting the interface together - although you could argue it's purely to make sure the design matches the spec/wireframe/design... but that's a bit tenuous.

Steven Robbins
It's a good test until you score highly. That's the point to move on - and by that point you will have the resources to do so (as you clearly do).
MarkJ
+1  A: 

Microsoft UK ran an online poll in August 2009 through the MSDN Flash newsletter. 43% of the respondents scored 6 or below. I'm guessing that people who subscribe to MSDN Flash are above average, so the overall picture may be worse! Or maybe in August the gurus were in luxury Bahamas resorts and missed the poll?

Here are the full results:

4%  // Perfect 12. We rock!
9%  // Very good 10 or 11. Respect!
41% // Respectable 7, 8 or 9
29% // Disappointing 4, 5 or 6
13% // Disastrous 1, 2 or 3
1%  // 0. Enough said.
4%  // The Joel Test is highly irresponsible and sloppy

PS Our own Joel score may be provided to individual enquirers, if a satisfactory non-disclosure agreement is signed :)

MarkJ
+1  A: 

I've recently worked for companies scoring 8-9 on a strict interpretation of the test, but I would argue that they were actually better than many companies scoring 12.

This is almost a true statement: "The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question."

#3 - Daily builds are not appropriate for every project. I have worked on projects where we did continuous builds (every checkin) that included automated unit/regression testing. In the spirit (but not literal text) of the question, I would agree with the comment by Franci Penov that it is really about regular verification of the code base.
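
In that spirit, per-checkin verification can be as small as one script the build server runs after every commit. This is a generic sketch, not any particular CI product; the build and test commands are placeholders for whatever a given project actually uses.

```python
#!/usr/bin/env python
"""Per-checkin verification: build, then run the automated test suite."""
import subprocess
import sys

# Placeholder commands - substitute your project's real build and test steps.
STEPS = [
    ["make"],                                  # one-step build (Joel's #2)
    ["python", "-m", "unittest", "discover"],  # automated unit/regression tests
]

def verify():
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            print("FAILED at:", " ".join(step))
            return 1
    print("Checkin verified: build and tests passed.")
    return 0

if __name__ == "__main__":
    sys.exit(verify())
```

Whether this runs nightly or on every checkin, the effect is the same: regular verification of the code base.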

#5 - At a couple of companies I've worked for, some engineers were tasked with fixing bugs while others were implementing new features. Proper branch management in source control was crucial, but it did work; we didn't stop working on the next release just to get the bugs out of the current one. But every "must-fix" bug was fixed for release, usually on time, and all fixes were propagated to the next release branch with minimal disruption. Speaking of which, not every bug is "must-fix" and that's a business decision, not an engineering decision. And the issue gets even muddier when you track enhancement requests in the same database as bugs.
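
The branch discipline described there is easy to sketch. Assuming a git-style workflow (the branch and bug names below are hypothetical, and the original answer predates git's ubiquity), the key move is that a must-fix bug lands on the shipping release branch first and is then merged forward, so the fix is never redone by hand:

```python
#!/usr/bin/env python
"""Propagate a must-fix bug fix from the current release to the next one."""
import subprocess

RELEASE = "release-1.0"  # branch the current release ships from
NEXT = "release-2.0"     # branch where new-feature work continues

def git(*args):
    # Thin wrapper so a failed merge stops the script immediately.
    subprocess.run(["git", *args], check=True)

def propagate_fix(fix_branch):
    # The fix is merged into the shipping release first...
    git("checkout", RELEASE)
    git("merge", "--no-ff", fix_branch)
    # ...and then merged forward, so the next release cannot regress it.
    git("checkout", NEXT)
    git("merge", RELEASE)

if __name__ == "__main__":
    propagate_fix("fix/issue-1234")
```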

#12 - At a previous company, we regularly did formal usability testing--not the "hallway" method Joel describes elsewhere, but bringing in customers and having them use the UI and/or various prototypes. IMO it was far superior, because we got feedback from real users instead of random programmers.

The neat thing about the Joel Test is that it's easy to make glib pronouncements about software development organizations, without paying attention to details. :-)

system PAUSE