Most developers are aware of the idea of eating one's own dog food, but at the same time it's mathematically proven that it's cheaper to have QA staff (or testers) do QA than to have developers do it.

Now of course there is no point in being an extremist in either direction, but I've noticed that, depending on the project and the developer (or QA staff, or manager), the balance sways one way or the other. So I'm curious: what are some good rules of thumb for determining how much QA should be done in each camp?

Update: Although it may not hold mathematically in every case, Joel's article on QA should make the point clearly enough; he actually has one on dogfooding too :)

+5  A: 

Dogfooding isn't about QA at all. It's about using the product you are developing yourselves, so that you can see where the workflow might be improved and generally feel the pain points of using your software.

It isn't about improving the quality of the code; it's about making your software easier to use and guiding your choices of feature development.

Garry Shutler
+2  A: 

There are two roles commonly described as "QA".

  • Quality Assurance -- folks who assure that the quality plan is being executed.

  • Testers -- developers who don't code.

If your question was focused on "assuring that the quality plan is being executed", then it's easy: developers do work that meets the quality plan, and QA audits that the plan was followed.

Since your question is focused on "developers who don't code", you don't have much of a quality plan in the first place. In this case, the developers need to (1) integrate the testers into their ranks, (2) create a quality plan, and (3) work according to that plan.

The plan may involve some independent testing. This can be done by having developer A write tests for developer B. It can also be done by having developer A write tests and having those tests peer reviewed before coding begins.
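
As a minimal sketch of what that independent testing could look like in Python (the billing module, the Invoice class, and the figures are hypothetical, purely for illustration): developer A writes the test against the agreed behaviour, independently of developer B's implementation.

    import unittest

    # "billing" and "Invoice" are hypothetical names used only for this sketch;
    # substitute whatever module and class your quality plan actually covers.
    from billing import Invoice


    class TestInvoiceTotals(unittest.TestCase):
        """Written by developer A against the agreed spec, to be run
        against developer B's implementation of Invoice.total()."""

        def test_total_includes_tax(self):
            invoice = Invoice(subtotal=100.00, tax_rate=0.08)
            self.assertAlmostEqual(invoice.total(), 108.00)


    if __name__ == "__main__":
        unittest.main()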

The idea is that developers write and check and test their own code. One development organization. Everyone codes -- some more than others.

QA makes sure the developers really are doing that consistently and completely.

I don't see "developers who don't code" as a viable job. These are really junior-level developers. You can either leverage them to grow some additional technical skills, or squander them in a position where they spend a fair amount of time being beaten up by users and arguing with developers.

One way to leverage "developers who don't code" is to start them out writing tests; then have them fix broken code; then have them code from someone else's design; then let them do design work of their own.

There are many ways to discourage these "developers who don't code". One is to leave them guessing what the users meant. Do this by narrowing their knowledge of the process to just what they find in business analysis documents. Another is to put them in a position where they have to argue over the interpretation of business analysis documents with developers.

S.Lott
I meant QA in the sense of Testers
Robert Gould
Very interesting ideas about "developers who don't code"; it seems like a great topic for a more extensive article or blog post. Good ideas too: in my industry we have plenty of these folks, so making better use of them is an important topic as well.
Robert Gould
+8  A: 

I nearly agree with Garry's answer - except he claims it isn't about improving the quality of the code. I think it absolutely is to do with that, as well as the usability he mentions in his first paragraph.

If you can have a wide dogfood, you will get:

  • More realistic data than QA is likely to use (given that it's real data!)
  • A wider range of data than QA is likely to use, by sheer force of numbers

I've certainly fixed bugs found in dogfooding plenty of times, where the situation just hadn't been tested by QA. In many situations you really can't test all possibilities, but dogfooding helps to test more.

Get as many people to dogfood as you can (having set appropriate expectations about potential problems, of course). It certainly shouldn't be a "developer only" thing (unless you're building a developer-only product, of course). This doesn't take away from the valuable work of QA - it adds to it.

It depends massively on what you're developing though. I've been in one company where the employees would never have any cause to use the product in everyday life, so dogfooding wasn't really feasible. In another company we were building a web proxy, so it made sense to get a large chunk of the company to browse through the proxy. I've recently been working on a synchronization product aimed at consumers, so again it makes sense to dogfood widely.

Jon Skeet
I was more trying to make the point that dogfooding isn't a type of QA, it is an activity that is useful to do in addition to regular QA practices. You may turn up bugs through doing it, but that isn't the main thrust of it.
Garry Shutler
I'd say the split between it being usability testing and other kinds of testing (bugs, real-world load testing etc) varies by product. I think it's *a* form of QA, just a different one to "dedicated" QA.
Jon Skeet
+1  A: 

Generally dogfooding isn't that useful, unless you happen to be developing developer tools that you as developers also need to use.

Where I work we do use our own product, but not nearly as extensively as our customers do, since our customers are not in the business of software development.

AnthonyWJones
It's not restricted to just developer tools. Are there no other programs that you and your colleagues use from day to day? Suppose you were writing a browser plugin, or a chat tool, or something like that - wouldn't *that* be useful to dogfood? The test isn't "is it a developer tool". (Continued)
Jon Skeet
It's "it the product something people in my company (whether developers or not) would use." If that's the case, it's dogfoodable.
Jon Skeet
True but that is still pretty narrow in the overall scheme of things and developers tend to use computers in ways quite different from a typical user.
AnthonyWJones
Does your company only employ developers? If not, why limit dogfooding to them? Employees often use a wide range of software, both at home and at work. Not all of it needs to be on a "computer" either - you could be developing a mobile product, for instance.
Jon Skeet
Not everybody in our company is a developer, but everyone uses the product. Nevertheless, it's still meager usage in comparison to our customers', and I still think that in the general case this will be true of most software. Hence "dogfooding isn't that useful"; note I didn't say it was of no use.
AnthonyWJones
+3  A: 

Formal vs. Informal

Using your own product ("eating your dog's food") is part of Quality Assurance and I would put it under informal testing category as opposed to more formal testing usually done by a separate QA department.

Depth vs. Breadth

“Eating your own dog food” is aimed at giving the people directly responsible for a product or service's design and development first-hand experience of using it. For a feature-rich application it might go deeper than formal testing, and it brings a specific user's perspective, whereas formal testing normally covers the breadth of the software's use cases and tries to operate in more objective and measurable terms.

Little Annoyances That Make a Difference

There is also a category of shortcomings very specific to an environment that can only be diagnosed “in the field” (like that really annoying rattling sound that seems to come from nowhere whenever you’re cruising at 72.52mph and drives you absolutely nuts, yet none of the mechanics is able, or willing, to track down the cause while the car is at the garage).

Naked Software vs. Application + Set of Practices

Using your own product goes far beyond the software itself; it unavoidably encompasses the entire “user journey”. “Eating your own dog food” lets you develop and better understand the techniques of using the software within a domain. It is fair to say that some software is used in a number of widely different contexts. Formal testing might not be able to cover each of these contexts in depth, simply because it might be difficult to define formally which specific tests you need to run once something has changed, or because it might be too expensive to mimic these contexts in-house or to go through all the known test cases every time the software changes slightly.

Internal Change vs. External

Formal testing is really good at checking that changes made to the software work, but the other important area of evolution, where formal testing struggles, is detecting changes in the software's environment or usage patterns that come from the external world. Using your own product will help you detect these changes much sooner, probably before the users tell you (depending on the quality of the user feedback channel).

What is Wrong with "the Balance"?

As such, there is no balance to strike between using your own product and formally testing it: do both as much as is feasible, i.e. as long as the returns are worth the effort. One complements the other; neither is a replacement.

Who Should Do Testing: the Distinction

That is, unless there is no dedicated resource for formal testing (a sole developer, or developers doing all the testing) and you have to decide how much time to dedicate to each of the two activities. Well, don’t. The important distinction here is that formal testing is best done by an independent authority, i.e. people who don't report to the development and design team. On the other hand, everyone (designers, developers, marketers, even the CEO) in the company should be using its products or services as much as possible, because the core idea behind “dogfooding” is to give everyone involved first-hand experience of the service or product they contribute to, in a real-life context.

You can't Dogfood Everything! Or can you?

As far as the “you can’t dogfood certain types of software” argument goes, well, let’s not concentrate on what dogfooding looks like (eating your own dog food), but rather on what it means: getting first-hand experience and “aligning interests” with those of the actual users.

The next best thing to eating the food yourself is observing the dog whilst it eats the food, as opposed to relying on someone else telling you what your dog, supposedly, likes or dislikes.

Whilst the dev team members may not be able to personally put sewage-control software to use in their everyday jobs, nothing stops them from spending some time regularly observing a sewage engineer using the app, which doesn’t follow the letter of “dogfooding” but certainly captures its spirit.

Totophil
Nice complete answer!
Robert Gould
A: 

Dogfooding is a good solution if you're building something that is usable by your staff on a daily basis, but unfortunately, I don't think it's feasible for every application. If you're writing an instant messaging client, then dogfooding is easy. If you write control systems for a sewage treatment plant, well then maybe not.

QA is about professional quality control for your software. I think the decision on whether you need it or not is entirely dependent on the system under test in terms of complexity and cost of failure. There is a (possibly oversimplified) analogy to manufacturing. I might buy a pencil that hasn't passed a quality control department, but I'm definitely not buying a ticket on an airplane that hasn't.

Andrew Burke
+2  A: 

I'm sure you could come up with an equation for the cost savings, but it would only hold at a point in time. Developers should always 'eat their own dog food' when practical (why would a developer create an IDE for others and not use it themselves?), and they should also always be involved to some degree in basic QA activities. I've been in too many companies where developers toss code over the wall (the 'I crap gold' syndrome), causing cycle after cycle of bad code being tested, not fixed completely, and tested again.

The manager should determine the level of testing each group does, and tweak and tune it over time for each person involved. The worse the code being delivered, the more time should be spent on testing (TDD), with the goal of improving the developer.

meade
+3  A: 

I completely agree with Jon’s answer, with the addendum that dogfood testing is important even for QA, and often isn’t done. More than once I’ve started using a product that other parts of QA (sometimes even myself—ouch) have supposedly finished testing, and found unconscionable bugs in normal use. Sometimes this is, as the Joel on Software article notes, just poor testing; it is surprisingly common for the first example in a tutorial to fail (e.g. Deja Gnu). But more often it’s because dogfooding is a very different process from systematic testing: You’re not doing what you’re told to do, you’re doing what you actually do when you’re not trying to find bugs.

A very, very good test plan might cover some of those cases, but in the most embarrassing example in my own company, the test plan was part of the problem: A conspicuous option, which most hard-core developers would be certain to need, was officially unsupported, and hence untested, but necessary nonetheless. I suppose a more reasonable functional requirements definition wouldn’t have made that mistake; if you know of any company whose FRDs are always entirely reasonable and which is hiring, do let me know :-)

Flash Sheridan