Some DO facts:

  • two developers in a team
  • we write unit tests (auto run with every build)
  • we use a source version control system (SVN)
  • we (the two developers) are product managers as well (a classic situation with high risk of product over-engineering)

Some DON'T facts

  • we don't have nightly builds
  • we don't have continuous integration
  • we don't have integration tests
  • we don't have regression tests
  • we don't have acceptance/customer tests
  • we don't have a dedicated tester yet

I read a lot about all these different kinds of tests, but I don't see a reason to write them at the moment. Right now it looks like plain overhead without any value (edit: work that doesn't seem to add much value at the moment).

Question: What causes will force us to decide to implement any of the don'ts, and which ones can/should be automated with which tools/libraries?

+3  A: 

"What causes will force us to decide to implement any of the don'ts "

Nothing.

Nothing forces you to improve your quality. Many people write code that mostly works most of the time and requires a lot of careful maintenance, and their users are mostly satisfied.

That's fine. Some people like it like that. Clearly, since you've characterized practices that lead to high quality as "plain overhead without any value" then you don't need that level of quality and can't foresee ever needing that level of quality.

That's fine.

I don't know how you deliver without doing acceptance testing, but you've stated clearly that you don't. I can't understand how this works, but you seem to be happy with it.

"which ones can/should be automated"

None. This is pretty trivial stuff. You're already doing unit testing in C#. Unit testing, essentially, is regression testing. To a degree you can use the same tools and framework for integration tests and elements of acceptance testing.
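
For instance, a minimal sketch of how the same NUnit framework could host an integration-style test (OrderService, SqlOrderRepository and the connection string are hypothetical stand-ins for your own components; only the NUnit attributes and assertions are real API):

    // Minimal sketch, not anyone's actual code: the NUnit framework you already
    // use for unit tests can also hold integration-style tests.
    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")] // lets you include/exclude these separately from fast unit tests
    public class OrderPersistenceIntegrationTests
    {
        [Test]
        public void SavedOrder_CanBeReadBack()
        {
            // Unlike a unit test with Moq fakes, this wires real components together
            // against a local test database. Names and connection string are hypothetical.
            var repository = new SqlOrderRepository(
                "Data Source=.;Initial Catalog=TestDb;Integrated Security=True");
            var service = new OrderService(repository);

            int id = service.PlaceOrder(42, "ABC"); // customer id, product code
            var stored = repository.GetById(id);

            Assert.That(stored, Is.Not.Null);
            Assert.That(stored.ProductCode, Is.EqualTo("ABC"));
        }
    }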

There are numerous make-like (ant-like, maven-like, scons-like) tools to do nightly builds.

You don't need any more automation than you have.

"continuous integration" doesn't require tools, just the "plain overhead without any value" of checking stuff in often enough that the build is never broken.

As far as I care, every developer is a tester, so you are all dedicated testers. Many people debate the "dedicated tester" role. I no longer know what this means. It seems to be a person who doesn't produce deliverable code. Don't know why you'd hire someone like that. It makes more sense to simply make everyone responsible for all the testing at all times.

A "dedicated tester" -- who acts as surrogate for the user -- always turns out to be working closely with the business analyst. Since this is how it usually shakes out, they're usually junior business analysts with a focus on the acceptance test. This is a good thing because they have a deliverable: a solved business problem.

I'm never sure what testers deliver. More tests? More bugs? How about making them answerable to the users for assuring that the business problem is solved?

Nothing forces you to do any of this.

S.Lott
Sorry. I have just updated my tags to include C# and .NET. And as mentioned earlier, we do unit testing already (NUnit + Moq).
Robert Koritnik
I edited my "plain overhead without any value" wording; it could offend someone... ;) But I have to disagree about dedicated testers. Developers always do ad-hoc testing, but developers rarely make good regular users (unless the product is aimed at developers).
Robert Koritnik
And to point to Joel's article about testers, which I hope you'll read: http://www.joelonsoftware.com/articles/fog0000000067.html
Robert Koritnik
+2  A: 

I'm not entirely sure this is answer-worthy, but here are a few things to consider:

  • If your unit tests are good and kept up to date (a bug report leads to unit tests that confirm the bug), unit tests can act as regression tests (see the sketch after this list).
  • If your unit tests run with every check-in, that sounds very close to my understanding of nightly builds, but with a different frequency.
  • If you don't have acceptance tests, how do you know that what you build meets the needs of your customer? An acceptance test can be as simple as making sure that the requirements are fulfilled as documented.
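
A minimal sketch of that first point, assuming a hypothetical PriceCalculator and bug number: every bug report turns into an NUnit test that reproduces it, and the accumulating suite becomes your regression net.

    // Minimal sketch: a unit test written straight from a bug report, so it
    // doubles as a regression test. PriceCalculator and bug #123 are hypothetical;
    // only NUnit's attributes and assertions are real API.
    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorRegressionTests
    {
        // Hypothetical bug #123: the discount was applied even when the quantity
        // was zero, producing a non-zero total.
        [Test]
        public void Bug123_ZeroQuantityOrder_TotalIsZero()
        {
            var calculator = new PriceCalculator();

            decimal total = calculator.Total(0, 9.99m, 0.10m); // quantity, unit price, discount

            Assert.That(total, Is.EqualTo(0m));
        }
    }
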
Thomas Owens
As mentioned, we are also the product managers, so we decide what to develop and also accept its implementation. So we do acceptance testing on the go, so to speak... if that makes sense.
Robert Koritnik
+1  A: 

At some point quality may become a bigger driver of profit than time to market. Then you will need to start thinking about having much better quality.

Until that happens, you should think of nightly builds, automated regression tests, etc. as a way to save you time and effort in development. So spend a little time (say one week) every few months making your development (and release) process more productive (and fun).

Go for the big gains first, e.g. automating the setup for a manual regression test can often be a lot more cost-effective than automating the UI test itself.
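
As a rough sketch of automating the setup rather than the UI (the database name, backup path and connection string below are assumptions, not a real environment): a small helper that restores a known-good database before each manual regression-test session, so the tedious preparation is automated while the UI walkthrough itself stays manual.

    // Rough sketch with hypothetical names/paths: restore a known-good database
    // state before a manual regression pass.
    using System;
    using System.Data.SqlClient;

    public static class ManualTestSetup
    {
        public static void Main()
        {
            // Connect to master because a database cannot be restored while it is in use.
            const string master =
                "Data Source=localhost;Initial Catalog=master;Integrated Security=True";

            using (var connection = new SqlConnection(master))
            {
                connection.Open();
                var restore = new SqlCommand(
                    @"ALTER DATABASE AppTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
                      RESTORE DATABASE AppTestDb FROM DISK = 'C:\backups\known-good.bak' WITH REPLACE;
                      ALTER DATABASE AppTestDb SET MULTI_USER;",
                    connection);
                restore.ExecuteNonQuery();
            }

            Console.WriteLine("Test database restored; start your manual regression pass.");
        }
    }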

(Also consider what you find fun: if you enjoy writing automated UI test code more than doing manual regression testing, you may do a better job with the automated UI tests.)

Ian Ringrose
+1  A: 

If you have no (or not too many) complaints about your product quality, take it easy and try not to lose the level you already have. A team of two people can afford that, I think.

But if you are asking about QA improvements, I think you do have some doubts about product quality. In that case I recommend performing a third-party assessment to make sure your estimates of quality are correct.

Are you on the market already? Or is it a "for one customer" project?

Mad
Not on the market yet... It'll be something local anyway. Why?
Robert Koritnik