If I have unit tests for each class and/or member function, and acceptance tests for every user story, do I have enough tests to ensure the project functions as expected?

For instance, if I have unit tests and acceptance tests for a feature, do I still need integration tests, or do the unit and acceptance tests cover the same ground? Is there overlap between test types?

I'm talking about automated tests here. I know manual testing is still needed for things like ease of use.

+6  A: 

The idea of multiple testing cycles is to catch problems as early as possible when things change.

Unit tests should be done by the developers to ensure the units work in isolation.

Acceptance tests should be done by the client to ensure the system meets the requirements.

However, something has changed between those two points that should also be tested. That's the integration of units into a product before being given to the client.

That's something that should first be tested by the product creator, not the client. The minute you involve the client, things slow down, so the more fixes you can do before they get their grubby little hands on it, the better.
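
To make the distinction concrete, here is a minimal sketch in Python (the Order and PriceCalculator classes are hypothetical, invented only for illustration): the unit test checks Order in isolation by mocking its collaborator, while the integration test wires the real units together and can catch wiring problems that neither unit test alone would see.

    import unittest
    from unittest.mock import Mock

    class PriceCalculator:
        def total(self, items):
            return sum(price for _, price in items)

    class Order:
        def __init__(self, calculator):
            self.calculator = calculator
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return self.calculator.total(self.items)

    class OrderUnitTest(unittest.TestCase):
        def test_total_delegates_to_calculator(self):
            calc = Mock()                     # collaborator replaced by a mock
            calc.total.return_value = 42
            self.assertEqual(Order(calc).total(), 42)  # Order works in isolation

    class OrderIntegrationTest(unittest.TestCase):
        def test_total_with_real_calculator(self):
            order = Order(PriceCalculator())  # real units wired together
            order.add("book", 10)
            order.add("pen", 2)
            self.assertEqual(order.total(), 12)

    if __name__ == "__main__":
        unittest.main()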

In a big shop (like ours), there are unit tests, integration tests, globalization tests, master-build tests and so on at each point where the deliverable product changes. Only once all high severity bugs are fixed (and a plan for fixing low priority bugs is in place) do we unleash the product to our beta clients.

We do not want to give them a dodgy product simply because fixing a bug at that stage is a lot more expensive (especially in terms of administrivia) than anything we do in-house.

paxdiablo
So you're saying unit and acceptance tests don't cover all the bases? That things could slip through that neither one could catch on its own?
codeelegance
Yes. At a bare minimum, all the unit tests should be re-run on the integrated product before handing it over to the client for acceptance testing. Bugs can arise from the integration process, and user acceptance testing will only cover the requirements of the user, not the myriad of things that the dev/test teams should be able to come up with.
paxdiablo
+2  A: 

It's really impossible to know whether or not you have enough tests based simply on whether you have a test for every method and feature. Typically I will combine testing with coverage analysis to ensure that all of my code paths are exercised in my unit tests. Even this is not really enough, but it can be a guide to where you may have introduced code that isn't exercised by your tests. This should be an indication that more tests need to be written or, if you're doing TDD, you need to slow down and be more disciplined. :-)

Tests should cover both good and bad paths, especially in unit tests. Your acceptance tests may be more or less concerned with the bad path behavior but should at least address common errors that may be made. Depending on how complete your stories are, the acceptance tests may or may not be adequate. Often there is a many-to-one relationship between acceptance tests and stories. If you only have one automated acceptance test for every story, you probably don't have enough unless you have different stories for alternate paths.
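As a minimal illustration in Python's unittest (parse_age is a hypothetical function invented for the example), here is a unit covering the good path and two bad paths:

    import unittest

    def parse_age(text):
        """Hypothetical unit under test: parse a non-negative integer age."""
        age = int(text)  # raises ValueError on non-numeric input
        if age < 0:
            raise ValueError("age must be non-negative")
        return age

    class ParseAgeTest(unittest.TestCase):
        def test_good_path(self):
            self.assertEqual(parse_age("42"), 42)

        def test_bad_path_non_numeric(self):
            with self.assertRaises(ValueError):
                parse_age("forty-two")

        def test_bad_path_negative(self):
            with self.assertRaises(ValueError):
                parse_age("-1")

    if __name__ == "__main__":
        unittest.main()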

tvanfosson
A: 

Integration tests are for when your code integrates with other systems, such as third-party applications or other in-house systems: the environment, the database, and so on. Use integration tests to ensure that the behavior of the code is still as expected.
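
As a minimal sketch in Python (the repository class is hypothetical, and an in-memory SQLite database stands in for the real one), an integration test exercises the real SQL and driver rather than a mock:

    import sqlite3
    import unittest

    class UserRepository:
        """Hypothetical data-access class whose SQL we want to exercise for real."""
        def __init__(self, conn):
            self.conn = conn

        def save(self, name):
            self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

        def find_all(self):
            return [row[0] for row in self.conn.execute("SELECT name FROM users")]

    class UserRepositoryIntegrationTest(unittest.TestCase):
        def setUp(self):
            # In-memory database: the SQL and the driver are still exercised
            # for real, unlike in a unit test that mocks the connection away.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE users (name TEXT)")

        def tearDown(self):
            self.conn.close()

        def test_round_trip(self):
            repo = UserRepository(self.conn)
            repo.save("alice")
            self.assertEqual(repo.find_all(), ["alice"])

    if __name__ == "__main__":
        unittest.main()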

David Yancey
+8  A: 

If I have unit tests for each class and/or member function, and acceptance tests for every user story, do I have enough tests to ensure the project functions as expected?

No. Tests can only verify what you have thought of. Not what you haven't thought of.

troelskn
+1. Great one-liner.
Lieven
A: 

Multiple layers of testing can be very useful: unit tests to make sure the pieces behave, integration tests to show that clusters of units cooperate as expected, and "acceptance" tests to show that the program functions as expected. Each can catch problems during development. Overlap per se isn't a bad thing, though too much of it becomes waste.

That said, the sad truth is that you can never ensure that the product behaves "as expected", because expectation is a fickle, human thing that gets translated very poorly onto paper. Good test coverage won't prevent a customer from saying "that's not quite what I had in mind...". Frequent feedback loops help there. Consider frequent demos as a "sanity test" to add to your manual mix.

Dave W. Smith
A: 

In short, no.

To begin with, your story cards should have acceptance criteria: criteria, specified by the product owner in conjunction with the analyst, that spell out the required behavior. If they are met, the story card will be accepted.

The acceptance criteria should drive the automated unit tests (written via TDD) and the automated regression/functional tests, which should be run daily. Remember, we want to move defects to the left; that is, the sooner we find them, the cheaper and faster they are to fix. Furthermore, continuous testing enables us to refactor with confidence, which is required to maintain a sustainable pace of development.
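
As a minimal sketch (in Python, with a hypothetical free-shipping criterion), a criterion can be pinned down as an automated test written before the code, TDD-style, and then run daily as part of the regression suite:

    import unittest

    # Hypothetical criterion: "orders of $100 or more ship for free;
    # otherwise shipping is a flat $5". The tests are written first and fail;
    # the implementation below then makes them pass.

    def shipping_cost(order_total):
        return 0 if order_total >= 100 else 5

    class ShippingCriterionTest(unittest.TestCase):
        def test_free_shipping_at_threshold(self):
            self.assertEqual(shipping_cost(100), 0)

        def test_flat_rate_below_threshold(self):
            self.assertEqual(shipping_cost(99.99), 5)

    if __name__ == "__main__":
        unittest.main()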

In addition, you need automated performance tests. Running a profiler daily or overnight provides insight into CPU and memory consumption and reveals any memory leaks. Furthermore, a tool like LoadRunner will enable you to place a load on the system that reflects actual usage, and to measure response times and CPU and memory consumption on a production-like machine.

The automated performance tests should reflect actual usage of the app. Measure the business transactions (for a web application, a click on a page and the round trip to the server and back to the user) and determine the mix of those transactions along with the rate at which they arrive per second. That information will enable you to design the automated LoadRunner tests needed to performance-test the application properly. As is often the case, some of the performance issues will trace back to the implementation of the application, while others will be determined by the configuration of the server environment.
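
For illustration, here is a rough, tool-agnostic sketch of the idea in Python rather than LoadRunner itself; the URL, the endpoints, and the 70/30 transaction mix are all assumptions. A real tool adds ramp-up, think times, and server-side metrics on top of this:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    BASE_URL = "http://localhost:8080"                     # assumed test deployment
    TRANSACTION_MIX = ["/search"] * 7 + ["/checkout"] * 3  # assumed 70/30 mix

    def run_transaction(path):
        # One business transaction: a round trip to the server, timed.
        start = time.perf_counter()
        with urllib.request.urlopen(BASE_URL + path) as resp:
            resp.read()
        return path, time.perf_counter() - start

    def main():
        # 10 concurrent "users" replaying the transaction mix 20 times over.
        with ThreadPoolExecutor(max_workers=10) as pool:
            results = list(pool.map(run_transaction, TRANSACTION_MIX * 20))
        for path in sorted(set(p for p, _ in results)):
            times = sorted(t for p, t in results if p == path)
            print(f"{path}: median {times[len(times) // 2] * 1000:.0f} ms, "
                  f"worst {times[-1] * 1000:.0f} ms")

    if __name__ == "__main__":
        main()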

Remember, your application will be performance tested. The only question is whether the first performance test happens before or after you release the software. Believe me, the worst place to discover a performance problem is in production. Performance issues can be the hardest to fix and can cause a deployment to all users to fail, cancelling the project.

Finally, there is User Acceptance Testing (UAT). These are tests designed by the product owner/business partner to exercise the overall system prior to release. In general, because of all the other testing, it is not uncommon for the application to come through UAT with zero defects.

Cam Wolff
A: 

It depends on how complex your system is. If your acceptance tests (which satisfy the customer's requirements) exercise your system from front to back, then no, you don't.

However, if your product relies on other tiers (back-end middleware, a database, and so on), then you do need a test that proves your product can happily link up end-to-end.

As other people have commented, tests don't necessarily prove the project functions as the customer expects, just that it works the way you expect it to.

Frequent feedback loops to the customer, and/or tests that are written and readable in a way the customer understands (for example, in a BDD style), can really help.
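
For example, here is a plain-Python approximation of the given/when/then style, using a hypothetical shopping-cart story; dedicated BDD tools express the same thing in language the customer can read directly:

    import unittest

    class CartFeature(unittest.TestCase):
        """Hypothetical story: the cart total reflects what the customer added."""

        def given_an_empty_cart(self):
            self.cart = []

        def when_the_customer_adds(self, item, price):
            self.cart.append((item, price))

        def then_the_total_is(self, expected):
            self.assertEqual(sum(p for _, p in self.cart), expected)

        def test_adding_two_items(self):
            self.given_an_empty_cart()
            self.when_the_customer_adds("book", 10)
            self.when_the_customer_adds("pen", 2)
            self.then_the_total_is(12)

    if __name__ == "__main__":
        unittest.main()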

Al Power
A: 

Technically, a full suite of acceptance tests should cover everything. That being said, they're not "enough" for most definitions of enough. By having unit tests and integration tests, you can catch bugs/issues earlier and in a more localized manner, making them much easier to analyze and fix.

Consider that a full suite of manually executed tests, with the directions written on paper, would be enough to validate that everything works as expected. However, if you can automate the tests, you'd be much better off because it makes doing the testing that much easier. The paper version is "complete", but not "enough". In the same way, each layer of tests adds more to the value of "enough".

It's also worth noting that the different sets of tests tend to test the product/code from different viewpoints. In much the same way that QA may pick up bugs that dev never thought to test for, one set of tests may find things the other set wouldn't.

RHSeeger
+4  A: 

I'd recommend reading chapters 20 - 22 in the 2nd edition of Code Complete. It covers software quality very well.

Here's a quick breakdown of some of the key points (all credit goes to McConnell, 2004)

Chapter 20 - The Software-Quality Landscape:

  • No single defect-detection technique is completely effective by itself
  • The earlier you find a defect, the less intertwined it will become with the rest of your code and the less damage it will cause

Chapter 21 - Collaborative Construction:

  • Collaborative development practices tend to find a higher percentage of defects than testing and to find them more efficiently
  • Collaborative development practices tend to find different kinds of errors than testing does, implying that you need to use both reviews and testing to ensure the quality of your software
  • Pair programming typically costs about the same as inspections and produces code of similar quality

Chapter 22 - Developer Testing:

  • Automated testing is useful in general and is essential for regression testing
  • The best way to improve your testing process is to make it regular, measure it, and use what you learn to improve it
  • Writing test cases before the code takes the same amount of time and effort as writing them after the code, but it shortens the defect-detection/debug/correction cycle (test-driven development)

As for how you formulate your unit tests, you should consider basis testing, data-flow analysis, boundary analysis, and so on. All of these are explained in great detail in the book (which also includes many other references for further reading).
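
For instance, boundary analysis says to probe either side of every edge. A minimal sketch in Python, with a hypothetical grade function:

    import unittest

    def grade(score):
        """Hypothetical function: pass mark is 50 on a 0-100 scale."""
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    class GradeBoundaryTest(unittest.TestCase):
        def test_boundaries(self):
            # Boundary analysis: test either side of every edge (0, 50, 100).
            self.assertEqual(grade(0), "fail")
            self.assertEqual(grade(49), "fail")
            self.assertEqual(grade(50), "pass")
            self.assertEqual(grade(100), "pass")
            for out_of_range in (-1, 101):
                with self.assertRaises(ValueError):
                    grade(out_of_range)

    if __name__ == "__main__":
        unittest.main()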

Maybe this isn't exactly what you were asking, but I would say automated testing is definitely not enough of a strategy. You should also consider such things as pair programming, formal reviews (or informal reviews, depending on the size of the project) and test scaffolding along with your automated testing (unit tests, regression testing etc.).

Jason Down
A: 

If I have unit tests for each class and/or member function, and acceptance tests for every user story, do I have enough tests to ensure the project functions as expected?

This is enough to show your software is functionally correct, at least to the extent that your test coverage is sufficient. Now, depending on what you're developing, there are certainly non-functional requirements that matter: think of reliability, performance, and scalability.

philippe