My current position is this: if I thoroughly test my ASP.NET applications using web tests (in my case via the VS.NET'08 test tools and WatiN, maybe) with code coverage and a broad spectrum of data, I should have no need to write individual unit tests, because my code will be tested in conjunction with the UI through all layers. Code coverage will ensure I'm hitting every functional piece of code (or reveal unused code) and I can provide data that will cover all reasonably expected conditions.

However, if you have a different opinion, I'd like to know:

1) What additional benefit does unit testing give that justifies the effort to include it in a project? (Keep in mind, I'm doing the web tests anyway, so in many cases the unit tests would be covering code that the web tests already cover.)

2) Can you explain your reasons in detail with concrete examples? Too often I see responses like "that's not what it's meant for" or "it promotes higher quality" - which really don't address the practical question I have to face: how can I justify - with tangible results - spending more time testing?

Thanks, Richard

+6  A: 

Unit testing is likely to be significantly quicker in turn-around than web testing. This is true not only in terms of development time (where you can test a method in isolation much more easily if you can get at it directly than if you have to construct a particular query which will eventually hit it, several layers later) but also in execution time (executing thousands of requests in sequence will take longer than executing short methods thousands of times).

Unit testing will test small units of functionality, so when they fail you should be able to isolate where the issue is very easily.

In addition, it's a lot easier to mock out dependencies with unit tests than when you hit the full front end - unless I've missed something really cool. (A few years ago I looked at how mocking and web testing could integrate, and at the time there was nothing appropriate.)
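
To make the turn-around point concrete, here is a minimal sketch (class names are hypothetical, NUnit syntax): the test reaches the method directly, with no request to construct and no layers to traverse.

    using NUnit.Framework;

    // Hypothetical business-logic class; a web test could only reach
    // this indirectly, by posting a request through every layer.
    public class DiscountCalculator
    {
        public decimal Apply(decimal price, int quantity)
        {
            // Volume discount: 10% off orders of 100 units or more.
            decimal total = price * quantity;
            return quantity >= 100 ? total * 0.9m : total;
        }
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        [Test]
        public void Applies_Volume_Discount_At_Threshold()
        {
            // Direct call: runs in milliseconds, no HTTP, no database.
            Assert.AreEqual(90m, new DiscountCalculator().Apply(1m, 100));
        }

        [Test]
        public void No_Discount_Below_Threshold()
        {
            Assert.AreEqual(99m, new DiscountCalculator().Apply(1m, 99));
        }
    }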

Jon Skeet
Good points, Jon. I would weigh the speed issue as probably my biggest concern among your points. I must admit I haven't used mocking, so I'm unclear on how that would help for the sorts of projects I generally do (corporate intranet web-to-db workflow type stuff).
ZeroBugBounce
+2  A: 

Good, focused unit tests make it a lot faster to find and fix problems when they crop up. When a well-written unit test breaks, you know pretty much what the failure means and what caused it.

Also, they're typically faster to run, meaning that you're much more likely to run them during development as part of your edit-compile-test cycle (as opposed to only when you're about to check-in).

eschercycle
+1  A: 

Unit testing gives you speed and, most of all, pinpoint accuracy about where a failure or bug has been introduced. It enables you to test each component in isolation from every other component and be assured that it works as it should.

For example, suppose you had a web test that sent an Ajax request to a service on the server, which then hit a database, and the test failed. Was it the JavaScript, the service, the business logic, or the database that caused the problem?

Whereas if you unit test each of the services on its own, stubbing/mocking out the database, or each business logic unit, then you are far more likely to know exactly where the bug is. Unit testing is less about coverage (although coverage is important) and more about isolation.
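
A sketch of that isolation (all names hypothetical): the service depends on an interface rather than a concrete database class, and the test supplies a stub, so a failure can only come from the service itself.

    using NUnit.Framework;

    // Hypothetical seam: the service depends on this interface,
    // not on a concrete database class.
    public interface ICustomerRepository
    {
        bool Exists(int customerId);
    }

    public class OrderService
    {
        private readonly ICustomerRepository repository;

        public OrderService(ICustomerRepository repository)
        {
            this.repository = repository;
        }

        public bool CanPlaceOrder(int customerId)
        {
            return repository.Exists(customerId);
        }
    }

    // Hand-rolled stub: no database, no connection string.
    class StubCustomerRepository : ICustomerRepository
    {
        public bool Exists(int customerId) { return customerId == 42; }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Rejects_Unknown_Customer()
        {
            var service = new OrderService(new StubCustomerRepository());

            // If this fails, the bug is in OrderService itself -
            // the database has been taken out of the equation.
            Assert.IsFalse(service.CanPlaceOrder(7));
        }
    }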

Xian
A: 

It depends on the ASP.NET application's architecture. If the web pages are merely hooking up an underlying business logic layer or data access layer, then unit tests that work independently of the ASP.NET state model are faster to develop and run than similar WatiN tests.

I recently developed an area of a legacy ASP.NET application, in Visual Studio 2003, with NUnit as the test framework. Whereas previously testing involved working through UI tests to ensure functionality was implemented correctly, 90% of the testing occurred without requiring any UI interaction.

The only problem I had was with time estimates - one of the tasks was planned in Trac as taking 1 day for the data access/business logic and 2 days for the UI creation and testing. With NUnit running over the data access/business logic, the time for that portion of the development went from 1 day to 2 days, but the UI development was reduced to a single half day.

This continued with the other tasks within the new module being added to the application. The unit tests discovered bugs faster, and in a way that was less painful (for me), and I have more confidence in the application functioning as expected. Even better, the unit tests are very repeatable: they do not depend on the UI design, so they tend to be less fragile - changes in design fail at compilation, not at runtime.

Liam Westley
A: 

Unit testing allows for more rigorous performance testing and makes it much easier to determine where bottlenecks occur. For large applications, performance becomes a major issue when 10,000 users are hitting a method at the same time: if that method takes 1/100th of a second to execute, perhaps because of a poorly written database query, some of those users end up waiting up to 10 seconds for the page to load. I know I personally won't wait that long for a page to load, and will simply move on somewhere else.
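
As a rough sketch of how a unit test can guard a performance budget (all names hypothetical; the 10 ms budget is arbitrary):

    using System.Diagnostics;
    using NUnit.Framework;

    // Hypothetical repository standing in for real data-access code.
    public class ProductRepository
    {
        public string FindByName(string name)
        {
            // The real implementation would run a database query here.
            return name;
        }
    }

    [TestFixture]
    public class QueryPerformanceTests
    {
        private const long MaxMilliseconds = 10; // arbitrary budget

        [Test]
        public void Lookup_Stays_Within_Time_Budget()
        {
            var repository = new ProductRepository();
            var watch = Stopwatch.StartNew();

            repository.FindByName("widget");

            watch.Stop();
            Assert.IsTrue(watch.ElapsedMilliseconds <= MaxMilliseconds,
                "FindByName exceeded its per-call budget; check the query plan.");
        }
    }

A failing test like this points at the exact slow method long before 10,000 users do.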

cdeszaq
+16  A: 

"Code coverage will ensure I'm hitting every functional piece of code"

"Hit" does not mean "Testing"

The problem with doing only web testing is that it merely ensures that you hit the code, and that the result appears correct at a high level.

Just because you loaded the page and it didn't crash doesn't mean it actually works correctly. Here are some things I've encountered where 'web tests' covered 100% of the code yet completely missed some very serious bugs that unit testing would have caught.

  1. The page loaded correctly from a cache, but the actual database was broken
  2. The page loaded every item from the database but displayed only the first one - it appeared fine in the test, yet failed completely in production because loading everything took too long
  3. The page displayed a valid-looking number, which was actually wrong, but it wasn't picked up because 1000000 is easy to mistake for 100000
  4. The page displayed a valid number by coincidence - 10x50 is the same as 25x20, but one is WRONG
  5. The page was supposed to add a log entry to the database, but that's not visible to the user so it wasn't seen.
  6. Authentication was bypassed to make the web-tests actually work, so we missed a glaring bug in the authentication code.

It is easy to come up with hundreds more examples of things like this.
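
To make #4 concrete, here is a minimal sketch (NUnit, with hypothetical class names): a web test that only reads the rendered total of 500 passes whether the line item was 10 x 50 or 25 x 20, while a unit test can assert the parts as well as the whole.

    using NUnit.Framework;

    // Hypothetical line-item class standing in for the real model.
    public class LineItem
    {
        public int Quantity { get; private set; }
        public decimal UnitPrice { get; private set; }

        public LineItem(int quantity, decimal unitPrice)
        {
            Quantity = quantity;
            UnitPrice = unitPrice;
        }

        public decimal Total { get { return Quantity * UnitPrice; } }
    }

    [TestFixture]
    public class LineItemTests
    {
        [Test]
        public void Total_And_Its_Parts_Are_Correct()
        {
            var item = new LineItem(10, 50m);

            // A web test that only reads the rendered total sees "500"
            // either way; asserting the parts catches a 25 x 20 line
            // item masquerading as 10 x 50.
            Assert.AreEqual(10, item.Quantity);
            Assert.AreEqual(50m, item.UnitPrice);
            Assert.AreEqual(500m, item.Total);
        }
    }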

You need both: unit tests to make sure that your code actually does what it is supposed to do at a low level, and then functional/integration (which you're calling web) tests on top of those, to prove that it actually works when all those small unit-tested pieces are chained together.

Orion Edwards
Great answer, Orion - I'm with you on some points right out of the gate. I'd like to go through a few of them, though, and ask some follow-up questions in the comments here. Re #1: I believe the code coverage would reveal that issue, and it seems more a matter of the env. being broken, not the code.
ZeroBugBounce
#2 Okay - well, how could a unit test better address that issue? You could 'under-test' inside a unit test just as easily as in a web test. #3 I take your point here... I would say this could just as easily make a case for better, automated validation of web tests, though.
ZeroBugBounce
#4 I think this is a great point... testing intermediate steps (the 'innards' of your code) is important in lots of cases. #5 If the web test was automated, would it not be valid to test for this as a pass condition? This is similar to #3 for me.
ZeroBugBounce
#6 Another great point. Thanks!
ZeroBugBounce
I agree that some of the above issues could be caught by more exhaustive web tests, but you'll never cover all your bases as well as you would by just writing unit tests.
Orion Edwards
+3  A: 

Unit testing does not generally prove that any given set of functionality works--at least it's not supposed to. It proves that your class contract works as you expect it to.

Acceptance tests are more oriented toward customer requirements. Every requirement should have an acceptance test, but there is no required relationship between acceptance tests and unit tests--they might not even be written in the same framework.

Unit testing can be used to drive code development, and speed of retesting is a significant factor there. When unit testing, you often mock out the parts that the class under test relies on, so that you can test it in isolation.

Acceptance tests exercise the system just as you would deliver it--from GUI to database. Sometimes they take hours or days (or weeks) to run.

If you start to think of them as two completely different beasts, you will be a much more effective tester.

Bill K
+2  A: 

When you write unit tests you will be forced to write your code in a better way: more loosely coupled and more object-oriented. That leads to better architecture and a more flexible system.

If you write unit tests in a TDD style, you probably won't write as much unnecessary code, because you will focus on tiny steps and do only what is necessary.
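
As a hedged illustration of those tiny steps (a hypothetical example, NUnit syntax):

    using NUnit.Framework;

    [TestFixture]
    public class SlugGeneratorTests
    {
        // Step 1 ("red"): write this test before SlugGenerator exists,
        // and watch it fail.
        [Test]
        public void Replaces_Spaces_With_Hyphens()
        {
            Assert.AreEqual("hello-world", SlugGenerator.ToSlug("Hello World"));
        }
    }

    // Step 2 ("green"): write only enough code to make the test pass -
    // no speculative options, no unused parameters.
    public static class SlugGenerator
    {
        public static string ToSlug(string text)
        {
            return text.ToLowerInvariant().Replace(' ', '-');
        }
    }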

You will be more confident when doing refactoring to improve your code to increase maintainability and reduce code smell.

And the unit tests themselves will serve as excellent documentation of what the system does and does not do.

Those are just a few examples of benefits I have noticed when applying TDD to my work.

HAXEN
A: 

Unit testing allows you to focus on the logic of getting data, applying rules, and validating business processes before you add the intricacies of a front end. By the time the unit tests have been written and conducted, I know that the infrastructure for the app will support the data types that I pass back and forth between processes, that my transactions work appropriately, etc.

In other words, the potential points of failure are better isolated by the time you start your acceptance testing because you have already worked through the potential errors in your code with the unit tests. With the history of the unit tests, you'll know that only certain modules will throw certain errors. This makes tracking down the culprit code much easier.

David Robbins
+1  A: 

As Dijkstra (almost) put it: unit tests can only be used to show that software has defects, never to prove that it's defect-free. So, in general, hitting every code path once (and obtaining 100% coverage) has nothing to do with testing - it just helps to eliminate bitrot.

If you are playing it by the book, every serious bug should be eliminated only after a unit test has been written that triggers that bug. Fixing the bug then means that this particular unit test no longer fails, and from then on it checks that the bug stays fixed.

It is much easier to write a unit test that triggers a particular bug than to write an end-to-end (web) test that does ONLY that and doesn't run heaps of completely irrelevant code along the way (code which could also fail and muddy the root-cause analysis).
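
A sketch of such a regression test (the bug, the class, and all names are hypothetical):

    using NUnit.Framework;

    // Hypothetical class under test.
    public class Order
    {
        private decimal total;
        public decimal Total { get { return total; } }

        public void AddLine(decimal unitPrice, int quantity)
        {
            total += unitPrice * quantity;
        }
    }

    [TestFixture]
    public class OrderRegressionTests
    {
        // Regression test for a (hypothetical) reported bug:
        // "order total is wrong when a line has zero quantity".
        // It failed until the fix went in; now it guards the fix.
        [Test]
        public void Zero_Quantity_Line_Contributes_Nothing_To_Total()
        {
            var order = new Order();
            order.AddLine(9.99m, 0);

            Assert.AreEqual(0m, order.Total);
        }
    }

The same scenario as a web test would mean logging in, creating an order through the UI, and scraping the rendered page - heaps of irrelevant code, exactly as described above.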

ADEpt
+1  A: 

Unit Tests test that each component works. These are extremely helpful in finding defects close to the time they are created, which dramatically cuts down the cost to fix defects and dramatically reduces the number of defects which end up in your release. Additionally, good unit tests make refactoring a whole lot easier and more robust.

Integration tests (or "web" tests in this case) are also very important, but they are a later line of defense than unit tests. A single integration test covers such a huge swath of code that when one fails, it takes a lot of investigation to determine the defect (or possibly the group of defects) that caused the failure. This is costly, especially when you are trying to test a release build to get it out the door. It is even more costly given that the chance of introducing a new bug with the fix tends to be pretty high, and that the failure may block further testing of the release - which is extremely expensive to the development cycle.

In contrast, when a unit test fails you know exactly where the defective code is and you usually know exactly what the code is doing wrong. Also, a unit test failure should only impact one developer at a time and be fixed before the code is checked in.

It's always more expensive to fix bugs later than earlier. Always.

Consider building an automobile. Would you wait until the entire vehicle rolls off the assembly line to test whether each component works? At that point, if you discover that the CD player or the engine or the air conditioner or the cruise control doesn't work, you have to take the whole vehicle off the line, fix the problem, then re-test everything (and hope the fix doesn't reveal any new defects). This is obviously the wrong way to do it, just as it is obviously wrong to test software only at the end of the process rather than at every important step along the way, from the ground up.

Wedge
+1  A: 

One more aspect - forgive the somewhat ideal environment that this is situated in:

Suppose you have 3 components that finally have to work together. Each can individually be completely unit-tested (whatever that means) with 5 unit tests. This makes 5 + 5 + 5 = 15 unit tests for complete coverage.

Now if you have an integration/web/combined test that tests all the components together, you'd need (remember the ideal world of this abstract scenario) 5 * 5 * 5 = 125 tests covering all permutations to give you the same confidence as the 15 test cases above (assuming you can even trigger all the permutations; otherwise there is untested/unspecified behaviour that might bite you later when you extend your components).
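
To generalize the arithmetic (under the same idealized assumptions): n components with k behaviours each need n * k unit tests, but up to k^n combined tests to cover every permutation. Add a fourth component to the example and the unit tests grow from 15 to 20, while the permutation count jumps from 125 to 625.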

Additionally, the 125 test cases would have a significantly more complicated setup, a higher turnaround time, and greatly decreased maintainability should the feature ever change. I'd rather opt for the 15 unit tests plus some basic web tests that ensure the components are wired together correctly.

Olaf