views: 718
answers: 13

Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing, I usually have 2-3 lines in the unit test, which ultimately leads to tons of time being spent just typing the tests (mock, mock, and mock some more).

Where are the time savings? Do you ever skip tests for code that borders on trivial? Most of my methods are less than 10 lines long, and testing each one of them takes so much time that, as you can see, I start questioning whether to write most of the tests in the first place.

I am not advocating against unit testing; I like it. I just want to see what factors people consider before writing tests. Tests come at a cost (in time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?

+2  A: 

Well,

This is a trade-off: more tests mean more stability. By stability I mean not only that the code under test is more error-free and foolproof, but also an assurance that the program will not break in the future. However crazy the arguments you pass to a method, the code will return properly (of course, with appropriate error messages wherever required).

Better still, you can write your unit test cases before you even know the internal workings of the method under test. This is a black-box scenario in which you first finish writing your test cases and then start coding. The big advantage is that development converges on error-free code in fewer iterations, because the test cases run in parallel with the implementation.
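
For illustration, a test-first sketch in JUnit might look like the code below; the Calculator class is hypothetical and would not even exist yet at the point the test is written.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {

        // Written before Calculator exists: the test pins down the expected
        // behaviour first, and the implementation is then coded until it passes.
        @Test
        public void addsTwoNumbers() {
            Calculator calculator = new Calculator();
            assertEquals(5, calculator.add(2, 3));
        }
    }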

And the size of the test code does not matter at all. What matters is the comprehensiveness and coverage of your unit tests: whether a test exists in name only, or is a serious test case that handles all the possible inputs.

Bragboy
How do you draw the line? I mean, what do you consider the point of diminishing returns?
Martinho Fernandes
This is understandable and trivial for testing how parts of the program integrate together, but most of the code should not be touched anyway or will be changed in a very trivial manner (in which case you have to modify your unit tests, creating another cost, doh!)
HeavyWave
@HeavyWave: Yeah, those are side effects. But again, it is better to change your tests first, before making the 'trivial' changes.
Bragboy
@Martinho: There is no line here, my friend. But it's a best practice to write tests. That's what I meant initially by trade-off.
Bragboy
@Bragaadeesh: I'm sure there is a point where the value of writing a test is not enough to cover the effort of writing it. Is 100% test coverage a plausible goal? (I don't follow "best practices", I prefer to understand why/when something is good or not and make my choice)
Martinho Fernandes
A: 

Well, yes, it can easily happen that the tests have more LOC than the actual code you are testing, but it is totally worth it considering the time you save when debugging.

Instead of having to test the whole application/library by hand every time you make a change, you can rely on your test suite, and if it fails, you have more accurate information on where it broke than "it does not work".

About avoiding tests: if you don't test certain parts of your code, you are undermining the whole concept and purpose of testing, and then the tests are in fact rather useless.

You do not, however, test code you did not write. That is, you assume that external libraries work properly, and generated getter/setter methods (if your language supports those) do not have to be tested either. It is very safe to assume that assigning a value to a variable won't fail.

dominikh
unless a cosmic ray causes them to fail.
Carson Myers
+11  A: 

You might be testing the wrong thing - you should not have different tests for every method in your code.

You might have too many tests because you test implementation and not functionality - instead of testing how things are done, test what is done.

For example, if you have a customer who is entitled to a discount on every order - create a customer with the correct data, create an order for that customer, and then make sure that the final price is correct. That way you test the actual business logic and not how it's done internally.
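
A minimal JUnit sketch of such a behaviour-focused test might look like this (Customer, Order, CustomerType and the 10% gold discount are assumed names for illustration, not part of the original answer):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountTest {

        // Customer, Order, CustomerType and the 10% gold discount are
        // made-up names for illustration only.
        @Test
        public void goldCustomerGetsDiscountedFinalPrice() {
            Customer customer = new Customer("Alice", CustomerType.GOLD);
            Order order = new Order(customer);
            order.addItem("Widget", 100.00);

            // Assert on the observable result (the final price), not on how
            // the discount is computed internally.
            assertEquals(90.00, order.finalPrice(), 0.001);
        }
    }

Note that the test says nothing about how the discount is applied, so refactoring the internals won't break it.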

Another reason for big tests is a lack of isolation (a.k.a. mocking) - if you need to initialize difficult objects that require a lot of code, try using fakes/mocks instead.
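
For example, with a mocking framework such as Mockito, a hard-to-construct dependency can be replaced in one line instead of being built by hand; the class names below are assumptions for illustration:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;

    public class OrderServiceTest {

        // PaymentGateway and OrderService are illustrative names, not from the answer.
        @Test
        public void orderIsPaidWhenGatewayAcceptsTheCharge() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge(anyDouble())).thenReturn(true);

            OrderService service = new OrderService(gateway);
            boolean paid = service.checkout(42.0);

            assertTrue(paid);
            verify(gateway).charge(42.0);  // verify only the interaction we care about
        }
    }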

And finally, complicated tests can be a smell - if you need to write a lot of code to test a simple piece of functionality, it might mean that your code is tightly coupled and your APIs are not clear enough.

Dror Helper
and I'd argue that unit tests are supposed to test exactly the implementation. Otherwise they are functional tests.
Bozho
@Bozho, I concur. You cannot escape mocking things, which are part of the implementation, and most of my tests are those mock setups.
HeavyWave
@Bozho it doesn't matter what you call it - you should test the requirements and not the implementation. If you test the implementation you will end up with very fragile tests that will break whenever you change your code even if the end result is the same.
Dror Helper
you should test whether individual units behave according to the requirements. But your tests should not be requirement-centered. I.e. you should test whether `createOrder()` successfully creates a new order, rather than testing the whole purchasing process in a single test method.
Bozho
@Bozho Of course you're right - although I would write both unit tests and integration (whole-process) tests. What I meant was that you should not test the actual implementation - don't test how the order was created, only that it was actually created.
Dror Helper
+1  A: 

Testing should be about finding the right balance, which depends on many different factors, such as:

  • Business purpose (think "pacemaker controller" vs "movie inventory manager")
  • Skills of development staff
  • Staff turnover (how often are people added to the developer pool)
  • Complexity (of the software, related to "business purpose" above)

I typically only write tests for the "public API" and thereby only implicitly test any assembly-internal classes used to deliver the public functionality. But as your desire for reliability and reproducibility increases, you should add additional tests.

Morten Mertner
+5  A: 

Unit test code should follow the same best practices as production code. If you have that much unit test code, it smells of a violation of the DRY principle.

Refactoring your unit tests to use Test Utility Methods should help reduce the overall unit test footprint.
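
As a rough sketch of what such a Test Utility Method could look like (JUnit 4 assumed; Customer and CustomerType are hypothetical domain classes):

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class CustomerTests {

        // Test Utility Method: one place to build a valid default customer,
        // so each test only states what actually matters to it.
        private static Customer createDefaultCustomer() {
            return new Customer("Alice", "alice@example.com", CustomerType.REGULAR);
        }

        @Test
        public void newCustomerHasNoOrders() {
            Customer customer = createDefaultCustomer();
            assertTrue(customer.getOrders().isEmpty());
        }

        @Test
        public void customerCanBePromotedToGold() {
            Customer customer = createDefaultCustomer();
            customer.promoteTo(CustomerType.GOLD);
            assertEquals(CustomerType.GOLD, customer.getType());
        }
    }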

Mark Seemann
DRY does not always apply to unit test code. It might be the same now, but the chance that you'll need slightly different data later is quite high.
HeavyWave
I beg to differ. If you don't follow the DRY principle, the unit tests will slow you down every time you want to refactor because you will need to change lots of unit tests. Don't violate the DRY principle just because you *might* need a slightly different setup later. The YAGNI principle also applies here, and in the cases where it turns out that you actually need a slightly different setup, you can always extend your Test Utility Methods to deal with these differences.
Mark Seemann
For most of the code in our application the input is the database. So to test the set of inputs we have to mock the data for each unit test separately, which looks like repetition at first, but it would be completely invalid to take it out of tests.
HeavyWave
You can still write a test utility API that lets you specify only those things you want to deviate from a baseline.
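
One common shape for such an API is a Test Data Builder; the sketch below is illustrative, with the class names and baseline values made up:

    // Illustrative Test Data Builder: every field has a sensible baseline,
    // and a test overrides only the values it actually cares about.
    public class CustomerBuilder {
        private String name = "Alice";
        private CustomerType type = CustomerType.REGULAR;

        public CustomerBuilder named(String name) {
            this.name = name;
            return this;
        }

        public CustomerBuilder ofType(CustomerType type) {
            this.type = type;
            return this;
        }

        public Customer build() {
            return new Customer(name, type);
        }
    }

    // In a test, only the deviation from the baseline is spelled out:
    Customer goldCustomer = new CustomerBuilder().ofType(CustomerType.GOLD).build();
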
Mark Seemann
A: 

One of the things that guides me when I write tests or do TDD (which incidentally I learnt from an answer to one of my questions on SO) is that you don't have to be as careful about the design/architecture of your tests as you have to be about your actual code. The tests can be a little dirty and suboptimal (design-wise) as long as they do their job right. Like all advice on design, this is to be applied judiciously, and there's no substitute for experience.

Noufal Ibrahim
+1  A: 

A very valid and good question. I follow a simple principle when needed.

  1. Set the category for the issues we know (critical, high, low)
  2. See how much time we have and rearrange them by internal discussion
  3. Set the priorities
  4. Then fix the issues

Though all this takes considerable time, as long as we remember that the output should be good and bug-free, and we adhere to the above, things go fine.

Anil Namde
+1  A: 

This is true more often than not. The key to deciding whether it's a good or bad thing is to find out why the tests are bigger.

Sometimes they're bigger simply because there are a lot of test cases to cover, or the spec is complex, but the code to implement the spec is not that lengthy.

Also, consider the time it takes to eliminate bugs. If unit tests prevented certain bugs from happening, ones that would've taken a lot more time to debug and fix, would you argue that TDD made your development longer?

Jon Limjap
+1  A: 
  • If you are testing simple (CRUD) operations, it is entirely logical to have longer tests
  • Otherwise, I suppose you can refactor your code so that repeated code is moved into separate methods
  • If you are using Java, you can use Checkstyle (or other tools) to check for code duplication
Bozho
A: 

Yes, this is normal. It's not a problem that your test code is longer than your production code.

Maybe your test code could be shorter than it is, and maybe not, but in any case you don't want test code to be "clever", and I would argue that once it's written, you shouldn't refactor test code to factor out common parts unless absolutely necessary.

For instance, if you have a regression test for a past bug, then unless you change the public interface under test, don't touch that test code. Ever. If you do, you'll only have to pull out some ancient version of the implementation, from before the bug was fixed, to prove that the new regression test still does its job. Waste of time. If the only reason you ever modify that test code is to make it "easier to maintain", you're just creating busy-work.

It's usually better to add new tests than to replace old tests with new ones, even if you end up with duplicated tests; you're risking a mistake for no benefit. The exception is when your tests take too long to run: then you want to avoid duplication, but even that might be better handled by splitting your tests into "core tests" and "full tests" and running all the old maybe-duplicates less frequently.

Also see http://stackoverflow.com/questions/2556594/sqlites-test-code-to-production-code-ratio

Steve Jessop
+1  A: 

Too much test code could mean that the actual code being tested was not designed for testability. There's a great guide on testability from Google developers that tries to address this issue.

Badly designed code means tons of test code that exists for only one reason: making the actual code testable. With a good design, the tests can focus more on what's important.

Imeron
A: 

In my practice of TDD, I tend to see larger tests (in LOC) for the classes that are closer to the integration points of a system, e.g. database access classes, web service classes, and authentication classes.

The interesting point about these unit tests is that even after I write them I still feel uneasy about whether those classes work, which leads me to write integration tests against the database, web service, or authentication service. It is only after automated integration tests have been established that I feel comfortable moving on.

The integration tests are normally much shorter than their respective unit tests and do more for me and the other developers on the team to prove that this part of the system works.

-HOWEVER-

Automated integration tests come with their own nasties that include handling the larger runtime of the tests, setting up and tearing down the external resources and providing test data.

At the end of the day, I have always felt good about including automated integration tests, but have almost always felt that the unit tests for these "integration" classes were a lot of work for not much payoff.

SargeATM
A: 

Tests that are 2-3 times bigger are NOT normal.

Use helper classes/methods in tests.

Limit scope of tests.

Use test fixtures effectively.

Use test tear-down effectively.

Use unit test frameworks effectively.

And you won't have such tests anymore.
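
As a rough sketch of the fixture and tear-down points, assuming JUnit 4 and made-up Repository/Item classes, a shared fixture plus tear-down keeps each individual test down to a couple of lines:

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.*;

    public class RepositoryTest {

        // Repository and Item are illustrative classes, not from this answer.
        private Repository repository;

        @Before
        public void setUp() {
            // Shared fixture: every test starts from the same known state.
            repository = new Repository();
            repository.connectInMemory();
        }

        @After
        public void tearDown() {
            // Shared tear-down: release resources so tests stay independent.
            repository.close();
        }

        @Test
        public void savedItemCanBeFoundById() {
            repository.save(new Item("42", "Widget"));
            assertEquals("Widget", repository.findById("42").getName());
        }

        @Test
        public void missingItemReturnsNull() {
            assertNull(repository.findById("does-not-exist"));
        }
    }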

grigory