I have my unit tests. Each test method tests a logical UNIT of functionality in my system. In my unit tests, external dependencies (DB, files, etc.) are dealt with using mocks and fakes.

Now I am not 100% sure how I should approach the integration tests. Should I repeat the unit tests and replace the mocks with the actual resources (DB, files, etc.), or should I be testing for lower-level things such as:

1) Can ping Database
2) Can retrieve one record
3) Does File exist
etc...

My gut feeling is that I should avoid business logic in this phase, as most of it should already have been covered by the unit tests, right?

Thanks

EDIT: I was a bit lazy in composing my question. What I also wanted to know was: if I need to test business logic in my integration phase, how should I set up my test suites so as to minimize test code repetition? Take for example:

[TestMethod] //Unit test
public void CanGetData()
{
    IRepository rep = new MockRepository();
    var result = rep.GetData();
    Assert.IsNotNull(result);
}

[TestMethod] //Integration test
public void CanGetData()
{
    IRepository rep = new Repository(); //real repository
    var result = rep.GetData();
    Assert.IsNotNull(result);
}

What test structure works for you? Do you use the unit test assembly directly in your integration project and inject the correct resources?

+2  A: 

Integration tests should not avoid any business logic. The point of an integration test is to verify the behavior of the different pieces of your application domain working together as a cohesive whole. That includes your business logic.

Unit testing verifies that a single unit of work operates correctly under given conditions. However, it does not guarantee that that unit of work will operate correctly alongside the other units of work in your system. This is where integration tests play a key role in your testing suite.

The actual resources of your system (DB, files, etc.) should be introduced into your test suite at some point, although not in your unit tests. Most people find integration tests an appropriate place to include them. Note that bringing environmental resources into your test suite can be a bit of an undertaking, and it will definitely slow your integration tests down.

I would also keep my unit tests and my integration tests separate. I prefer separate assemblies. That way I can run each suite on its own through my test runner and get results for each. The reason, again, is that your integration tests will typically take much longer to run than your unit tests.
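
If separate assemblies ever feel too heavyweight, a lighter-weight variation (a sketch, assuming a version of MSTest/Visual Studio that supports the [TestCategory] attribute and the vstest runner) is to tag tests by category and filter them at run time:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RepositoryTests
{
    [TestMethod, TestCategory("Unit")] // fast, uses the mock
    public void CanGetData_WithMockRepository()
    {
        IRepository rep = new MockRepository();
        Assert.IsNotNull(rep.GetData());
    }

    [TestMethod, TestCategory("Integration")] // slow, hits the real database
    public void CanGetData_WithRealRepository()
    {
        IRepository rep = new Repository();
        Assert.IsNotNull(rep.GetData());
    }
}

// Run only the fast suite from the command line, for example:
//   vstest.console.exe Tests.dll /TestCaseFilter:"TestCategory=Unit"

Separate assemblies still give you the hardest separation, though; categories rely on everyone remembering to filter.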

Joseph
+2  A: 

Well, not to be coy, but in your integration tests you should test integration. You want broader tests that demonstrate components working together, rather than tests of individual units. At some point you also want to demonstrate that your system works with the real resources instead of the mocks.

So, yes, generally you should test with the real database, etc. You should also test business logic, even if it has already been unit-tested. Eventually you should run scenarios that exercise every user-accessible function from start to finish, confirming that the results (including the contents of the database) are as expected.
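
For example, a scenario-style test might drive a user-facing operation end to end and then look in the database directly. A rough sketch follows; OrderService, its PlaceOrder method, the connection string, and the Orders schema are all hypothetical stand-ins for your own code:

using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PlaceOrderScenarioTests
{
    [TestMethod] // integration: full scenario against the real database
    public void PlacingAnOrder_WritesTheExpectedRow()
    {
        // Drive the feature the way a user would (OrderService is hypothetical).
        var service = new OrderService();
        int orderId = service.PlaceOrder(42, "ABC-123");

        // Confirm the expected data actually landed in the database.
        using (var conn = new SqlConnection("<test DB connection string>"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Orders WHERE OrderId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", orderId);
                Assert.AreEqual(1, (int)cmd.ExecuteScalar());
            }
        }
    }
}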

Charlie Martin
Charles for president!
ojblass
+1  A: 

Unit tests check whether a single component works, a "component" being the smallest thing you can build that does something on its own. They verify the internal workings of that component.

Integration tests verify the interfaces between components. Examples: Can my class write data to a real database? Does it handle errors from the database correctly? When I put this data into the database, will I see it in the webapp?
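
A couple of tests along those lines might look like the sketch below; Customer, Save, GetById, the connection-string constructor, and DataAccessException are assumptions about your code, not real APIs:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RepositoryInterfaceTests
{
    [TestMethod] // can my class write data to a real database?
    public void CanPersistAndReloadACustomer()
    {
        var rep = new Repository(); // real repository, real database
        var saved = rep.Save(new Customer { Name = "Test customer" });
        Assert.IsNotNull(rep.GetById(saved.Id));
    }

    [TestMethod] // does it handle errors from the database correctly?
    [ExpectedException(typeof(DataAccessException))] // hypothetical exception type
    public void ThrowsAClearErrorWhenTheDatabaseIsUnreachable()
    {
        var rep = new Repository("server=no-such-host;database=Test");
        rep.GetData();
    }
}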

Usually, the line between the two is a bit blurry. You will happily put a "can class X persist itself?" check in a unit test, despite the fact that it is really an integration test.

Most projects separate the tests by effort: if most developers can run them without any setup, they are called "unit tests". If you need to prepare several machines (load data, start programs, make sure the right version is where it belongs), then that is what I would call an "integration test".

Note that both kinds of tests can (and should) be automated.

Aaron Digulla
+1  A: 

YMMV greatly with definitions here. IMHO the term "Unit Testing" has suffered from linguistic drift. (See my blog post on this for more information).

It sounds like you have a good understanding of what a unit test is. In that case it may be easier to define integration tests by what they are not: given your definition, an integration test is effectively any automated test that isn't a unit test. I think of integration tests as the mutually exclusive category that encompasses all the other techniques we use to automate tests in which the components of a system are actually communicating with each other. This means, as you say, that the external dependencies really exist in your test's context.

Others may or may not agree, but the important point to take away is that the maintainability of a given test decreases as the size of your test context increases. The larger the context, the slower and less maintainable the test will be.

Because of this, you really want to consider what you're going to get out of a test at this level. You'll need continuous integration to maintain integration tests, and you'll probably need to schedule them to run on an interval if they take a long time. When they break, they're often harder to diagnose (because they're more complex), so you want to be sure a test provides clear business value if it is to run continuously in your suite. Another way to say this: it's worse to have bad tests than no tests. This is why unit tests are of the utmost importance to you as a developer; testing at levels higher than an isolated unit/component provides less bang for the buck.

Naming and documenting can help a lot here, but just be careful. Write integration tests that are aimed directly at requirements/features or at regressions/bugs. If it's a "smoke test", test the things you care about the most, or the things that break the most. You need to be pragmatic.

Hope this helps.

cwash
At the last count, my current project is at approximately 550 unit test methods! So yes, as the project progresses it's becoming a monster to maintain. I think we are past the point of no return now. Hence my question about integration testing and how to minimize test code repetition. Will read your blog in the morning :)
Gotcha. I can empathize. :) I would stand behind my point that integration tests aren't going to help you maintain anything; arguably, they make maintenance harder. If you introduce a higher-level test, make sure it provides clear value for you! Good luck!
cwash
To elaborate: you may have less repetition of the actual code that exercises a given scenario, but there is a *lot* of duplication involved in creating isolated integration tests (i.e., set up all of this infrastructure for test1, run test1, tear down a whole lot of infrastructure, repeat...).
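
(One way to tame that within a fixture is to hoist the expensive setup into class-level hooks so it runs once per fixture rather than once per test; a minimal MSTest sketch, with the TestDatabase helpers being hypothetical:)

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderIntegrationTests
{
    [ClassInitialize] // runs once before any test in this class
    public static void StartInfrastructure(TestContext context)
    {
        TestDatabase.CreateAndSeed(); // hypothetical helper
    }

    [ClassCleanup] // runs once after all tests in this class
    public static void StopInfrastructure()
    {
        TestDatabase.Drop(); // hypothetical helper
    }

    [TestMethod]
    public void CanGetData()
    {
        IRepository rep = new Repository();
        Assert.IsNotNull(rep.GetData());
    }
}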
cwash
A: 

To answer your edited question (you may want to rephrase it, or break it off into a separate question entirely...): you definitely want to separate concerns, and you definitely want to use dependency injection. Isolate dependencies and inject them using configuration; this lets you swap out objects in the configuration rather than in the code, keeping the initialization of objects out of your test code altogether. (You would also use configuration for your staging and production environments...)

I typically use Spring for this purpose, but any DI/IoC container will give you this capability. This is the most configurable approach, and works well if you're already doing dependency injection. It also aligns well with the idea of keeping the two suites separate.
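
Concretely, whether the concrete type comes from a container or from a simple factory method, you can get the suite-level swap the question asks about with one shared, abstract fixture. A sketch follows (the repository types are from the question; note that whether a runner discovers inherited test methods across assemblies depends on your test framework version):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Shared test logic lives once in an abstract base class; each suite
// only decides which IRepository implementation gets injected.
public abstract class RepositoryContractTests
{
    protected abstract IRepository CreateRepository();

    [TestMethod]
    public void CanGetData()
    {
        IRepository rep = CreateRepository();
        Assert.IsNotNull(rep.GetData());
    }
}

[TestClass] // lives in the unit test suite
public class MockRepositoryTests : RepositoryContractTests
{
    protected override IRepository CreateRepository()
    {
        return new MockRepository();
    }
}

[TestClass] // lives in the integration test suite
public class RealRepositoryTests : RepositoryContractTests
{
    protected override IRepository CreateRepository()
    {
        return new Repository(); // real repository, real resources
    }
}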

cwash