views:

72

answers:

4

Hi,

I have some initialization code to use my API. The initialization may fail, and I'd like to test it in an NUnit test.

After the initialization the API may be used. I'm testing the API too, but all my test methods use the same common initialization code.

What I would ideally like is this behavior:

  1. The Initialization test is run.
  2. The other tests are run if [1] succeeded.

Whenever [1] fails, so will all the other tests. But the valuable information is that [1] fails; that's where I will most likely find the problem. It would be nice if the other tests could be marked with a ? or something, indicating that they did not execute because functionality they depend on didn't pass its tests.

I know that tests should not be brittle. But I can't get around the fact that the initialization code is necessary for correct execution of other functionality.

This is a more general problem where some functionality depends on other functionality, and the "other functionality" is far too commonly used for it to add any real value to have every dependent test fail. It would be better if the "other functionality" were tested separately.

+2  A: 

OK here's how I would go about this...

Put the common initialization into a Setup method, since it's needed for all tests. If initialization throws an exception, you'd see

  • all tests in the suite failing (which, over time, I have learned to recognize as a hint that setup/teardown may have thrown an exception).
  • the stack trace for the failing tests containing the Setup method.

If this is too implicit for you, you can (although I wouldn't recommend it) add an empty, well-named test to the same suite. If that test shows up green, you can be sure the Setup/common init code succeeded.

[Test]
public void VerifySetup() {}
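
For concreteness, a minimal sketch of such a fixture might look like this (ApiClient and Initialize are just placeholders for whatever your API actually exposes):

using NUnit.Framework;

[TestFixture]
public class ApiTests
{
    private ApiClient _api;   // placeholder for your API object

    [SetUp]
    public void Init()
    {
        // Common initialization. If this throws, NUnit reports every test in
        // the fixture as failing, with this method in the stack trace.
        _api = new ApiClient();
        _api.Initialize();
    }

    // The empty, well-named test: if it shows up green, Setup succeeded.
    [Test]
    public void VerifySetup() {}

    [Test]
    public void SomeOtherApiTest()
    {
        // tests that exercise the initialized _api go here
    }
}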

Update: Seems like you have a pretty niche requirement. I don't know of any mechanism in NUnit to specify such conditional execution of tests - e.g. run Tests 2 through 10 only if Test 1 passes.

Gishu
Thank you for your answer! Although I can't say I see it as an answer to my question, so -1. That's exactly what I want to avoid: I want the tests dependent on common functionality to not run, as they provide no additional information. If my test of the common functionality were the only one failing (and the others got an Excluded state or something), it would lead me to the common code immediately. There might be scenarios with lots of common code, for example to use different parts of an API. Using Setup and maybe an empty method seems icky.
Binary255
I respectfully disagree. All the tests in a suite failing gives me a hint that setup/teardown has a problem, and the stack traces then help me confirm it. As for ignoring the tests if Setup failed, how does it matter? Whether all tests in a suite fail or one specially-named test fails, the fact remains that all tests in the suite are broken/non-functional and someone needs to take a look at it. I don't think you can include/exclude tests at runtime in NUnit, at least not without complicating the test code.
Gishu
+1. I can't speak for NUnit, but with DUnit it certainly would be possible to have tests run or not depending on the results of other tests. I see two problems with this: it complicates the test code, as you mentioned, and in pure unit testing, test cases should not depend on each other (although strictly speaking, this would not be a dependency).
Lieven
@Gishu: I am glad you answered and really appreciate your comments! But I don't think we'll reach consensus. With more complex dependencies I can see an advantage in having this kind of ability. Simply using SetUp and separating into different test suites is nice too, but I don't see it as enough in all cases. With JUnit I think this would have been possible. If it isn't possible with NUnit, that's valuable information too; if you're sure, please post that as an answer and I'll accept it, as it's one of the possible answers I'm looking for.
Binary255
@Binary255 - here's something that might work for you, based on a 2005 blog post I hit yesterday. Partition your tests into 'SetupTests' and 'RealTests' using Categories. Then use something like a batch file/script to run two tasks, where the second is only run if the first succeeds: Task 1 runs all tests with category SetupTests; only if Task 1 succeeds, run all tests with category RealTests.
Gishu
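
(A rough sketch of that partitioning, for illustration only - the fixture and test names are made up, and the exact console-runner switches depend on your NUnit version:)

using NUnit.Framework;

[TestFixture]
public class SetupTests
{
    [Test, Category("SetupTests")]
    public void ApiInitializes()
    {
        // the initialization test goes here
    }
}

[TestFixture]
public class ApiTests
{
    [Test, Category("RealTests")]
    public void SomeApiBehaviour()
    {
        // a test that assumes a working initialization
    }
}

// The driving script then runs the runner twice, something like:
//   nunit-console MyTests.dll /include:SetupTests
//   nunit-console MyTests.dll /include:RealTests   (only if the first run succeeded)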
@Binary255: One thing you're not considering - what happens if your setup fails but the tests that depend on it still don't? You have no idea unless all of your tests run - you could have a whole suite full of invalid tests without ever knowing it!
SnOrfus
"Update: Seem like you have a pretty niche requirement. I don't know of any mechanism in NUnit to specify such conditional execution of tests - e.g. Run Test2 thru 10 only if Test1 passes."Yes I am afraid it is. I'm thankful for your time and your nice reply!
Binary255
A: 

Try this:

  1. Define the "Initialization test" as a TestCase

  2. Make all your other tests a subclass of this TestCase

  3. Create a Suite which runs your tests in some specific order so that your "Initialization test" is first (a rough sketch follows below).
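
(For illustration, an NUnit-flavoured sketch of this layout - the names are made up, and note that NUnit runs inherited tests in each derived fixture but does not guarantee any particular ordering:)

using NUnit.Framework;

// Base fixture: holds the initialization test.
[TestFixture]
public class InitializationTest
{
    protected ApiClient Api;   // placeholder for your API object

    [Test]
    public void ApiInitializes()
    {
        Api = new ApiClient();
        Api.Initialize();
    }
}

// Every other fixture subclasses it, so the inherited initialization
// test runs as part of each suite as well.
[TestFixture]
public class CommandTests : InitializationTest
{
    [Test]
    public void SomeCommandWorks()
    {
        // would exercise the API here
    }
}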

S.Lott
-1. That won't prevent the other test cases from executing if my initialization test fails.
Binary255
@Binary255. "If my test with the common functionality ... [fails] it would lead me to the common code immediately". That solves your problem. It doesn't need to be the **only** test unless you're claiming that you're unable to read the logs or something. Clearly, you have brains. You **can** read the logs. The common stuff will be clearly identified as failing.
S.Lott
Sure. I don't think it's a big problem, but it's not what I am looking for. I want some tests to be ignored if another one fails. If it were only about the order, couldn't I just place the initialization test first and separate the actual initialization code into a private (non-test) helper method? We are monitoring each test case and generating statistics. If the initialization code fails, we want that to show up on the test that tests the initialization code. After the initialization test we are testing parts of an API, and we want to run those tests only when they have a chance of succeeding.
Binary255
If one test fails often, it could indicate that the code it tests should be rewritten. We want to be specific and not get such indications every time a step we always need to make fails. In our opinion it only clutters things up.
Binary255
@Binary255: You're missing the point. Testing is not "discovery". Your plan is to have all tests pass all the time. Period. If all tests don't pass, you're no longer testing, you're debugging. The level of "clutter" in the test log doesn't matter any more, because -- once the tests fail -- you have all the information you need. Once a test fails, you're off into debugging, which is a separate activity with separate tools.
S.Lott
A: 

I think this is pretty clear cut. The problem is that you're having a hard time separating out your API responsibilities. You have two: initialization of the API and execution of the API. Writing your tests to have this kind of dependency can kill you.

So I would recommend that the API create an initialization object and then various command objects to execute the API. The command objects could live in some kind of store, or you could create them on the fly.

The API will use a mocked initialization object and it will use mocked command objects.

The Initialization Object really doesn't have any dependencies except for whatever you need to initialize.

The Command objects will need a mocked initialization object.
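
(A rough sketch of that shape - the interface and class names are invented for illustration, and the "mock" here is a hand-rolled stub rather than any particular mocking library:)

using System;

// The initialization concern pulled out behind an interface.
public interface IApiInitialization
{
    void Initialize();
    bool IsInitialized { get; }
}

// A command object that depends only on the initialization abstraction.
public class GetCustomerCommand
{
    private readonly IApiInitialization _init;

    public GetCustomerCommand(IApiInitialization init)
    {
        _init = init;
    }

    public void Execute()
    {
        if (!_init.IsInitialized)
            throw new InvalidOperationException("API not initialized");
        // ... real work against the API ...
    }
}

// In a test, the command is given a stubbed initialization object, so its
// behaviour can be checked without running the real initialization code.
public class StubInitialization : IApiInitialization
{
    public void Initialize() { }
    public bool IsInitialized { get { return true; } }
}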

[EDIT]

There are two ways to get the other tests to fail if the initialization test fails:

  1. Add a private variable to the test case, e.g. private bool isInitialized = false;. Then have all your other tests check this variable and fail if it isn't true (a sketch follows below).

  2. Have your test case extend the API class, and add a private function that interrogates the state of the initialization.

The cleaner of the two is [2]. The fastest to implement is [1].
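
(A minimal sketch of option [1]; it relies on NUnit sharing one fixture instance across the tests and on the initialization test running first, which you would need to verify for your NUnit version:)

using NUnit.Framework;

[TestFixture]
public class ApiIntegrationTests
{
    private bool isInitialized = false;

    [Test]
    public void A_InitializationSucceeds()
    {
        // ... run the real initialization here ...
        isInitialized = true;   // only reached if nothing threw
    }

    [Test]
    public void DependentApiTest()
    {
        if (!isInitialized)
        {
            // Assert.Ignore would mark the test as ignored instead of failed.
            Assert.Fail("Initialization has not succeeded");
        }
        // ... the actual test ...
    }
}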

IMHO it can be a code smell when you have to couple your tests in such a manner. If, as you said, these are integration tests, why do you have a separate test for initialization? Integration is more along the lines of running some action against your API, so each integration test MUST initialize the API.

You might want to rethink your cascading-failure scenario; it might produce too much noise when the tests complete.

[EDIT1a]

The only way I can see to satisfy your requirements is to extend NUnit. Specifically look into Test Case Builders and Test Decorators.

The link is dated March 2008, so hopefully it isn't too out of date.

Gutzofter
-1. As I am writing an integration test, I do not want to mock anything. It would be different if I were writing a unit test. And sometimes things have to be executed in order; it's not always a sign of bad design.
Binary255
@Binary255 - Sorry I did not see any relevant information concerning integration testing. Your tag was unit-testing. See my edit for more...
Gutzofter
@Gutzofter: Ah, sorry about that. I notice now that I didn't mention I was doing integration testing anywhere. :-( I've changed the tag.
Binary255
For the tests I know will fail if initialization fails, I want them to not execute, as if they had the Ignore attribute. My reasoning is that their failures will only clutter up the results, and the test that would actually tell me where the problem lies is the initialization test. So I think I get less noise by having some tests not execute if another one fails. I realize this could get complicated and out of hand if one is not careful. And it certainly could be a code smell when things that shouldn't depend on one another do, but for my scenario I don't think that is true.
Binary255
And from your edit, "There are two ways to get the other tests to fail if the initialization test fails": I do not want the other tests to fail, I want them to be Ignored. There is a big difference. The ignore state will alert me that they haven't been executed; I won't get misleading information that the tests succeeded or failed. I am immensely grateful for your time and nice reply! I'll have to stand by my -1 though, as it doesn't answer the problem. That I gave a misleading tag to begin with is my fault; I'm sorry about that.
Binary255
+1  A: 

I've been in contact with the NUnit developers. It's not possible at the moment without writing a pretty complex plugin. The feature will turn up somewhere in the 3.x code base but will not appear in 2.5. I will consider writing it, but not for the time being.

Binary255