I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:

  1. I only write one method that creates a user and deletes it afterwards
  2. I write two methods. The first one creates the user and saves its ID. The second one deletes that user with the saved ID.
  3. I write two methods. The first one only creates a user. The second method creates a user of its own, so that there is one it can afterwards delete.

I have read that every test method should be independent of the others, which means the third possibility is the way to go. But that also means every method has to set up its test data by itself (e.g. if you want to test whether it's possible to add a user twice).
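For illustration, option 3 might look something like this in PHPUnit (UserManager and its methods are just stand-ins for my real class):

class UserManagerTest extends PHPUnit_Framework_TestCase
{
    public function testCreateUser()
    {
        $manager = new UserManager();
        $id = $manager->createUser('alice');
        $this->assertTrue($manager->userExists($id));
    }

    public function testDeleteUser()
    {
        // Sets up its own user instead of relying on testCreateUser().
        $manager = new UserManager();
        $id = $manager->createUser('bob');

        $manager->deleteUser($id);
        $this->assertFalse($manager->userExists($id));
    }
}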

How would you do it? What is good unit testing style in this case?

+7  A: 

Two different things = Two tests.

Test_DeleteUser() could go in a different test fixture as well, because it needs different Setup() code: ensuring that a User already exists.

[SetUp]
public void SetUp()
{
    // Every test in this fixture starts with an existing user.
    CreateUser("Me");
    // Guard assertion: a failure here is flagged as a setup problem,
    // not a bug in deletion.
    Assert.IsTrue( User.Exists("Me"), "Setup failed!" );
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse( User.Exists("Me") );
}

This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't, you know there is a bug in the section of the code that is responsible for deleting users.
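Since the question is about PHP, here is roughly how that two-fixture split could look in PHPUnit (just a sketch; the class and helper names are illustrative):

class CreateUserTest extends PHPUnit_Framework_TestCase
{
    // No setUp() needed: creating the user is the behaviour under test.
    public function testCreateUser()
    {
        CreateUser("Me");
        $this->assertTrue(User::exists("Me"));
    }
}

class DeleteUserTest extends PHPUnit_Framework_TestCase
{
    protected function setUp()
    {
        // Every test in this fixture starts with an existing user.
        CreateUser("Me");
        // Guard assertion: a failure here is a setup problem, not a deletion bug.
        $this->assertTrue(User::exists("Me"), "Setup failed!");
    }

    public function testDeleteUser()
    {
        DeleteUser("Me");
        $this->assertFalse(User::exists("Me"));
    }
}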

Update: I was just giving some thought to Charlie's comments on the dependency issue, by which I mean that if creation is broken, both tests fail, even though deletion may be fine. The best I could do was to add a guard assertion to SetUp() so that a broken setup shows up in the Errors and Failures tab, to distinguish setup failures from real test failures. (In general, setup failures should be easy to spot because an entire test fixture shows red.)

Gishu
But that also means Test_DeleteUser has a dependency on Test_CreateUser, is that ok? That's something I've never been 100% sure on.
Charlie
@Charlie - It's never OK for Test_DeleteUser to have a dependency on Test_CreateUser. Either put the setup at the top of the test or separate the tests into different fixtures. If you start running tests that depend on each other, one failure can look like many more because of the cascade effect.
AutomatedTester
Ok thanks, that's what I thought, answer has been modified since I put that comment.
Charlie
There's no dependency here. He's done the user-creation in the SetUp(), rather than as a separate Test. But you're right that you shouldn't have dependencies between tests, because I think the order isn't guaranteed.
davewasthere
@Charlie: 2 tests should never depend on each other - because tests should be isolated. Was updating my answer but got side-tracked by my dayjob :)
Gishu
+1  A: 

How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach, so having two different tests.

Test A
  CreateUser("testuser")
  assertTrue(CheckUserInDatabase("testuser"))

Test B
  LoadUserIntoDB("testuser2")
  DeleteUser("testuser2")
  assertFalse(CheckUserInDatabase("testuser2"))

TearDown
  RemoveFromDB("testuser")
  RemoveFromDB("testuser2")

CheckUserInDatabase(string user)
  ... // Access the DAL and check the item is in the DB

If you utilize mocks and stubs, you don't need to access the DAL until you do your integration testing, so you won't need as much work on asserting and setting up the data.
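As a rough sketch of that with PHPUnit's mocking support (the UserRepository collaborator here is hypothetical), the create test never touches a real database:

public function testCreateUserSavesToRepository()
{
    // Stub out the persistence layer so no real DAL access happens.
    $repository = $this->getMock('UserRepository');
    $repository->expects($this->once())
               ->method('save')
               ->with($this->equalTo('testuser'));

    $manager = new UserManager($repository);
    $manager->createUser('testuser');
}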

AutomatedTester
+1  A: 

Usually you should have two methods, but reality still wins over text on paper in the following case:

You need a lot of expensive setup code to create the object under test. This is a code smell and should be fixed, but sometimes you really have no choice (think of code that aggregates data from several places: you really need all those places). In this case, I write mega tests (where a test case can have thousands of lines of code spread over many methods). It creates the database and all tables, fills them with defined data, runs the code under test step by step, and verifies each step.

This should be a rare case. If you need one, you must actively ignore the rule "tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test), plus about a million asserts that would run.

The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and making them fail early, so you know which line of app code to check. The main drawback is that these test cases are hell to maintain: every change to the app code means updating many of the 105 "expected data" files.
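A minimal sketch of that dump-and-compare loop (assuming a PDO connection and pre-recorded expectation files; all names here are illustrative):

foreach ($sqlUpdates as $step => $updateFile) {
    // Apply one SQL update, then snapshot every table.
    $pdo->exec(file_get_contents($updateFile));

    foreach ($tables as $table) {
        $rows = $pdo->query("SELECT * FROM $table ORDER BY id")
                    ->fetchAll(PDO::FETCH_ASSOC);
        $expected = file_get_contents("expected/step{$step}_{$table}.txt");
        // A mismatch names the exact step and table that broke.
        $this->assertEquals($expected, var_export($rows, true),
            "Table '$table' differs after update step $step");
    }
}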

Aaron Digulla