anti-pattern: there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:

  • Some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
  • A refactored solution that is clearly documented, proven in actual practice and repeatable.

Vote for the TDD anti-pattern that you have seen "in the wild" one time too many.
See the blog post by James Carr and the related discussion on the testdrivendevelopment Yahoo group.

If you've found an 'unnamed' one, post it too. One post per anti-pattern, please, to make the votes count for something.

My vested interest is to find the top-n subset so that I can discuss 'em in a lunchbox meet in the near future.

+22  A: 

The Mockery

Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out what isn't being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn't even being tested at all; instead, data returned from mocks is what is being tested. (Source: James Carr's post.)
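A minimal sketch of the shape (hypothetical names, assuming JUnit and Mockito): every collaborator is mocked away, so the assertion can only ever verify the stub itself.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class MockeryTest {
    interface PriceCalculator { double total(); }

    @Test
    public void testTotal() {
        // the "system under test" has been mocked out of existence...
        PriceCalculator calculator = mock(PriceCalculator.class);
        when(calculator.total()).thenReturn(42.0);

        // ...so this can never fail: it only asserts the stubbed value
        assertEquals(42.0, calculator.total(), 0.001);
    }
}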

Now you see me, now you don't -- Children at play

Gishu
I believe the cause for this is that your class under test has way too many dependencies. Refactored alternative is to extract code that can be isolated.
Spoike
@Spoike: if you're in a layered architecture, that really depends on the role of the class; some layers tend to have more dependencies than others.
krosenvold
I saw recently, in a respected blog, the creation of a mock entity set up to be returned from a mock repository. WTF? Why not just instantiate a real entity in the first place? Myself, I just got burned by a mocked interface where my implementation was throwing NotImplementedExceptions all around.
Thomas Eyde
+16  A: 

The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage, but knows so much about what is going on in the object that any attempt to refactor will break the existing test and require any change to be reflected in the unit test.


'how do I test my member variables without making them public... just for unit-testing?'
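For illustration, a minimal JUnit sketch (hypothetical names) of where that question leads: internals widened 'just for unit-testing' and asserted on directly, so any refactoring breaks the test even when behavior is unchanged.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class InspectorTest {
    static class ShoppingCart {
        // widened from private to package-private "just for unit-testing"
        List<String> items = new ArrayList<String>();
        int addCount;

        void add(String item) {
            items.add(item);
            addCount++;
        }
    }

    @Test
    public void testAdd() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book");

        // asserting on every internal detail: rename a field or swap the
        // List for another structure and the test breaks, even though the
        // observable behavior is identical
        assertEquals(1, cart.items.size());
        assertEquals("book", cart.items.get(0));
        assertEquals(1, cart.addCount);
    }
}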

Gishu
Cause: absurd reliance on white-box testing. There are tools for generating these kinds of tests, like Pex on .NET. Refactored solution: test for behavior instead, and if you really need to check boundary values then let automated tools generate the rest.
Spoike
Before Moq came around, I had to abandon mocking frameworks in favor of handwriting my mocks. It was just too easy to tie my tests to the actual implementation, making any refactoring next to impossible. I can't tell the difference, other than that with Moq, I rarely make these kinds of mistakes.
Thomas Eyde
+10  A: 

The Giant

A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many, many test cases. This can be an indicator that the system under test is a God Object (James Carr's post).

A sure sign of this one is a test that spans more than a few lines of code. Often, the test is so complicated that it starts to contain bugs of its own or flaky behavior.

I stood on a mountain of work and I knew it must be good.

Gishu
+9  A: 

The Slow Poke

A unit test that runs incredibly slowly. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day. (Src: James Carr's post)

a.k.a. the tests that won't get run as frequently as they should

A good think takes its while. -- Parents all over the world.

Gishu
Some tests run slowly by their very nature. If you decide to not run these as often as the others, then make sure that they at least run on a CI server as often as possible.
Christian Vest Hansen
A good *thing* takes its while?
trenton
+27  A: 

The Local Hero

A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.

The Hidden Dependency

Closely related to the Local Hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn't populated, the test will fail and leave little indication to the developer of what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.
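A minimal JUnit sketch of a Local Hero (the path is hypothetical): the test quietly assumes a file that exists only on the author's machine.

import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;

public class LocalHeroTest {
    @Test
    public void testLoadConfig() {
        // present on the author's dev box, nowhere else
        File config = new File("C:/dev/myapp/app.ini");
        assertTrue(config.exists());
    }
}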


Sadly, I've seen this far too many times with ancient .dlls that depend on nebulous and varied .ini files which are constantly out of sync on any given production system, let alone extant on your machine without extensive consultation with the three developers responsible for those dlls. Sigh.

Leave me alone, will you! It works for me! -- Unknown Developer

annakata
That's a nice example of the WOMPC developer acronym: "Works on my PC!" (usually said to get testers off your back).
MadKeithV
+16  A: 

Excessive Setup -- James Carr
A test that requires a huge setup in order to even begin testing. Sometimes several hundred lines of code are used to prepare the environment for one test, with several objects involved, which can make it difficult to really ascertain what is tested due to the “noise” of all of the setup going on. (Src: James Carr's post)

I'll need the services, the frameworks, the databases, .. Oh My!

Gishu
+23  A: 

Chain Gang

A couple of tests that must run in a certain order, i.e. one test changes the global state of the system (global variables, data in the database) and the next test(s) depends on it.

You often see this in database tests. Instead of doing a rollback in teardown(), tests commit their changes to the database. Another common cause is that changes to the global state aren't wrapped in try/finally blocks which clean up should the test fail.
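An in-memory JUnit sketch of the chain (hypothetical names); the database variant has the same shape, with committed rows in place of the static field.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ChainGangTest {
    // shared mutable state -- the invisible chain between the tests
    static int balance = 0;

    @Test
    public void testDeposit() {
        balance += 100;
        assertEquals(100, balance);
    }

    @Test
    public void testWithdraw() {
        // only passes if testDeposit ran first; run alone, or in a
        // different order, it fails
        balance -= 40;
        assertEquals(60, balance);
    }
}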

I feel like a puppet, invisible strings pull at me ... driving me on ... all I can hope is to stay on my feet because I fear what will happen should I stumble and fall ...

Aaron Digulla
this one is just plain nasty.. breaks the 'tests must be independent' notion. But I've read about it in multiple places.. guess 'popular TDD' is pretty messed up
Gishu
+12  A: 

Anal Probe

A test which has to use insane, illegal or otherwise unhealthy ways to perform its task, like: reading private fields using Java's setAccessible(true), extending a class to access protected fields/methods, or having to put the test in a certain package to access package-global fields/methods.

If you see this pattern, the classes under test use too much data hiding.

The difference to The Inspector is that the class under test tries to hide even the things you need to test. So your goal is not to achieve 100% test coverage but to be able to test anything at all. Think of a class that has only private fields, a run() method without arguments and no getters at all. There is no way to test this without breaking the rules.
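A minimal JUnit sketch of exactly that shape (hypothetical names): private state, a no-arg run(), no getters.

import static org.junit.Assert.assertEquals;

import java.lang.reflect.Field;
import org.junit.Test;

public class AnalProbeTest {
    // only private fields, a run() without arguments, no getters at all
    static class Job {
        private int processed;
        void run() { processed = 10; }
    }

    @Test
    public void testRun() throws Exception {
        Job job = new Job();
        job.run();

        // the only way in is to break the rules
        Field field = Job.class.getDeclaredField("processed");
        field.setAccessible(true);
        assertEquals(10, field.getInt(job));
    }
}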

Hello, Mr. Anderson. That will not hurt at all ... not me anyway ... now hold still ...


Comment by Michael Borgwardt: This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.

Aaron Digulla: I kind of agree. Maybe this entry is really better suited for a "JUnit HOWTO" wiki and not an antipattern. Comments?

Aaron Digulla
isn't this the same as the Inspector?
Gishu
No, the Inspector strives to achieve the utmost code coverage. This one here tries to test anything at all. Think of a class which has only private fields, a run() method without arguments and no getters at all.
Aaron Digulla
Hmm.. this line 'the class under test tries to hide even the things you need to test' indicates a power struggle between the class and the test. If it should be tested.. it should be publicly reachable somehow.. via class behavior/interface.. this somehow smells of breaching encapsulation
Gishu
This most often happens when you need to access some service from a third party API. Try to write a test for the Java Mail API or MQSeries which doesn't actually modify any data or need a running server ...
Aaron Digulla
Also try writing a unit test for a Maven2 plugin...
npellow
npellow: Maven2 has a plugin for that, hasn't it?
Aaron Digulla
This is not really a test antipattern, it's pragmatism to deal with deficiencies in the code being tested. Of course it's better to fix those deficiencies, but that may not be possible in the case of 3rd party libraries.
Michael Borgwardt
@Michael: Yes, the antipattern here is exactly that the test should be testing externally visible behavior instead of poking into internals. Such tests frequently break when the SUT is refactored... same as the Inspector. The test author is doing the easy thing instead of the right thing.. this anti-pattern is deodorant sprayed over the design smells of the code.. Over an extended period, you have a tangled mess of tests that are a pain to maintain.
Gishu
@Gishu: Still, sometimes you *cannot* do the right thing - for instance when, as I wrote, your test involves code that you don't control.
Michael Borgwardt
@Michael: Aah.. you're speaking of scenarios involving legacy code/third party code. This post (most of it) deals with greenfield TDD if I'm not mistaken. For legacy code, it might be ok (although I'd still try to fix the design if it's a 1-2 day effort). For third party code, you definitely should not be testing it. E.g. I'd not write unit tests for classes in the .NET framework... in short, you don't write tests for code that you don't control. What you might want to do there is write interface-level tests so that you know if a new version of the dll breaks your code.
Gishu
+4  A: 

Doppelgänger

In order to test something, you have to copy parts of the code under test into a new class with the same name and package and you have to use classpath magic or a custom classloader to make sure it is visible first (so your copy is picked up).

This pattern indicates an unhealthy amount of hidden dependencies which you can't control from a test.

I looked at his face ... my face! It was like a mirror but made my blood freeze.

Aaron Digulla
+6  A: 

The Butterfly

You have to test something which contains data that changes all the time, like a structure which contains the current date, and there is no way to nail the result down to a fixed value. The ugly part is that you don't care about this value at all. It just makes your test more complicated without adding any value.
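A small JUnit sketch (hypothetical names, assuming the timestamp can't be injected): the ever-changing value forces the assertion to degrade into a vague prefix check.

import static org.junit.Assert.assertTrue;

import java.util.Date;
import org.junit.Test;

public class ButterflyTest {
    static String buildAuditRecord(String user) {
        return user + " logged in at " + new Date(); // changes every run
    }

    @Test
    public void testAuditRecord() {
        // we don't care about the timestamp at all, but because it can't
        // be pinned down, the assertion weakens to a prefix check
        assertTrue(buildAuditRecord("alice").startsWith("alice logged in at "));
    }
}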

The bat of its wing can cause a hurricane on the other side of the world. -- Edward Lorenz, The Butterfly Effect

Aaron Digulla
Nitpicker from Carr's post?
Gishu
Haven't read it, yet :)
Aaron Digulla
+11  A: 

Inappropriately Shared Fixture -- Tim Ottinger
Several test cases in the test fixture do not even use or need the setup/teardown. Partly due to developer inertia about creating a new test fixture... it's easier to just add one more test case to the pile.
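A tiny JUnit sketch (hypothetical names): the second test never touches the fixture it inherits.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class SharedFixtureTest {
    private StringBuilder report;

    @Before
    public void setUp() {
        report = new StringBuilder("header\n"); // imagine something costly
    }

    @Test
    public void testReportHasHeader() {
        assertTrue(report.toString().startsWith("header"));
    }

    @Test
    public void testUpperCase() {
        // dropped here out of inertia: uses nothing from setUp() at all
        assertEquals("ABC", "abc".toUpperCase());
    }
}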

Gishu
It may also be that the class under test is trying to do too much.
Mike Two
+32  A: 

The Free Ride / Piggyback -- James Carr, Tim Ottinger
Rather than write a new test case method to test another/new feature or functionality, a new assertion rides along in an existing test case.
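A minimal JUnit sketch (hypothetical names): the second assertion is a different feature hitching a ride instead of getting its own named test case.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FreeRideTest {
    @Test
    public void testUpperCase() {
        assertEquals("ABC", "abc".toUpperCase());
        // the free rider: unrelated behavior with no test of its own; if
        // the first assertion fails, this one is never even reached
        assertEquals("abc", "ABC".toLowerCase());
    }
}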

The unique ability of the Ninja is to merge with almost any background. You simply don't notice them until it is too late.

Gishu
Yeah, that's my favorite one. I do it all the time. Oh... wait... you said that this was a *bad* thing. :-)
guidoism
+6  A: 

The Cuckoo -- Frank Carver
A unit test which sits in a test case with several others, and enjoys the same (potentially lengthy) setup process as the other tests in the test case, but then discards some or all of the artefacts from the setup and creates its own.
Advanced Symptom of : Inappropriately Shared Fixture

One day, I might need it -- Unknown Developer

Gishu
+3  A: 

The Mother Hen -- Frank Carver
A common setup which does far more than the actual test cases need. For example creating all sorts of complex data structures populated with apparently important and unique values when the tests only assert for presence or absence of something.
Advanced Symptom of: Inappropriately Shared Fixture

I don't know what it does ... I'm adding it anyway, just in case. -- Anonymous Developer

Gishu
+26  A: 

Happy Path

The test stays on happy paths (i.e. expected results) without testing for boundaries and exceptions.

JUnit Antipatterns
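A minimal JUnit sketch (hypothetical names): only the sunny-day case is exercised.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class HappyPathTest {
    static int divide(int a, int b) { return a / b; }

    @Test
    public void testDivide() {
        // nothing probes b == 0, negative values, or overflow
        assertEquals(2, divide(10, 5));
    }
}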

Cause: either exaggerated time constraints or blatant laziness. Refactored solution: get some time to write more tests to get rid of the false positives. The latter cause needs a whip. :)
Spoike
+4  A: 

The Secret Catcher -- Frank Carver
A test that at first glance appears to be doing no testing, due to the absence of assertions. But "the devil is in the details".. the test is really relying on an exception to be thrown, and expecting the testing framework to capture the exception and report it to the user as a failure.

[Test]
public void ShouldNotThrow()
{
   // no asserts: the test passes as long as no exception escapes,
   // and the framework reports any thrown exception as a failure
   DoSomethingThatShouldNotThrowAnException();
}
Gishu
This can in fact be a valid test, in my opinion - especially as a regression test.
Ilja Preuß
sorry, again got this confused with the Silent Catcher... unit tests should state intent clearly about what is being tested rather than saying 'this should work'.. (+1 to something is better than nothing, esp. if you're in legacy regression country)
Gishu
In these kinds of tests, I at least catch the Exception and assign it to a variable. Then I assert that it is not null.
Thomas Eyde
+21  A: 

The Silent Catcher -- Kelly?
A test that passes if an exception is thrown.. even if the exception that actually occurs is different from the one the developer intended.
See Also: Secret Catcher

[Test]
[ExpectedException(typeof(Exception))]
public void ItShouldThrowDivideByZeroException()
{
   // some code that throws another exception yet passes the test
}

That can't ever happen ... -- Comment in code

Gishu
+21  A: 

Second Class Citizens - test code isn't as well refactored as production code, containing a lot of duplicated code, making it hard to maintain tests.

Ilja Preuß
+10  A: 

The Test With No Name -- Nick Pellow

The test that gets added to reproduce a specific bug in the bug tracker, and whose author thinks it does not warrant a name of its own. Instead of enhancing an existing, lacking test, a new test is created called testForBUG123.

Two years later, when that test fails, you may need to first try and find BUG-123 in your bug tracker to figure out the test's intent.

My name is Nobody. -- Terence Hill

npellow
So true. Tho that is slightly more helpful than a test called "TestMethod"
DeletedAccount
unless the bug tracker changes, and you lose the old tracker and its issue identifiers... so PROJECT-123 no longer means anything....
Chii
+6  A: 

The Forty Foot Pole Test

Afraid of getting too close to the class they are trying to test, these tests act at a distance, separated by countless layers of abstraction and thousands of lines of code from the logic they are checking. As such they are extremely brittle, and susceptible to all sorts of side-effects that happen on the epic journey to and from the class of interest.

Don't touch it; it might break -- Unknown Developer

+5  A: 

The Turing Test

A testcase automagically generated by some expensive tool that has many, many asserts gleaned from the class under test using some too-clever-by-half data flow analysis. Lulls developers into a false sense of confidence that their code is well tested, absolving them from the responsibility of designing and maintaining high quality tests. If the machine can write the tests for you, why can't it pull its finger out and write the app itself!

Hello stupid. -- World's smartest computer to new apprentice (from an old Amiga comic).

+4  A: 

The Environmental Vandal

A 'unit' test which for various 'requirements' starts spilling out into its environment, using and setting environment variables / ports. Running two of these tests simultaneously will cause 'unavailable port' exceptions etc.

These tests will be intermittent, and leave developers saying things like 'just run it again'.

One solution I've seen is to randomly select a port number to use. This reduces the possibility of a conflict, but clearly doesn't solve the problem. So if you can, always mock the code so that it doesn't actually allocate the unsharable resource.
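A minimal JUnit sketch of the port-grabbing variant (hypothetical names, port chosen for illustration): run two of these at once, e.g. on a busy CI server, and one dies with "Address already in use".

import static org.junit.Assert.assertTrue;

import java.net.ServerSocket;
import org.junit.Test;

public class EnvironmentalVandalTest {
    @Test
    public void testServerStarts() throws Exception {
        // grabs a fixed port straight from the shared environment
        ServerSocket socket = new ServerSocket(8080);
        try {
            assertTrue(socket.isBound());
        } finally {
            socket.close();
        }
    }
}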

gcrain
@gcrain.. tests should be deterministic. IMO a better approach would be to use a 'well-known-in-the-team' port for testing and cleanup before and after the test correctly such that it's always available...
Gishu
@gishu - the problem is not that there are no setup() and teardown() methods to handle using these ports. the problem is for example running a CI server, and multiple versions of the test run at the same time, attempting to use the same, hardcoded-in-the-test port numbers
gcrain
+4  A: 

The Sleeper, aka Mount Vesuvius -- Nick Pellow

A test that is destined to FAIL at some specific time and date in the future. This is often caused by incorrect bounds checking when testing code which uses a Date or Calendar object. Sometimes, the test may fail if run at a very specific time of day, such as midnight.

'The Sleeper' is not to be confused with the 'Wait And See' anti-pattern.
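A minimal JUnit sketch (hypothetical names; the hard-coded date is only an example): the "future" date is fine when the test is written, then the test erupts the day it slips into the past.

import static org.junit.Assert.assertTrue;

import java.util.Calendar;
import java.util.Date;
import org.junit.Test;

public class SleeperTest {
    @Test
    public void testExpiryIsInTheFuture() {
        // "far future" as of the time the test was written...
        Calendar calendar = Calendar.getInstance();
        calendar.set(2012, Calendar.JANUARY, 1);
        Date expiry = calendar.getTime();

        // ...so it passes quietly for years, then fails forever after
        assertTrue(expiry.after(new Date()));
    }
}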

That code will have been replaced long before the year 2000 -- Many developers in 1960

npellow
I'd rather call this a dormant Volcano :).. but I know what you're talking about.. e.g. a date chosen as a future date for a test at the time of writing will become a present/past date when that date goes by.. breaking the test. Could you post an example.. just to illustrate this.
Gishu
@Gishu - +1 . I was thinking the same, but couldn't decide between the two. I updated the title to make this a little clearer ;)
npellow
+6  A: 

Wait and See

A test that runs some set up code and then needs to 'wait' a specific amount of time before it can 'see' if the code under test functioned as expected. A testMethod that uses Thread.sleep() or equivalent is most certainly a "Wait and See" test.

Typically, you may see this if the test is testing code which generates an event external to the system, such as sending an email, making an HTTP request, or writing a file to disk.

Such a test may also be a Local Hero since it will FAIL when run on a slower box or an overloaded CI server.

The Wait and See anti-pattern is not to be confused with The Sleeper.
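A minimal JUnit sketch (hypothetical names): the sleep is a guess, so the test is either slow or flaky. A CountDownLatch handshake, as suggested in the comments below, removes the guesswork.

import static org.junit.Assert.assertTrue;

import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.Test;

public class WaitAndSeeTest {
    @Test
    public void testBackgroundWork() throws Exception {
        final AtomicBoolean done = new AtomicBoolean(false);
        new Thread(new Runnable() {
            public void run() { done.set(true); }
        }).start();

        // wait... and see. Too short and the test flickers on a loaded
        // CI box; long enough to be safe and the whole suite crawls.
        Thread.sleep(1000);
        assertTrue(done.get());
    }
}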

Then, a miracle occurs -- Quote from cartoon "You should be more specific in step #2"

npellow
Hmm.. well, I use something like this. How else would I be able to test multi-threaded code?
Gishu
@Gishu, do you really want to unit test multiple threads running concurrently? I try to just unit test whatever the run() method does in isolation. An easy way to do this is by calling run() - which will block, instead of start() from the unit test.
npellow
@Gishu use CountDownLatches, Semaphores, Conditions or the like, to have the threads tell each other when they can move on to the next level.
Christian Vest Hansen
An example: http://madcoderspeak.blogspot.com/2008/11/my-solution-for-unclebobs-mark-iv_08.html Brew button evt. The observer is polling at intervals and raising changed events.. in which case I add a delay so that the polling threads gets a chance to run before the test exits.
Gishu
although I agree.. I don't need this often..
Gishu
I think the cartoon link is broken.
Andrew Grimm
+5  A: 

I'll believe it when I see some flashing GUIs
An unhealthy fixation/obsession with testing the app via its GUI 'just like a real user'

Testing business rules through the GUI is a terrible form of coupling. If you write thousands of tests through the GUI, and then change your GUI, thousands of tests break.
Rather, test only GUI things through the GUI, and couple the GUI to a dummy system instead of the real system, when you run those tests. Test business rules through an API that doesn't involve the GUI. -- Bob Martin

“You must understand that seeing is believing, but also know that believing is seeing.” -- Denis Waitley

Gishu
If you thought flashing GUIs were wrong, I saw someone who wrote a JUnit test that started up the GUI and needed user interaction to continue. It hung the rest of the test suite. So much for test automation!
Spoike
I disagree. Testing GUIs is hard, but they are also a source of errors. Not testing them is just lazy.
Ray
the point here is not that you shouldn't test GUIs, but rather that you shouldn't test only via the GUI. You can perform 'headless' testing without the GUI. Keep the GUI as thin as possible - use a flavor of MVP - and you can then get away with not testing it at all. If you find that you have bugs cropping up in the thin GUI layer all the time, cover it with tests.. but most of the time, I don't find it worth the effort. GUI 'wiring' errors are usually easier to fix...
Gishu
+4  A: 

The Flickering Test (Source : Romilly Cocking)

A test which just occasionally fails, not at specific times, and is generally due to race conditions within the test. Typically occurs when testing something that is asynchronous, such as JMS.

Possibly a superset of the 'Wait and See' anti-pattern and 'The Sleeper' anti-pattern.

The build failed, oh well, just run the build again. -- Anonymous Developer

@Stuart - a must-see video describing this is "Car Stalled - Try Now!": http://www.videosift.com/video/Car-Stalled-Try-it-now-Classic-Kids-in-the-Hall-sketch
This pattern could also be called "Try Now!", or just "The Flakey Test".
npellow
I once wrote a test for a PRNG that ensured a proper distribution. Occasionally, it would fail at random. Go figure. :-)
Christian Vest Hansen
Wouldn't this be a *good* test to have? If a test ever fails, you need to track down the source of the problem. I fought with someone about a test which failed between 9p and midnight. He said it was random/intermittent. It was eventually traced to a bug dealing with timezones. Go figure.
trenton
@Christian Vest Hansen: couldn't you seed it?
Andrew Grimm
+4  A: 

The Dead Tree

A test where a stub was created, but the test itself wasn't actually written.

I have actually seen this in our production code:

class TD_SomeClass {
  public void testAdd() {
    assertEquals(1+1, 2);
  }
}

I don't even know what to think about that.

Milan Ramaiya
:) - also known as Process Compliance Backdoor.
Gishu
+2  A: 

got bit by this today:

Wet Floor:
The test creates data that is persisted somewhere, but the test does not clean up when finished. This causes tests (the same test, or possibly other tests) to fail on subsequent test runs.

In our case, the test left a file lying around in the "temp" dir, with permissions from the user that ran the test the first time. When a different user tried to test on the same machine: boom. In the comments on James Carr's site, Joakim Ohlrogge referred to this as the "Sloppy Worker", and it was part of the inspiration for "Generous Leftovers". I like my name for it better (less insulting, more familiar).
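A minimal JUnit sketch of the same failure mode (hypothetical names):

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;
import org.junit.Test;

public class WetFloorTest {
    @Test
    public void testExport() throws Exception {
        File out = new File(System.getProperty("java.io.tmpdir"), "export.dat");
        FileWriter writer = new FileWriter(out);
        try {
            writer.write("data");
        } finally {
            writer.close();
        }
        assertTrue(out.exists());
        // no delete, no teardown: the file (and the first runner's
        // permissions on it) lies in wait for the next test run or user
    }
}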

Zac Thompson