I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing().

Upgrading to a newer Test::More is out of the question for practical reasons. I am also trying to avoid no_plan - it's generally a bad idea not to catch your unit test exiting prematurely, say because some logic didn't execute when you expected it to.

What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used?


Details:

My unit tests usually take the form of:

use Test::More;
my @test_set = (
   [ "Test #1", $param1, $param2, ... ]
  ,[ "Test #1", $param1, $param2, ... ]
  # ,...
);

foreach my $test (@test_set) {
    run_test($test);
}

sub run_test {
    # $expected_tests += count_tests($test);
    ok(test1($test)) || diag("Test1 failed");
    # ...
}

The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known.


My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

use Test::More;
BEGIN {
    our @test_set = (); # Same set of tests as above
    my $expected_tests = 0;
    foreach my $test (@test_set) {
        $expected_tests += count_tests($test);
    }
    plan tests => $expected_tests;
}
our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(
foreach my $test (@test_set) { run_test($test); } # Same
sub run_test {}  # Same

I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - once in BEGIN {} and once after it.


Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated) at runtime, once the count is known. I'm not sure whether that is any more idiomatic.
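Roughly, I imagine that variant looking like the sketch below (count_tests() and run_test() as in the snippets above); the plan just has to be declared before the first assertion runs:

use Test::More;   # no plan given yet

my @test_set = (
   [ "Test #1", "param1", "param2" ]
  ,[ "Test #2", "param1", "param2" ]
);

# Compute the count at runtime, then declare the plan through the
# underlying Test::Builder object before any ok()/is() runs.
my $total_tests_calculated = 0;
$total_tests_calculated += count_tests($_) for @test_set;
Test::More->builder->plan( tests => $total_tests_calculated );

run_test($_) for @test_set;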

+3  A: 

Don't hack around old versions, just ship a copy of Test::More with your tests. It has no dependencies. Simply install it into t/lib of your distribution (you can build it and then copy blib/lib) and then use lib "t/lib" in your tests.
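Something along these lines in each test file (the file name is just an example, and this assumes the bundled copy really does land under t/lib):

# t/basic.t
use lib 't/lib';   # prepends t/lib to @INC, so the bundled Test::More wins
use Test::More;    # new enough to have done_testing()

ok(1, "sanity check");
done_testing();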

Schwern
@Schwern - software engineering actually looks at test code. Were I to do that, the **best**-case scenario would be a big chew-out by the managing director running that team. Sadly, I play by the corporate game's rulebook. If you want to know what happens when one doesn't, ask Mr. Original Just Another Perl Hacker (http://www.lightlink.com/spacenka/fors/) :(
DVK
@DVK Chewed out for what? The copy of Test::More is specific to your tests. It has no effect on anyone else's code.
Schwern
@Schwern - using local copies of CPAN modules is... how should I put it... frowned upon.
DVK
@DVK: What's funny is that corporate types will use a quite similar case to what they are rejecting in order to shoot you down: "If you put a special Test::More here, suppose we wanted to upgrade Test::More later, code depending on that might be out of sync with other code... blah blah blah..." and you may even see this "lack of understanding" on your next review.
Axeman
@DVK They won't let you upgrade modules and they won't let you use local copies, even in tests? You have a problem and it's not Test::More. Don't let fear keep you from revising a broken system, especially if it's just policy. And don't invoke Randal's case as some sort of justification for inaction; making a local install of a Perl module does not apply. Push back. Point out that you're wasting time hacking around missing Test::More features and having to write dodgy versions. Point out that real productivity and quality is being lost because local copies are "frowned upon".
Schwern
I can identify with DVK here... some folks just have an inherent distrust of modules and libraries. Don't ask me why.
Zaid
There are always companies looking for Perl programmers. If you don't work at a company that lets you work smartly, find a company that does.
brian d foy
@brian - the pros of this company majorly outweigh the cons. Thus I'm posting a "how to best work around a minor problem that I already know how to fix" question, as opposed to careeroverflow.com candidate questions :)
DVK
@Schwern - is there any data (I think you're the owner) on the backwards compatibility of Test::More and company? The request to upgrade might go through a lot more smoothly if it's 100% backwards compatible (both by design, and by the current version passing all of the older version's unit tests :) )
DVK
@DVK Incompatibilities for Test::More are logged in the Changes file as "Features Changed" or "Feature Changes". They are all extremely minor and mostly deal with error message formatting. Occasionally a bug fix will reveal failing tests that passed before. I refer to this as "fixage". Examples are the use_ok fix in 0.71 and cmp_ok fix in 0.85_01. New versions will not pass the old version's tests, they are not all blackbox nor were they all correct. Test::More is one of the most vital, widely used, derived from and best tested modules on CPAN. It even tests 3rd party dependent modules.
Schwern
+1  A: 

Here's a fairly idiomatic approach:

use warnings;
use strict;
use Test::More;
use List::Util 'sum';

sub count_tests {1}

BEGIN {
    plan tests => sum map {
        count_tests($_)
    } @test::set = (
        [ "Test #1", '$param1, $param2, ...' ],
        [ "Test #1", '$param1, $param2, ...' ],
    )
}

run_test($_) for @test::set;

Using a fully qualified name avoids the need for our; you could also use @::test_set (i.e. the main package) if you are worried about putting something into the test:: package. Using map and sum from List::Util shortens the code in the BEGIN block a bit. The functional form also inverts the flow of data, which allows all of the tests to be declared at the end while keeping the plan call at the top, as a reminder of why the BEGIN block was used in the first place.

Eric Strom
Eric - I like the idea of putting something in a test package. +1
DVK
+1  A: 

How about using a closure to return the test sets, which allows you to avoid the awkwardness of a package variable? Here's an illustration:

use strict;
use warnings;
use Test::More;

BEGIN {
    my @ts = (
        [ 'Test 1', 1, 1 ],
        [ 'Test 2', 3, 3 ],
    );

    plan tests => scalar @ts;

    sub test_sets { return @ts }
}

for my $ts ( test_sets() ){
    run_test($ts);
}

sub run_test {
    my ($msg, $val, $exp) = @{shift()};
    is $val, $exp, $msg;
}
FM
@FM - Yay! Closures! +1
DVK
+1  A: 

If all you need is to calculate the plan based on a table of tests, that's trivial.

use Test::More;

my $Asserts_Per_Set = 10;
my %Tests = (
    "Test #1" => { foo => "bar", this => "that" },
    "Test #2" => { foo => "yar", this => 42     },
    ...
);

plan tests => keys(%Tests) * $Asserts_Per_Set;

for my $name (keys %Tests) {
    run_tests($name, $Tests{$name});
}

If for some reason run_tests needs to run a variable number of tests based on the data, make use of skip rather than an if so it always runs a consistent number of tests.

SKIP: {
    skip "Can't run foo test on frobnitz", 2 if $test->{foo} and $test->{frobnitz};

    is foo(), $test->{foo};
    is bar(), $test->{foo} + 9;
}

For anything more complicated, add to the plan as you go by making use of BEGIN blocks.

use Test::More;
my $Count;

BEGIN { $Count += X }

...run X tests...

BEGIN { $Count += Y }

...run Y tests...

BEGIN { plan tests => $Count }

This at least keeps the test count calculations next to the block of tests they are counting, instead of having it all in one big unmaintainable blob at the top. It's all highly visible and requires no magic except BEGIN.

Incidentally, newer versions of Test::More have subtest, which better handles the problem of breaking a test up into multiple plans.
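A rough sketch of what that looks like on a newer Test::More (foo(), bar() and frobnitz() are just stand-ins for whatever you are testing):

use Test::More tests => 2;     # each subtest counts as one test here

subtest "foo" => sub {
    plan tests => 2;           # a subtest gets its own plan
    is foo(), "bar";
    is bar(), "bar9";
};

subtest "frobnitz" => sub {
    plan 'no_plan';            # or done_testing() at the end
    ok frobnitz();
};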

Schwern