I'm writing a module that has some functions dealing with text files. I'm new to testing, so I decided to go with Test::More. Here's what my test file looks like now:

use 5.10.0;
use strict;
use warnings;
use Test::More 'no_plan';

use mymod;

my $file_name = "test.file";

sub set_up {
    my $self = shift;
    open(my $handle,">",$file_name) or die "could not create file test.file $!\n";
    # generate a sample text file here
    close($handle);
}

sub tear_down {
    my $self = shift;
    unlink($file_name) or die "could not delete $file_name $!\n";
}

set_up();

open(my $handle,"<",$file_name) or die "could not open $file_name $!\n";

my @lines = mymod->perform($handle);

is_deeply(\@lines,["expected line","another expected line"]);

close($handle);

tear_down();

Is this a good way of performing tests? Is it ok to deal with generating the sample input file in my test?

By the way, I started writing this as a Test::Unit test, and then switched to Test::More. That's why the set_up and tear_down functions are there.

+5  A: 

Using Test::More's 'no_plan' option makes testing less reliable: if some of your tests are silently skipped or the script exits early, you can't tell. It's best to declare a fixed number of tests up front; if that isn't possible, you can call the done_testing function at the end instead (but that requires a recent version of Test::More, 0.88 or later).
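To illustrate the two alternatives to 'no_plan', here is a minimal sketch (the test assertions themselves are placeholders):

```perl
use strict;
use warnings;
# Declare up front how many tests will run; the harness complains
# if a different number actually executes.
use Test::More tests => 2;

is( 1 + 1, 2, 'addition works' );
is( uc('foo'), 'FOO', 'uc upper-cases a string' );

# Alternatively, with Test::More 0.88 or later, omit the plan
# and make done_testing() the last statement:
#   use Test::More;
#   ... tests ...
#   done_testing();
```

With a fixed plan, a test script that dies halfway through fails loudly instead of appearing to pass.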

ETA: I don't see the point of the open-close-open-close-unlink dance you're doing. You'd be better off opening a tempfile, filling it, and using that for your tests.
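A sketch of the tempfile approach with the core File::Temp module (mymod->perform is the method from the question):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# tempfile() creates the file for us; with UNLINK => 1 it is
# deleted automatically when the program exits, so no tear_down
# routine is needed.
my ($fh, $filename) = tempfile( UNLINK => 1 );
print {$fh} "expected line\nanother expected line\n";

# Rewind so the code under test reads what we just wrote.
seek $fh, 0, 0;

# my @lines = mymod->perform($fh);
```

This replaces the write-close-reopen sequence with a single handle that is both written and read.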

Leon Timmermans
what about the fact that I'm generating a text file for my module to work with? is that acceptable?
Geo
Generating a text file as part of your test is OK, but if you have bugs with your generation code, then your test results can't be trusted. Usually it's better to have a pre-defined test file with known contents, if at all possible.
Rudedog
http://www.shadowcat.co.uk/blog/matt-s-trout/a-cunning-no_plan/
innaM
+1  A: 

Besides the use of no_plan, which has already been commented on:

Generating a file to be read during unit testing can be acceptable, but it is generally preferable to avoid touching the filesystem (or any other "slow" resource) in unit tests, because doing so slows the tests down.

This can become problematic if a lot of the unit tests read or write files and the number of tests grows large. Unit tests should be unobtrusive and run in a snap.

If the execution time of your unit tests becomes a problem, you can either extract the tests that access the filesystem into an integration test suite that you run less often, or modify your code to separate reading the file from processing its content. That way you can test the processing independently of the file reading, with the test data stored in an array of lines in your unit test code.
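A minimal sketch of that separation (read_lines and process_lines are hypothetical names, not part of the question's mymod):

```perl
use strict;
use warnings;

# Thin I/O layer: slurp lines from any filehandle.
sub read_lines {
    my ($handle) = @_;
    chomp( my @lines = <$handle> );
    return @lines;
}

# Pure processing, trivially unit-testable without a file.
# As an example transformation, drop empty lines.
sub process_lines {
    my (@lines) = @_;
    return grep { length } @lines;
}

# In a unit test, bypass the filesystem entirely:
my @result = process_lines( "keep me", "", "me too" );
```

Only the thin read_lines wrapper then needs a real file (or an in-memory handle) to be tested.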

This sort of code also tends to be more reusable: what you read from a file today may come from another source (e.g. the network, a database) tomorrow.

philippe
+7  A: 

You can "open" a string as a filehandle, so you can still feed your method a filehandle without creating a physical file. Put your test content in a string (ideally an array of strings, one for each data sample to test against) and skip temp files entirely:

my @testdata = (
    "test data 1",
    "test data 2",
    # ...
);

foreach my $data (@testdata)
{
    open my $datahandle, "<", \$data or die "Cannot open handle to string: $!";
    my @lines = mymod->perform($datahandle);
    # ...
}
Ether
Wow! I did not know `open` can "open" a string :)
Geo
@Geo: Starting with perl 5.8
tsee