I have a couple of modules (DZP::Catalyst and DZP::OurPkgVersion) whose purposes both involve writing files out to disk. I'm not sure how to test them. Are there any good strategies for testing files written out to disk, or any place I could go to read up on it?
Well, in this particular case (Dist::Zilla plugins), you're going to want to use Dist::Zilla::Tester. It takes care of much of the grunt work of creating a temporary directory, populating it with files, and cleaning up afterwards. For example, see the tests from my DZP::CJM or the tests from Dist::Zilla itself, especially the plugins directory.
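For instance, a minimal test using the Test::DZil wrapper (which ships with Dist::Zilla and builds on Dist::Zilla::Tester) might look like the sketch below. The corpus directory, the sample module, and the plugin list are placeholders for this sketch, not anything from your actual distributions:

    use strict;
    use warnings;
    use Test::More;
    use Test::DZil;    # convenience wrapper around Dist::Zilla::Tester

    # Build a throwaway dist in a temp directory with the plugin under test.
    # 'corpus/DZT' is a placeholder pointing at a minimal corpus directory.
    my $tzil = Builder->from_config(
        { dist_root => 'corpus/DZT' },
        {
            add_files => {
                'source/dist.ini' => simple_ini('GatherDir', 'OurPkgVersion'),
                'source/lib/DZT/Sample.pm' =>
                    "package DZT::Sample;\n# VERSION\n1;\n",
            },
        },
    );

    $tzil->build;

    # Inspect what the plugin actually wrote into the built dist.
    my $content = $tzil->slurp_file('build/lib/DZT/Sample.pm');
    like($content, qr/\$VERSION/, 'a $VERSION line was written out');

    done_testing;

The tester builds into a temporary directory and cleans it up for you, so you never have to manage the on-disk scratch space yourself.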
It depends somewhat on the module, but my general strategy is:
Ensure that the file content logic is 100% separate - to the point of being in different methods - from the file mechanics (e.g. choosing the directory, opening files, closing files, error handling).
Ensure that the file mechanics are 100% flexible, e.g. the directory and filename can be chosen by the external driver.
Write tests for the file mechanics by simply opening the specified file in the specified directory, closing it, and making sure that no errors happen and that the expected file exists and has size zero; see the sketch below.
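A mechanics-only test could look something like this; My::FileWriter and its open_file/close_file methods are stand-in names for whatever interface your module actually exposes:

    use strict;
    use warnings;
    use Test::More;
    use File::Temp qw(tempdir);
    use File::Spec;

    # Exercise only the file mechanics: open and close a file in a
    # caller-chosen directory, then confirm it exists and is empty.
    my $dir    = tempdir(CLEANUP => 1);
    my $writer = My::FileWriter->new(directory => $dir, filename => 'out.txt');

    $writer->open_file;
    $writer->close_file;

    my $path = File::Spec->catfile($dir, 'out.txt');
    ok(-e $path, 'expected file was created');
    ok(-z $path, 'file is empty, since no content logic ran');

    done_testing;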
Create an array of test data, with each element of the array consisting of three parts:
Input data for the file content logic, possibly coupled with test configuration indicating which of the content-logic methods to call on that data, if warranted.
The expected file name to be set.
The expected file contents, in the form of a tarball of the expected files (exact files with the exact expected content and the correct expected names).
The expected-results tarballs should live in a separate sub-directory (say "expected_results") under the directory where your test script lives.
You need a tarball in case your file-generation logic produces more than one file. A sketch of such a test table follows this list.
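Concretely, the test table might look like this; all of the names, methods, and paths here are illustrative only:

    # Each element: input for the content logic (plus which methods to call),
    # the expected filename, and the tarball of expected output files.
    my @tests = (
        {
            name             => 'single config file',
            input            => { app_name => 'MyApp' },
            methods          => ['generate_config'],
            expected_file    => 'myapp.conf',
            expected_tarball => 't/expected_results/single_config.tar.gz',
        },
        {
            name             => 'config plus README',
            input            => { app_name => 'MyApp' },
            methods          => [ 'generate_config', 'generate_readme' ],
            expected_file    => 'myapp.conf',
            expected_tarball => 't/expected_results/config_plus_readme.tar.gz',
        },
    );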
Then, run a loop over each test in the test array you previously created:
Create a new "actual results" temp directory (or clean up the one from the prior test).
Set the directory in your module to that temp directory; set the filename of your module to the expected filename from the test data.
Run the file-opener method (previously tested).
Run the content-generation logic from the module, using the test's method directions (if applicable) and the test's input data.
Run the file-closer method (previously tested).
Create a "temp expected results" temp directory (or clean up the one from the last test).
Copy the test's expected-results tarball from the "expected_results" sub-directory to the "temp expected results" temp directory created in the last step.
Untar that tarball in the "temp expected results" temp directory and delete the tarball from there.
Directory-diff the "temp expected results" temp directory against the "actual results" temp directory (i.e. ensure both have a 100% identical list of files and that each file's contents are 100% the same), either via native Perl or by running diff via system() calls. A sketch of this loop follows.
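Here is one way that loop might look, reusing the hypothetical @tests table and My::FileWriter interface sketched above. Archive::Extract unpacks the tarball straight into the temp directory (standing in for the copy/untar/delete steps), and diff -r via system() does the directory comparison:

    use strict;
    use warnings;
    use Test::More;
    use File::Temp qw(tempdir);
    use Archive::Extract;

    for my $t (@tests) {
        # Fresh temp directories for actual and expected output.
        my $actual_dir   = tempdir(CLEANUP => 1);
        my $expected_dir = tempdir(CLEANUP => 1);

        # Point the module (hypothetical interface) at the temp directory
        # and the expected filename from the test data.
        my $writer = My::FileWriter->new(
            directory => $actual_dir,
            filename  => $t->{expected_file},
        );

        $writer->open_file;
        $writer->$_($t->{input}) for @{ $t->{methods} };   # previously tested content methods
        $writer->close_file;

        # Unpack the expected-results tarball into the expected temp directory.
        my $ae = Archive::Extract->new(archive => $t->{expected_tarball});
        $ae->extract(to => $expected_dir) or die $ae->error;

        # Directory-diff: identical file lists and identical contents.
        my $status = system('diff', '-r', $expected_dir, $actual_dir);
        is($status, 0, "$t->{name}: generated files match expected files");
    }

    done_testing;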
Since the logic above is very generic, I generally abstract most of it away into a "Test::FileGenerator" module that is reused by all of the unit and integration tests that exercise file-generation ability. Its interface might look like the sketch below.
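The helper's name and methods here are purely illustrative (this is not a CPAN module), but the shape of the interface is the point:

    use Test::FileGenerator;   # hypothetical in-house helper module

    my $tfg = Test::FileGenerator->new(
        writer_class  => 'My::FileWriter',
        expected_root => 't/expected_results',
    );

    # Runs the open/generate/close/untar/diff loop for every test case.
    $tfg->run_file_tests(\@tests);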