Hello,

I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I want to modularize it and parallelize the tests. I'm also considering rewriting it in Python.

Here is what I need to do:

  • I have a list of tests, with the following attributes (a rough sketch of such a definition appears after this list):
    • uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    • depends: an associative array of tests/values that this test depends on;
    • join: a list of DB joins to be added when selecting items to process in this test;
    • depends_db: additional conditions to add to the DB request when selecting items to process in this test.
  • The program builds a dependency tree, beginning with the tests that have no dependencies;
  • for each test:
    • a list of items is selected from the database using the conditions (results of the tests it depends on, joins and depends_db);
    • the list of items is sent to the URI (using POST or stdin);
    • the result is retrieved as a YAML file listing the state and comments of the test for each tested item;
    • the results are stored in the DB;
    • the test returns, allowing dependent tests to be performed.
  • the program generates reports (CSV, DB, graphviz) of the performed tests.
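
To make this concrete, here is a rough Perl sketch of how such tests could be declared and dispatched. The attribute names match the list above; the URIs, the select_items() stub, the assumed YAML shape and the scheduling loop are made up for illustration, and only the HTTP transport is shown:

```perl
#!/usr/bin/perl
# Sketch only: URIs, select_items() and the YAML keys are hypothetical;
# the stdin/SSH transports and the real DB layer are omitted.
use strict;
use warnings;
use LWP::UserAgent;
use YAML qw(Load);

my %tests = (
    backup => {
        uri        => 'http://backup.example.com/check',      # hypothetical
        depends    => {},                                      # no prerequisites
        join       => ['JOIN backups b ON b.host_id = h.id'],
        depends_db => ['h.active = 1'],
    },
    dns => {
        uri        => 'http://localhost/dns_check',            # hypothetical
        depends    => { backup => 'ok' },                       # wait for the backup test
        join       => [],
        depends_db => [],
    },
);

# Stub standing in for the real DB query that would use the join/depends_db
# attributes and the results of the tests this one depends on (hypothetical).
sub select_items {
    return qw(host1.example.com host2.example.com);
}

# Send the selected items to the test URI and parse the returned YAML,
# assumed to look like { item => { state => '...', comment => '...' } }.
sub run_test {
    my ($name, $test, @items) = @_;
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->post($test->{uri}, Content => join("\n", @items));
    die "$name: " . $res->status_line unless $res->is_success;
    return Load($res->decoded_content);
}

# Naive scheduling: keep running every test whose dependencies have all
# completed; assumes the dependency graph is acyclic.
my %results;
while (keys %results < keys %tests) {
    for my $name (grep { !exists $results{$_} } keys %tests) {
        next if grep { !exists $results{$_} } keys %{ $tests{$name}{depends} };
        my @items = select_items($name, \%tests, \%results);
        $results{$name} = run_test($name, $tests{$name}, @items);
        # storing the results in the DB and the report generation would go here
    }
}
```

Parallelizing would then amount to running the ready tests of each pass concurrently (forks, threads or an event loop) instead of one after the other.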

The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

  • backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backups went well;
  • DNS: hosted on the local machine, called via stdin, checks if the machines' FQDNs have valid DNS entries.

Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

+4  A: 

This sort of testing is not unit testing.

If you decide not to follow rjh's advice in order to keep most of the code you've already written, then convert your tests to use the standard test toolchain.

daxim
The goal is not so much to run tests as to store the results in the DB so they can be displayed on a website. Also, most of my tests are launched via HTTP; would that work with Test::Class?
Raphink
No, you have to rework that, or write a harness that makes HTTP requests instead of running program files.
daxim
I second the idea to use TAP. Get your results into TAP and you'll have lots of options for storage (like TAP archives) and display (the aforementioned TAP::Formatter::HTML or Smolder).
mpeters
I second daxim's suggestion to use Perl's testing modules. To get your web testing covered, use Test::WWW::Selenium (http://search.cpan.org/dist/Test-WWW-Selenium/lib/Test/WWW/Selenium.pm). The variety and scope of testing modules on CPAN is amazing. For anything that doesn't have a test module already, you can use Test::Builder to make your own custom tools that are compatible.
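As a rough illustration (the backup_ok() name and the expected state/comment hash are made up, not taken from your setup), a Test::Builder-based helper could look like:

```perl
# Sketch of a custom test function built on Test::Builder; meant to be
# called from a normal test file that declares its own plan.
use strict;
use warnings;
use Test::Builder;

my $Test = Test::Builder->new;

# Emits one TAP result for a host, based on the entry returned for it
# (hypothetical { state => ..., comment => ... } shape).
sub backup_ok {
    my ($host, $result) = @_;
    my $ok = defined $result && $result->{state} eq 'ok';
    $Test->ok($ok, "backup for $host");
    $Test->diag($result->{comment}) if !$ok && $result && $result->{comment};
    return $ok;
}
```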
daotoad
TAP looks interesting; however, each request to a test returns several results, one per item. For example, when testing backup, I send the whole list of machines to the test and it returns a YAML file with the state of each machine's backup. This is done to avoid making 3000 HTTP requests to test 3000 machines. Selenium might not be very useful for me, as my goal is not to test whether a website is up (or its content), but to communicate with tests running on distant machines via HTTP, sending an HTTP request that contains the items to test and retrieving the result of the test for each item.
Raphink
That case is easy, too. You expect the returned YAML to have a certain structure and content. Write a test file (Test::More and LWP are enough) that makes the request and analyses the YAML ([Kwalify](http://p3rl.org/Kwalify) could save you some time here). The test file thus drives your existing test infrastructure wholesale. For each passing test, emit `pass`; for each failing test, `fail`. For the other cases you have to decide yourself how to properly map them to TAP (temporarily down for scheduled maintenance = SKIP?); you are the expert, after all.
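A minimal sketch of such a test file; the endpoint, the request payload and the state/comment keys are assumptions, not your actual setup:

```perl
#!/usr/bin/perl
# Sketch only: URI, payload format and YAML keys are assumed.
use strict;
use warnings;
use Test::More;
use LWP::UserAgent;
use YAML qw(Load);

my @machines = qw(host1.example.com host2.example.com);

my $ua  = LWP::UserAgent->new;
my $res = $ua->post(
    'http://backup.example.com/check',          # hypothetical test URI
    Content => join("\n", @machines),
);
BAIL_OUT('request failed: ' . $res->status_line) unless $res->is_success;

# Assumed shape: { hostname => { state => 'ok'|'fail'|'skip', comment => '...' } }
my $report = Load($res->decoded_content);

for my $host (@machines) {
    my $r = $report->{$host};
    if (!defined $r) {
        fail("$host: no result returned");
    }
    elsif ($r->{state} eq 'skip') {
        SKIP: { skip("$host: $r->{comment}", 1) }
    }
    elsif ($r->{state} eq 'ok') {
        pass("$host: $r->{comment}");
    }
    else {
        fail("$host: $r->{comment}");
    }
}

done_testing();
```

Run it under prove and the resulting TAP is ready for the archive and formatter tools mentioned above.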
daxim