I am developing some Python modules that use a MySQL database to insert data and produce various types of reports. I'm doing test-driven development, and so far I run:
- some CREATE / UPDATE / DELETE tests against a temporary database that is thrown away at the end of each test case, and
- some report-generation tests doing exclusively read-only operations, mainly SELECT, against a copy of the production database, written on the (in this case valid) assumption that certain things in my database aren't going to change.
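For context, the throwaway-database pattern in the first bullet can be sketched with `unittest`'s `setUp`/`tearDown`. This is only an illustration, not my actual code: sqlite3's in-memory database stands in for the temporary MySQL schema, and the table and test names are made up.

```python
import sqlite3
import unittest


class CreateUpdateDeleteTests(unittest.TestCase):
    """Each test gets a fresh database that is thrown away afterwards."""

    def setUp(self):
        # Stand-in for creating a temporary MySQL schema; an in-memory
        # sqlite3 database gives the same throwaway semantics.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)"
        )

    def tearDown(self):
        # The whole database disappears when the connection closes.
        self.conn.close()

    def test_insert(self):
        self.conn.execute("INSERT INTO orders (total) VALUES (9.99)")
        count = self.conn.execute(
            "SELECT COUNT(*) FROM orders"
        ).fetchone()[0]
        self.assertEqual(count, 1)
```

Run with `python -m unittest`; because the database is rebuilt in `setUp`, the tests stay independent of one another.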
Some of the SELECT operations are running slowly, so my tests take more than 30 seconds, which spoils the flow of test-driven development. I can see two choices:
- put only a small fraction of my data into the copy of the production database that I use for testing the report generation, so that the tests run fast enough for test-driven development (under about 3 seconds suits me best); otherwise I'd have to regard the tests as failures. I'd then need to do separate performance testing.
- fill the production database copy with as much data as the main test database, and add timing code that fails a test if it takes too long.
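If the second route were taken, the timing check could be as simple as a context manager on the test case that fails the surrounding test when a block exceeds a time budget. A minimal sketch, assuming `unittest`; `MAX_SECONDS` and the dummy workload are illustrative placeholders for the real SELECTs:

```python
import time
import unittest
from contextlib import contextmanager

MAX_SECONDS = 3.0  # illustrative budget; tune to taste


class TimeBudgetTestCase(unittest.TestCase):
    @contextmanager
    def time_budget(self, limit=MAX_SECONDS):
        """Fail the surrounding test if the block runs longer than `limit`."""
        start = time.perf_counter()
        yield
        elapsed = time.perf_counter() - start
        if elapsed > limit:
            self.fail(f"took {elapsed:.2f}s, budget was {limit:.2f}s")


class ReportTests(TimeBudgetTestCase):
    def test_fast_report(self):
        with self.time_budget(limit=1.0):
            # Stand-in for the slow SELECT against the production copy.
            sum(range(10_000))
```

One caveat with this approach: wall-clock budgets make tests flaky on loaded machines, which is part of the trade-off between the two options.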
I'm not sure which approach to take. Any advice?