Since I started using RSpec, I've had a problem with the notion of fixtures. My primary concerns are these:
- I use testing to reveal surprising behavior. I'm not always clever enough to enumerate every possible edge case for the examples I'm testing. Using hard-coded fixtures seems limiting because it only tests my code with the very specific cases I've imagined. (Admittedly, my imagination also limits which cases I test.)
- I use testing as a form of documentation for the code. If I have hard-coded fixture values, it's hard to see what a particular test is trying to demonstrate. For example:
    describe Item do
      describe '#most_expensive' do
        it 'should return the most expensive item' do
          Item.most_expensive.price.should == 100
          # OR
          # Item.most_expensive.price.should == Item.find(:expensive).price
          # OR
          # Item.most_expensive.id.should == Item.find(:expensive).id
        end
      end
    end

Using the first method gives the reader no indication of what the most expensive item is, only that its price is 100. All three methods ask the reader to take it on faith that the fixture :expensive is the most expensive one listed in fixtures/items.yml. A careless programmer could break the tests by creating an Item in before(:all), or by inserting another fixture into fixtures/items.yml. If that is a large file, it could take a long time to figure out what the problem is.
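To make the fragility concrete, here is a plain-Ruby sketch (no Rails) of how the hard-coded `== 100` assertion silently depends on :expensive staying the top price. The hash stands in for the loaded fixtures file, and the :cheap and :pricier entries are hypothetical:

```ruby
# Stand-in for the rows loaded from fixtures/items.yml.
fixtures = {
  :cheap     => { 'name' => 'pencil', 'price' => 1 },
  :expensive => { 'name' => 'laptop', 'price' => 100 }
}

# What a hypothetical most_expensive would return today:
most_expensive = fixtures.values.max_by { |item| item['price'] }
puts most_expensive['price']   # 100 -- the hard-coded assertion passes

# A careless addition to the fixtures file breaks the spec
# without anyone touching the spec itself:
fixtures[:pricier] = { 'name' => 'car', 'price' => 5000 }
most_expensive = fixtures.values.max_by { |item| item['price'] }
puts most_expensive['price']   # 5000 -- the '== 100' expectation now fails
```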
One thing I've started to do is add a 'generate_random' class method to all of my models. This method is only available when I'm running my specs. For example:
    class Item
      def self.generate_random(params = {})
        Item.create(
          :name  => params[:name]  || String.generate_random, # my own helper, not core Ruby
          :price => params[:price] || rand(100)
        )
      end
    end
(The specific details of how I do this are actually a bit cleaner. I have a class that handles the generation and cleanup of all models, but this code is clear enough for my example.) So in the above example, I might test as follows. A warning for the faint of heart: my code relies heavily on use of 'before(:all)':
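For what it's worth, the generation-and-cleanup class mentioned above looks roughly like this. This is a minimal plain-Ruby sketch; the name TestDataFactory and its methods are hypothetical, and a real version would hook cleanup! into an after(:all) block:

```ruby
# Hypothetical helper that builds records via each model's
# generate_random and remembers them for teardown.
class TestDataFactory
  def initialize
    @created = []
  end

  # Create a record through the model's generate_random class method
  # and track it so it can be destroyed after the spec run.
  def generate(model, params = {})
    record = model.generate_random(params)
    @created << record
    record
  end

  # Destroy everything this factory created, newest first.
  def cleanup!
    @created.reverse_each(&:destroy)
    @created.clear
  end
end
```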
    describe Item do
      describe '#most_expensive' do
        before(:all) do
          @items = []
          3.times { @items << Item.generate_random(:price => 50 + rand(50)) }
        end

        it 'should return the most expensive item' do
          sorted = @items.sort { |a, b| b.price <=> a.price }
          expensive = Item.most_expensive
          expensive.should be(sorted[0])
          expensive.price.should >= 50
        end
      end
    end
This way, my tests better reveal surprising behavior. When I generate data this way, I occasionally stumble upon an edge case where my code does not behave as expected, but which I wouldn't have caught if I were only using fixtures. For example, in the case of '#most_expensive', if I forgot to handle the special case where multiple items share the most expensive price, my test would occasionally fail at the first 'should.' Seeing the non-deterministic failures in AutoSpec would clue me in that something was wrong. If I were only using fixtures, it might take much longer to discover such a bug.
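The tied-price edge case can be seen in plain Ruby without Rails at all. Here FakeItem is a stand-in Struct, not the real Item model; the point is that max_by picks one of the tied items while a descending sort is free to order them differently, so an identity check like expensive.should be(sorted[0]) can fail intermittently even though comparing prices is stable:

```ruby
# Stand-in for the Item model, for illustration only.
FakeItem = Struct.new(:name, :price)

items = [FakeItem.new('a', 70), FakeItem.new('b', 70), FakeItem.new('c', 30)]

# max_by returns the first maximum it encounters ('a' here)...
most_expensive = items.max_by(&:price)

# ...but Array#sort is not guaranteed stable, so sorted.first could be
# either 'a' or 'b'. An object-identity expectation on sorted[0] flakes.
sorted = items.sort { |x, y| y.price <=> x.price }

# Comparing prices instead of identities holds even with ties:
most_expensive.price == sorted.first.price # => true
```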
My tests also do a slightly better job of demonstrating in code what the expected behavior is. My test makes it clear that sorted is an array of items sorted in descending order by price. Since I expect '#most_expensive' to be equal to the first element of that array, it's even more obvious what the expected behavior of most_expensive is.
So, is this a bad practice? Is my fear of fixtures an irrational one? Is writing a generate_random method for each Model too much work? Or does this work?