[There are several essays worth reading on this. Stack Overflow won't let me post more than one link, so I've aggregated them in a blog post, linked at the end of this answer.]
First, a quick note on terms. I tend to use James Bach’s definition of testing as “questioning a product in order to evaluate it”. All testing relies on *mental* models of the application under test. The term *model-based testing*, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transitions between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
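To make that concrete, here is a minimal sketch of the idea in Python. The state names, actions, and assertions are entirely invented for illustration (a hypothetical login flow), and `apply_action` is a stand-in for whatever automation would actually drive the application; it is not anyone's real framework, just the shape of a model plus a semi-random walk over it.

    import random

    # Hypothetical state model: for each state, map each action to
    # (next state, assertion about the application after the transition).
    MODEL = {
        "logged_out": {
            "log_in_ok":  ("logged_in",  lambda app: app["session"] is not None),
            "log_in_bad": ("logged_out", lambda app: app["session"] is None),
        },
        "logged_in": {
            "log_out":   ("logged_out", lambda app: app["session"] is None),
            "view_page": ("logged_in",  lambda app: app["session"] is not None),
        },
    }

    def apply_action(app, action):
        """Stand-in for driving the real application under test."""
        if action == "log_in_ok":
            app["session"] = "token"
        elif action in ("log_in_bad", "log_out"):
            app["session"] = None
        return app

    def random_walk(steps=1000, seed=0):
        rng = random.Random(seed)
        app = {"session": None}      # fake application state
        state = "logged_out"
        failures = []
        for i in range(steps):
            action = rng.choice(list(MODEL[state]))
            next_state, check = MODEL[state][action]
            app = apply_action(app, action)
            if not check(app):       # assertion about this transition failed
                failures.append((i, state, action))
            state = next_state
        return failures

    if __name__ == "__main__":
        print(random_walk())         # log potentially interesting results

In a real harness the walk would drive a browser or API instead of a dictionary, and the logging would need to capture enough context to make failures reproducible, which is where much of the cost discussed below comes from.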
There are real costs here: building a useful model, creating algorithms for exploring it, building logging systems that allow one to weed through the output for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with what questions you want to answer. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique.
All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes we have important questions about the application under test that are best explored by automated, high-volume, semi-randomized tests. Harry Robinson (one of the leading theorists and proponents of model-based testing) describes one very colorful example where he discovered many interesting bugs in Google driving directions using a model-based test (written with Ruby’s Watir library). [1]
Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays.[2]
Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing.[3]
Finally, a few cautions: to make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of model-based testing. This blog post of Bach’s links to his hour-long talk (and associated slides).[4]
I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the Pesticide Paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. If you remember its limits, and start with questions MBT addresses well, it has the potential to be a very powerful testing strategy.
Links to all essays mentioned above can be found here: http://testingjeff.wordpress.com/2009/06/03/question-about-model-based-testing/