A new project we started introduced a lot of technologies we weren't familiar with, and an architecture we don't have much practice with. In other words, the interfaces and interactions between the service classes and so on of what we're building are fairly volatile, even more so because of internal and customer feedback. Though I've always been frustrated by the ever-moving specification, I see this to some degree as a necessary part of building something we've never built before - if we just stuck to the original design and scope, the end product would probably be a whole lot less innovative and useful than it's becoming.
I also introduced test-driven development (TDD), as the benefits are well-documented and conceptually I loved the idea. Two more new things to learn - NUnit and mocking - but seeing all those green circles made it all worthwhile.
Over time, however, those constant design changes seemed to mean I was spending a whole lot more time changing my tests than writing the code itself. For this reason alone, I've gone back to the old way of testing - that is, manual rather than automated.
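To illustrate the sort of churn I mean, here's a simplified, hypothetical sketch (the names are made up, and I'm assuming Moq as the mocking library): every time a collaborator interface gains or changes a parameter, each mock setup written against the old signature has to be rewritten, even though the behaviour under test hasn't changed.

    // Hypothetical example: a collaborator interface that keeps changing
    // as the design evolves.
    using Moq;
    using NUnit.Framework;

    public interface IPriceProvider
    {
        // Originally: decimal GetPrice(int productId);
        // After a design change it became:
        decimal GetPrice(int productId, string currency);
    }

    public class OrderService
    {
        private readonly IPriceProvider _prices;
        public OrderService(IPriceProvider prices) => _prices = prices;

        public decimal Total(int productId, int quantity, string currency) =>
            _prices.GetPrice(productId, currency) * quantity;
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Total_MultipliesUnitPriceByQuantity()
        {
            var prices = new Mock<IPriceProvider>();
            // This setup (and every one like it) had to be rewritten when
            // the signature gained a currency parameter - the test breaks
            // at compile time even though the logic under test is the same.
            prices.Setup(p => p.GetPrice(42, "GBP")).Returns(9.99m);

            var service = new OrderService(prices.Object);

            Assert.That(service.Total(42, 3, "GBP"), Is.EqualTo(29.97m));
        }
    }

Multiply that by dozens of tests per interface and a handful of interface changes per week, and the maintenance cost adds up quickly.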
While I have no doubt that the application would be far more robust with hundreds of excellent unit tests, I've found the trade-off against time to launch mostly unacceptable. My question, then - have any of you also found that TDD can be a hassle when you're prototyping or building a beta version? Does TDD fit much more naturally with projects where the specification is more fixed, or where the developers have more experience with the language and technologies? Or have I done something fundamentally wrong?
Note that I'm not trying to criticise TDD here - it's just that I'm not sure it's the best fit for every situation.