Bear with me on this one.
Consider a developer's iterative cycle:
- Make Changes
- Run
- Test
So my question: is driving the elapsed time of the "Run" phase toward zero always the most efficient choice? Or, at some point, does a shorter "Run" time result in less efficient iterations, and less efficiency overall?
In theory, a shorter "Run" time strictly dominates a longer one: a developer could always "pretend" the "Run" time is longer by simply waiting. So this really boils down to a question of human nature and developer psychology.
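To make the naive framing concrete, here is a toy throughput model (a sketch only; the phase durations are made-up assumptions, not measurements) in which shrinking the "Run" time can only increase the number of cycles per hour. The question is whether real developer behavior actually follows this curve:

```python
# Toy model: iterations per hour with fixed "Make Changes" and "Test" durations.
# Purely illustrative; the 5-minute change and 2-minute test figures are invented.

def iterations_per_hour(change_min: float, run_min: float, test_min: float) -> float:
    """Number of complete Make Changes -> Run -> Test cycles that fit in one hour."""
    cycle = change_min + run_min + test_min
    return 60.0 / cycle

for run_min in (10.0, 1.0, 0.1, 0.0):
    rate = iterations_per_hour(5.0, run_min, 2.0)
    print(f"run={run_min:>4} min -> {rate:.1f} cycles/hour")
```

Under this naive model, throughput rises monotonically as the "Run" time shrinks; any counter-effect would have to come from how developers change their behavior, not from the arithmetic.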
By analogy, there's some evidence that anti-lock braking systems (ABS) on cars don't change the overall accident rate: drivers compensate for the increased safety of ABS by driving more aggressively. Is there a similar effect at play here?