views:

338

answers:

5

Estimating how long any given task will take seems to be one of the hardest parts of software development. At my current shop we estimate tasks in hours at the start of an iteration, but once the task is complete we do not use it to aid us in future estimations.

How do you use the information you gather from past estimations to refine future ones?

+3  A: 

If an estimate blew out, attempt to identify whether it was just random (the environment broke, a one-off tricky bug, etc.) or whether there was something you didn't identify.

If an estimate was way too large, identify what it was that you thought was going to take so long and work out why it didn't.

Doing that enough will hopefully help developers improve their estimates.

For example, if a dev thinks that writing tests for a controller is going to take ages and it ends up taking less time than he imagined, you can keep that in mind the next time you estimate a controller of similar scope.

SCdF
+2  A: 

I estimate with my teammates iteratively until we reach consensus. Sure, we make mistakes, but we don't calculate the "velocity" factor explicitly; rather, we use the gathered experience in our new estimation debates.

Marko Dumic
+2  A: 

I've found that estimating time will only get you so far. Interruptions from other tasks, unforeseen circumstances, or project influences will inevitably change your time frames, and if you were to constantly re-assess you would waste much time managing when you could be developing.

So for us here, we give an initial time estimate for the solution based on experience (we do not use a model; I've not found one that works well enough in our environment), but we do not judge our KPIs against it, nor do we assure the business that this deadline WILL be hit. Our development approach here is largely reactive, and it seems to fill the business' requirements of us very well.

Odd
+9  A: 

By far one of the most interesting approaches I've ever seen for scheduling realistically is Evidence Based Scheduling, which is part of the Fog Creek FogBugz 6.0 release. See Joel's blog post linked above for a synopsis and some examples.
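
The core idea is to keep each developer's history of estimate-versus-actual ratios ("velocities") and run Monte Carlo simulations over them, turning a single point estimate into a probability curve for the ship date. Here is a minimal sketch of that idea in C# (not FogBugz's actual implementation; the velocities and estimates below are made-up numbers):

    using System;
    using System.Collections.Generic;

    class EvidenceBasedSchedulingSketch
    {
        static void Main()
        {
            // Historical velocities: estimated hours / actual hours for past tasks.
            double[] velocities = { 1.0, 0.8, 0.5, 1.2, 0.7, 0.9 };

            // Estimates (in hours) for the remaining tasks in the iteration.
            double[] estimates = { 4, 8, 2, 16 };

            var rng = new Random();
            const int rounds = 1000;
            var totals = new List<double>(rounds);

            for (int i = 0; i < rounds; i++)
            {
                double total = 0;
                foreach (double estimate in estimates)
                {
                    // Scale each estimate by a velocity sampled from history.
                    total += estimate / velocities[rng.Next(velocities.Length)];
                }
                totals.Add(total);
            }

            totals.Sort();
            Console.WriteLine("50% confidence: {0:F1} hours", totals[rounds / 2]);
            Console.WriteLine("95% confidence: {0:F1} hours", totals[(int)(rounds * 0.95)]);
        }
    }

The more history a developer accumulates, the tighter that curve gets, which is how past estimates end up refining future ones.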

Jay
+2  A: 

When estimates are off, there is almost always a blatant cause, which leads to a lesson learned. Recent ones from memory:

  • the user interface assumed .NET functionality that did not exist (the ability to insert a new row and edit it inline in a GridView); lesson learned is to verify the functionality of chosen classes before committing to an estimate. This mistake cost a week.

  • the ftp process assumed that FtpWebRequest could talk to a bank's secure ftp server; it turned out that there's a known bug with this class if the ftp server returns anything other than a backslash for the current directory; lesson learned is to google for 'bug' and 'problem' along with the class name, not just 'tutorial' and 'example', to make sure there are no 'gotchas' lurking. This mistake cost three days.

These lessons go into a Project Estimation and Development "checklist" document, so they won't be forgotten on the next project. A minimal example of the kind of up-front verification the first lesson calls for is sketched below.
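
For instance, a throwaway spike that points FtpWebRequest at the real target server would have surfaced the directory-parsing gotcha on day one rather than mid-iteration. A rough sketch (not the actual project code; the host and credentials are placeholders, and the error handling is deliberately crude because a spike only needs to answer "does this work at all?"):

    using System;
    using System.IO;
    using System.Net;

    class FtpSpike
    {
        static void Main()
        {
            // Point the class you intend to use at the actual target server.
            var request = (FtpWebRequest)WebRequest.Create("ftp://bank.example.com/");
            request.Method = WebRequestMethods.Ftp.ListDirectory;
            request.Credentials = new NetworkCredential("user", "password");
            request.EnableSsl = true; // the bank requires secure ftp

            try
            {
                using (var response = (FtpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
            catch (WebException ex)
            {
                // A failure here is a risk to raise before the estimate, not after it.
                Console.WriteLine("Spike failed: " + ex.Message);
            }
        }
    }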

Steven A. Lowe
GridViews are evil!
DaveDev