Whilst answering “Dealing with awful estimates” posted by Ash I shared a few tips that I learned and personally use to spot weak estimates. But I am certain there must be many more!

What heuristics can one use to make a quick evaluation of a software project estimate that has been compiled by a third party (a colleague, a business partner or an external company)?

What are the obvious and not so obvious signs of weak software estimates that can be spotted without much detailed knowledge of the task at hand?

+3  A: 

One good heuristic is to see whether test time is roughly the same as development time. If it is, that's a good sign for the estimate.

If they can't give you a breakdown of the estimate, that's a bad sign. It's usually a sign of lots of little things that may have been forgotten. They don't need to provide the complete original breakdown, just a breakdown like:

  • requirements
  • development
  • testing
  • packaging and deployment
  • etc.

They should be using a standard template to calculate their estimate. They don't need a number in every column, but they do need the template to list all possible tasks. That way the template can be used to jog people's minds when working on the estimate.

If the estimate is overly precise, e.g. 0.25 hour increments, then that, for me, is a bad smell.

Are things missing, like requirements capture, testing, deployment, and handover to any Ops group? If any of those are missing, that's the sort of thing that will come back and bite you.

Edit: One other thing to watch for is the old "perpetually 90% complete" tasks. You get progress update after progress update listing a task as "90% complete". That's not good!

HTH

cheers

Rob Wells
If they are roughly equal, is it a good thing or bad? Does testing include bug fixing or not? :-)
Totophil
I disagree. Testing time and dev time can be (and often are) vastly different.
cletus
@cletus, I agree under certain conditions. But generally, for a greenfield project, I think they should be approximately equal.
Rob Wells
@cletus, they often are forced/pushed to be vastly different - that said, teams can be organized in lots of different ways, which highly affects what is done by each role
eglasius
+1  A: 

Estimates of the form 3, 6, or 12 months (basically any round numbers) reek of guessing. Usually when you guess you pick some round number bigger than you think it will take; quarters, half a year, and so on are the usual suspects. I much prefer estimates in terms of actual development iterations (whatever their size).

tvanfosson
Picking a number bigger than you think it will take will only improve the estimate, as most people underestimate. If the estimate comes out right at a business goal, that's a bad sign.
David Thornley
I wasn't thinking of the case where someone intentionally adds time to an estimate to compensate for the unknown. I'm thinking more of the case where little thought has been given to the problem at all and you just pick a round number.
tvanfosson
Or it may be a case where there isn't enough information, in which case the best thing to do is to guess high with a round number. If the schedule is not marked as tentative, then I agree, round numbers are danger signs.
David Thornley
+10  A: 

There are two types of estimates: task estimates and project estimates. You can view these as the big and small pictures.

Project estimates are necessarily high level (granularity no smaller than days typically) and must include things like:

  • High level architecture;
  • Time for testing;
  • Ramp up times;
  • Defect processes;
  • Time for documentation;
  • Relevant training;
  • Assumptions;
  • Dependencies (eg team A can't do what they need to until team B delivers phase 1);
  • Critical path (which pieces are likely to determine if the project slips and by how much); and
  • Risks.

The more of those things that are missing, the more unrealistic (or risky) the estimates.

The second kind is the task estimate, which is typically much lower level. This kind of estimate should simply be a task breakdown (with no task being larger than, say, 5 days).

These don't tend to address the above items but some of them might be relevant, such as assumptions regarding decisions not made yet (eg production hardware). It may also be worth identifying who can and can't do the tasks due to relevant experience, background knowledge or skills (as that person or those persons may end up overcommitted).

Other posts have mentioned that testing time should equal or exceed dev time. I strongly disagree with this. I've seen 8-hour dev tasks result in 100+ hours of test time, and 80-hour dev tasks result in less than 2 hours of testing. In both cases the testing time was entirely reasonable. There is no absolute correlation between the two. At best, there is a loose connection.

cletus
Even smaller, I'd say. In a very large project this might be impractical, but task estimates in the range of a couple hours are very reliable in my experience.
Hanno Fietz
The problem with detailed estimates is the cost of doing them; if you are not careful, you spend as much time estimating the 3 or 4 projects you don't do as programming the 1 that gets the go-ahead.
Ian Ringrose
There's a tradeoff between speed and accuracy. Pick which matters to you.
cletus
+21  A: 
  • A single person having done the estimates, rather than having used consensus based estimation (to fully understand the implied scope of requirements) such as Wideband Delphi.
    • Especially true if the person doing the estimation is not the person doing the implementation! I once worked on a project estimated by someone else as 60 days before any requirements had even been given. Let's just say I was not a happy bunny.
  • No time for documentation.
  • No time for ramp-up (in terms of learning, and team size).
  • No list of risks, and their impact to the timescale.
  • No buffer for the unexpected - in terms of late breaking requirements, and risks.
toolkit
+1  A: 

What are the obvious and not so obvious signs of weak software estimates that can be spotted without much detailed knowledge of task at hand?

Estimates which are given without much detailed knowledge of the task at hand are generally not good.

Perhaps a general approach you could take is to check that items in the requirements correspond to those in the estimate. If you want to be very quick, check the relative sizes: if a 100-word estimate is given for a 100,000-word brief, it stands no chance of being right.

Also (as others have said) check that analysis, coding, debugging, testing, integration, contingency etc are mentioned. It shows some thought has gone into it.

Having success and sign-off criteria at various stages is a great sign. If they have a defined point which is 10% done, then at least if the estimate is wrong you know early and have a chance to adapt. If there are no checkpoints until “finish”, you may not know that you are behind until that date is hit.

Jeremy French
+2  A: 
  • Is the compiler of the estimate available and willing to discuss it with other senior project members? If not, that is often a concern.

  • Was the estimate sent to the customer before the experience and skills of the development staff were known? Two-point estimates may help, but only to some extent.

  • Before even getting a chance to look at the estimate, you are told that you are committed to delivering all of the functionality described by a specific date.

(Thanks for responding to my question, by the way.)

Ash
A: 

One other helpful way to evaluate an estimate is to compare it with the actual effort that was spent on previous projects of a similar kind. The best data for the comparison is effort data from previous projects the organization has done. If there is no organizational historical data, you can try to benchmark the estimate against industry-wide benchmarks.

I would also say that if the estimate is presented as a single absolute number (say 180 days), it is not a good sign. A single absolute number implies that the task will be finished with 100% probability on the given date. An estimate presented as a range (say 130 to 180 days) indicates the span within which the task could realistically be completed.
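One common way to turn optimistic/likely/pessimistic guesses into such a range is the three-point (PERT) formula. This is a generic sketch of that technique with invented inputs, not something prescribed by the answer itself:

```python
# Three-point (PERT) estimate: combine optimistic / most likely / pessimistic
# guesses into an expected value plus a spread, instead of one absolute number.

def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # rough measure of uncertainty
    return expected, std_dev

expected, sd = pert_estimate(130, 150, 180)  # illustrative inputs, in days
# Report a range rather than a single date:
print(f"{expected - sd:.0f} to {expected + sd:.0f} days (expected ~{expected:.0f})")
# prints: 143 to 160 days (expected ~152)
```

An estimator who can supply the three inputs has at least thought about the best and worst cases, which is itself a good sign.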

Much of what I have written above I attribute to the book:

Software Estimation: Demystifying the Black Art by Steve McConnell

sateesh
A: 

I check the estimates against the man-power. Although not a very accurate heuristic, if some massive piece of work has just one or two devs assigned to it, it's clear the task was not estimated correctly.

Robert Gould
A: 

A good estimate will have a good breakdown, involving all phases of the project.

It will almost certainly not finish at a convenient date for the business.

It will include risks of various sorts.

It will be presented in terms of confidence intervals, either implicitly (10-12 months) or by using large units (about four quarters).

It will be made by somebody with responsibility for getting the project done, preferably more than one such person.

If there are delays at the start, there will be delays at the end (expressed as 10-12 months from start, or about 1Q2010 if we start now, not something like January 2010 when the project hasn't started yet).

Assumptions and dependencies will be clearly and prominently listed.

Edit: Part of this depends on the stage the project is in. An early but precise estimate is a warning sign, particularly if there is no confidence interval assigned. That reeks of a Procrustean estimate.

Also, watch for other development methodologies. A timeboxed project can have a rigid and arbitrary schedule, but the feature set will be flexible.

David Thornley
+17  A: 

No one has said it, so I will. The obvious answer is that if you have software schedule estimates then that is a sure sign of unrealistic figures. Yes, there are many methods for estimating software but none of them are accurate in any way, shape or form. What usually happens is that deadlines are set. If the task is over-estimated then extra time is spent making the result better. If the task is under-estimated then something is sacrificed to meet the delivery (like testing and features).

I know this answer isn’t what people want to believe, but estimating is always a guess. More often than not, a developer can’t even predict how much they will accomplish by the end of the day. You are expecting them to guess things months or years down the road about something where they aren’t even sure what is really involved yet.

The only practical answer to your question that isn’t prone to giving unrealistic results would be using a worksheet that comes up with guesses based on previous history at your company. Unfortunately, that will not account for tasks the estimator missed. At least this may give ballpark numbers.
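A hypothetical sketch of such a history-based worksheet: scale a raw estimate by the average overrun factor (actual effort divided by estimated effort) observed on past projects. All project data below is invented for illustration:

```python
# Calibrate a new estimate against historical estimate-vs-actual data.
# The history entries here are made-up examples.

history = [
    {"estimated": 60, "actual": 95},
    {"estimated": 30, "actual": 40},
    {"estimated": 100, "actual": 160},
]

# Average overrun factor across past projects.
overrun = sum(p["actual"] / p["estimated"] for p in history) / len(history)

def calibrated(raw_estimate_days):
    """Scale a raw estimate by the historical overrun factor."""
    return raw_estimate_days * overrun

print(f"overrun factor: {overrun:.2f}")
print(f"raw 45 days -> calibrated {calibrated(45):.0f} days")
```

As the answer notes, this still won't surface tasks the estimator forgot entirely; it only corrects for systematic optimism on the tasks that were listed.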

Unless you develop knock offs of the same exact system over and over again, then anyone who thinks they have figured this out is fooling themselves. There are way too many variables involved.

Dunk
+5  A: 
  • Is the estimate what the management wanted to be told?
  • Does the estimate nicely fit in with the planned shipment date for the next release?
  • Does the management reward people who give good news more than people who give bad news?
  • Was the estimate prepared before knowing who would be working on the project?
  • Did someone who wanted that bit of functionality implemented prepare the estimate?
  • Is there a history of software being late?
  • Is it normal for developers to be moved onto other tasks partway through a project?
  • Have some or all developers given up on commenting on bad estimates as a waste of time?

Count up the number of questions to which you get “yes” or “maybe” answers.

If you get mostly “no” answers to the above questions, then it may be worth looking at the estimate in detail to see if it includes the tasks that other people have listed in this thread.

Ian Ringrose
+2  A: 

If you see one or more of these, you may have a bad estimate:

  • Single point estimates: an estimate should be associated with a range of possible dates or a confidence value
  • Insufficient granularity of tasks: a large task bucket usually indicates that the functionality is not well understood (which is especially a problem since poorly understood problems are usually under-estimated)
  • No expression of assumptions and/or risks
  • Inadequate effort allocated for commonly skipped or underestimated items (e.g. build scripts, documentation, deployment, etc.)

I agree with sateesh, I really like Software Estimation: Demystifying the Black Art by Steve McConnell. He has several checklists which are useful when reviewing and/or preparing estimates.

Peter Tate
A: 

Any of the following:

  • It is one big project and there isn't a short high level strategy described
  • There isn't a clear, short and concise vision of what is to be achieved with the project
  • The project isn't structured around business value being delivered gradually
  • The team is trying to give "accurate" estimates for a big project, going into (or having gone through) a long analysis phase (changes will come, and will usually affect those estimates in ways that can't be easily quantified without yet more big effort)
  • There are "detailed" estimates for the whole project (related to previous)
  • There aren't detailed estimates for the first phase, or there is something wrong with those.
eglasius
+5  A: 

Wow... I really like toolkit's answer.

And I agree that any estimate at all is flawed, because it assumes that the estimator has way more of a clue for how to solve the problem than any estimator actually does when a project gets estimated. However, I think you still need to at least estimate the size of the mountain before you start. Some thought as to whether it's worth trying to do it should precede any endeavor and that's what the essence of an estimate should be.

I did come up with a few more indicators of a dangerous estimate:

  • No cross-reference - If the estimate can't be validated at least two different ways, it's likely to be unreliable. For example, the last estimates I've done I've been able to break down the work into small feature chunks, and consider how long it's taken our team to do similarly scoped features. Then I was able to look at the sum of these costs and see how big the scope was relative to other projects I've worked on. That's two ways to validate.
  • The background of the estimator - if this is a software estimate done by a hardware guy who's never written code - be very afraid. More subtle - the closer the estimator's background is to the technology and problem domain of the project, the better.
  • Detail - as said a few different ways in a few different posts - I like to see detail for both individual features, as well as the tasks needed to complete each feature. Most estimates I see don't show the detail in the formal presentation, but if you ask the person who did the estimate, they should have a file somewhere. Hopefully it's not the back of a paper napkin stained with beer and ketchup. :)
  • Documented Assumptions - any estimator will have had to make some set of assumptions about the task. These should be documented somewhere, preferably in the formal paperwork. I always get a little worried when I see a short proposal with not many documented assumptions. Either they were thought through and not communicated to the customer, or they were not thought through. I'm not honestly sure which one is worse. It goes without saying that the assumptions should also be reasonable.
  • Balanced Lifecycle - However the task is broken down, what's the ratio of design, code and test? How about documentation, integration with external systems and post release support? How about those extra things that are so vital (system admins, CM gurus, management effort)?
  • Slack - I'm sure the corporate daemons of cheapness will come and flay me, but a schedule and a cost should have some slack. If the estimate looks ambitious and aggressive to an experienced eye, it is likely to be too low. Estimates are almost never too high.
bethlakshmi
+1  A: 

How familiar is the person giving the estimate with the people doing the work?

I have often seen estimates that assume a generic person is doing the work, even though the team is made up of individuals with very different backgrounds. Most likely the tasks and the specialties don't line up perfectly, and you get a C++ server-side programmer who ends up doing either your GUI or your database. Sometimes the manager of the team doesn't really appreciate the team members' strengths, so if he has been asked to come up with the estimate on his own because his team is busy on the previous project, you may find that the work in question is really only suitable for part of the team (not motivating, lack of skills, etc.)

Oskar
+2  A: 

I totally agree with Dunk: the first sign of bad estimates is the mere presence of a large, detailed upfront schedule. Estimates are exactly that, an approximation; otherwise we would call them exactimates. So they should never be used alone in the management of a project.

The most important point to consider is not the accuracy of estimates but the consistency. If a third party were doing estimates for you, then ask to see a history of their successes or failures, speak with their past clients and determine their reliability.

That all being said, from an Agile standpoint, some of the ways we attempt to get more consistent estimates during a project are:

  • Use a relative sizing standard (S, M, L, XL) rather than a time-based one (ideal days)
  • Focus on complexity, not time
  • Always use group estimates, not single-person estimates
  • Gather estimates frequently throughout the project, generally at the start of each sprint
  • Use feedback from previous sprints in determining story complexity
  • Track velocity to give meaning to the relative sizing
  • Hold frequent and early story retrospectives to examine/understand any thrashing
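As a rough sketch of how relative sizing and velocity fit together (the size-to-points mapping and the velocity figures below are assumptions for illustration, not prescriptions):

```python
import math

# Project remaining sprints from relative story sizes and observed velocity.
# The point values per size bucket are an illustrative convention.
SIZE_POINTS = {"S": 1, "M": 3, "L": 5, "XL": 8}

backlog = ["M", "L", "S", "XL", "M", "L"]  # remaining stories, by relative size
recent_velocities = [7, 9, 8]              # points completed in recent sprints

remaining_points = sum(SIZE_POINTS[size] for size in backlog)
velocity = sum(recent_velocities) / len(recent_velocities)

# Velocity converts abstract points into a calendar-ish projection.
sprints_left = math.ceil(remaining_points / velocity)
print(f"{remaining_points} points at ~{velocity:.0f}/sprint -> ~{sprints_left} sprints")
# prints: 25 points at ~8/sprint -> ~4 sprints
```

The projection stays honest because velocity is measured, not promised: as sprint feedback accumulates, the same arithmetic automatically reflects the team's real pace.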

If you are dealing with companies that use these estimation methods, then chances are you are going to receive consistent and therefore better results.

Xian
A: 

"Four to six weeks", when not accompanied with a breakdown into shorter tasks...

MaxVT