views:

628

answers:

23

I'm working on a project that took quite a long time to build, and now it's near completion. What are the specific dangers that might threat a project in its completion stages? What are the most overlooked aspects that I must be concerned about near the completion of a project?

+23  A: 

The danger that the customer looks at it and says it's just not what he has wanted.

Developer Art
or *her* has wanted.
Robert W
or it has wanted.
01
or just "was wanted" ;)
FractalizeR
or *we* wanted ;-)
Chris
+1  A: 

IT ignorant Management

astander
+4  A: 

On the development side, perhaps you built the project in your "development sandbox", and when it goes to production it may experience conditions that you didn't account for or plan for. For example, application performance may be speedy when you (the developer) are the only one using it, but when the data load and user load increase, and pressure is put on the system in ways you didn't expect, things can go wrong. I've seen deadlocks in the database that only happen in production, and are hard to reproduce.
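Deadlocks that only appear under concurrent production load are a good argument for building in a retry strategy up front. A minimal sketch (the transaction callable and the `DeadlockError` exception are hypothetical stand-ins for whatever your database driver actually raises):

```python
import random
import time


class DeadlockError(Exception):
    """Stand-in for a driver-specific deadlock/serialization error."""


def with_deadlock_retry(transaction, attempts=3, base_delay=0.05):
    """Run `transaction` (a zero-argument callable), retrying on deadlock.

    Uses exponential backoff with jitter so competing transactions
    don't retry in lock-step and collide again.
    """
    for attempt in range(attempts):
        try:
            return transaction()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

The jitter matters: if both deadlocked transactions retry after exactly the same delay, they can deadlock again indefinitely.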

On a different note, if the project took a long time to complete, have the requirements/expectations changed since the start of the project? (Sometimes this isn't communicated to the people who need to know.) Are there expectations that were never communicated, or aren't in writing, but are nevertheless expected?

Also, if you have a QA team, they can help stop the introduction of new bugs near the end of the project as well...

Jon Schoning
+8  A: 

The most disastrous things that I've seen:

  1. Inadequate change management for the people who will actually use the system. Some systems are just dumped on people, who are understandably resistant. Remember, software has a big effect on people's lives.
  2. Missing features that prevent people from doing their day-to-day jobs. This will stop even the best software in its tracks. Start involving actual end users if you haven't done so already.
  3. Not enough budget to see the system through. Minor enhancements and other tweaks after go-live are unavoidable for any big software project.
  4. Load testing, especially for web applications. For desktop applications, actual usage often involves much bigger data sets than testing did. Try to get actual load profiles from real users.
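Point 4 can be checked cheaply before go-live with even a crude concurrent smoke test. A minimal sketch, assuming a hypothetical `handler` callable standing in for your real request entry point:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def measure_under_load(handler, users=20, requests_per_user=10):
    """Call `handler` from many threads at once and report latency stats."""
    def one_user(_):
        timings = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handler()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=users) as pool:
        all_timings = [t for ts in pool.map(one_user, range(users)) for t in ts]

    return {
        "requests": len(all_timings),
        "mean_s": statistics.mean(all_timings),
        # quantiles with n=20 gives 19 cut points; the last is the 95th percentile
        "p95_s": statistics.quantiles(all_timings, n=20)[-1],
    }
```

Watching the p95 (not just the mean) is the point: averages stay flat long after the slowest users have started to suffer.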
leonm
+1 for #1. At one project I was involved in, one person was designated as "Change Champion".
David Thornley
+2  A: 

All the little things you noticed during development, where you always thought "I have to look into this when there is time" and never did.

Most of them are forgotten near project completion but some of them will come back...

Wolfgang
That's what issue tracking is for. If the process is so bad there's no issue tracking, any project is probably doomed.
David Thornley
A: 

Crackdown of actual customer! (it happened...)

ZZambia
wth is a crackdown?
Matt Joiner
+11  A: 

A big danger is a delayed release due to final QA cycles. Often products get tested more thoroughly at the end of the development cycle than they did during development, so the number of discovered bugs can shoot up at the end of the project. Reasons for this include:

  • Focus shifts from development to release, so more effort is transferred to testing.
  • You finally have a complete product to test. Some things (especially user experience) just can't be tested very well until you have the final product to try out.
  • Beta testing usually involves more end users than during development, and throws a lot of "new" users at the program (people who have not built up a set of habitual workarounds from earlier versions of the program, and who will discover many obvious issues that sit in your seasoned testers' "blind spot").

The obvious problem with this is that you can suddenly have a lot of work to do to clean up the newly discovered bugs. And to clean up the bugs introduced by the last minute bug fixes.

A trap for the unwary is trying to fix every bug (diminishing returns). Care has to be taken not to compromise the product or the ship date by attempting to refactor or fix minor bugs - many bugs don't need to be fixed for an acceptable release, and fixing bugs is time consuming and adds risk and extra testing cycles. Know when to stop!

Jason Williams
+1 for cleaning up bugs introduced by bug fixes. A minor bug fix can produce an even worse bug due to unintended consequences. If testing doesn't catch the new bug immediately, you can be in a world of hurt.
Mark Ransom
+2  A: 

The real issue I've experienced is defining what "completed" means for the project. Without that definition, the project just drags on, with people fixing bugs that don't really matter and adding to the risk of breaking some of your core functionality.

You've probably covered most of the coding aspects and you (or the team) know what the dangers are there.

Pull together all of those open issues so that you can say "when these are closed, the project is closed". That way you can make it clear to the team what's expected and when they are really finished, so that they can move on to something new.

Klelky
+2  A: 

The development team all resigning at once. Why? They're fed up with the job. The development team all walking into the path of that proverbial bus. Why? They worked too many hours and just didn't see it coming.

Sam
Sometimes they all resign near the end of a large project because they know it is going to fail (or at least their part is going to fail). We had a guy do this to us once and it was very painful to recover from.
HLGEM
Or they do see it coming, and don't like what they see. This typically happens when the project isn't going to meet expectations (almost always from bad planning or management; big projects usually don't fail due to bad developers) and expect to take the blame.
David Thornley
+2  A: 

Do not let any non-critical features or enhancements put the core functionality at risk. A bad experience with a marginal feature can destroy your credibility with users/customers, and you probably won't get a second chance with them. Keep it simple to the extent you possibly can. You can always roll out enhancements progressively after reliable core features have built trust and dependency. Late change requests are especially deadly; turn them down and negotiate to deliver them after the first release is trusted and stable.

Kilo
A: 

It also depends a lot on what you mean by "near completion". If you've just finished implementing the latest requirements, you'll still need months to ensure the quality of a large-scale project...

Have a look at common anti-patterns (organizational and project-management); maybe you'll recognize something, though I hope not :-) Here is a condensed list I created from Wikipedia for my own use (a small overview). The details are linked into Wikipedia.

For financial/market risks I would differentiate the cases

  • You are the supplier: the requirements and payment are written down in a contract.
  • You are the company selling on the market: you probably analyzed the market, but it might have changed in the meantime.
jdehaan
+1  A: 

Coming under pressure to add new features and still hit a deadline. Then hitting the deadline, shipping the project, and having a manager say that the project is over, despite the fact that nothing is done about the technical debt that accrued in the final phase.

Hitting the final deadline != project complete.

Dean Johnston
+1  A: 

Integration Issues

When a large project is nearing completion many large and small pieces start coming together (or are deliberately avoided). Make sure to test end-to-end all of the pieces and systems that are involved.

Ryan Doherty
+6  A: 

If the project has been managed correctly all the way through (iterations, interim targets, user involvement, end-to-end testing including integration testing, etc.) then there are unlikely to be any serious scope or quality issues just prior to completion that cannot be rectified, given that you have all the right people on standby.

That's it: as long as you have all the right people on standby. The biggest risk with "nearly completed" projects, in my experience, is that they're perceived as low risk, "a done deal", by everyone involved. If you have managed well, the number of known issues decreases towards completion, and so does the development workload. People are assigned to look at other projects; upper management diverts its attention to "more urgent" and newer projects. The hand-over might be perceived as a bare formality and thus might drag on forever.

The real risks here are not having instant executive-level support when things go wrong, and losing people to other projects instead of having them on standby during the transition period in case something goes wrong.

The other side of the coin is not having another project in the pipeline for the project team to go to, and no clearly communicated plan for what will happen to them once the project is finished. As a result the team might unintentionally drag out completion to prolong the project and the established status quo (especially if the experience of development was a positive one).

The opposite might also be true: people might be eager to leave as soon as possible rather than see the project through the deployment phase, handing any inherent problems to the support team to take care of. In the absence of the original team, success will be very difficult to achieve.

And of course, there might be much more political opposition to the project being deployed, coming from the actual users, than was initially expected. Even if user group representatives have been happily accepting the functionality all the way through, they might change their opinion pretty much overnight (typically the night after deployment), saying vague things like "nothing works", "the software lacks fundamental functionality" and "we've got loads to do, haven't got around to trying the new functionality yet". But the actual reasons behind the resistance to change can be rooted in the way the new software forces them into greater efficiency, imposes more transparency and prevents them from doing all sorts of optimisations to make the system work in their favour.

It’s worth planning the delivery of any significantly new or different functionality during quiet periods when usage is going to be minimal.

Obviously, whenever the "big-bang" approach is chosen for go-live, the overall level of risk and the severity of possible impact increase by an order of magnitude. "Big-bang" greatly simplifies migration from any existing software, but it should still be used only when there is no other viable alternative. Going live gradually buys the team time to sort out issues as they appear, in relative isolation; with the "all-or-nothing" approach, if several things go up in flames at the same time (and they surely will), coordinating the fire-fighting and prioritising among the usually limited number of available specialists becomes a real test of a PM's abilities.

As far as transition or closure is concerned, the last stage of a software project needs to be watched as closely as any other phase in the project lifecycle. It has to have clear and realistic completion criteria, deadlines, and all the right specialists and management support, instead of a sole project manager left to frantically pinch a developer here and there to brush up the defects and talk everyone concerned into signing the final paperwork. Of course, not that this would ever happen in a real-world software company!

Totophil
+1. I've seen this over and over - calling a project "done" when it's not. Is everything tested and deployed? Do you have editors for any configurable portions? Is there documentation/training?
TrueWill
+5  A: 

Several threads here come down to sloppiness in project and product management. Here are some really hard problems, beyond failed tests and bugs:

Missing the Real World The analysis of the problem was incomplete, and the resulting design and implementation are unusable because they don't match the real world. This often happens when the customer dictates what the programmer has to do. The fault in the customer's "solution" is often recognized only after deployment: the customer discovers that his "solution" won't work, and tries to hand responsibility back to the developers. From a legal point of view it is fine to have a good specification sheet, but it is best to analyze the real world as much as necessary to avoid this. And don't accept the customer's "solution" without verifying it.

A special case of this is to ignore legal issues like privacy protection laws and so on. This can be a big and bad surprise if the first complaint is made.

Licensing Issues Sometimes bad surprises are revealed when the deployment team checks the licensing of the components used, late in the project. There are a lot of pitfalls: the development machine has versions installed with features the deployment version won't have (e.g. MS SQL Server vs. MS SQL Server Express); open-source licenses clash, or licenses are misinterpreted (e.g. LGPL code is compiled into proprietary code instead of being linked against as a library).

Deployment Issues Another ugly thing is that the customer's systems have some "hidden surprises" that come up during deployment (e.g. the installer replaces some DLLs with older/newer versions that clash with other apps on the system, and a lot of code depends on the original version and can't be changed).

Thomas Maierhofer
Another example of deployment issues: most developers have admin access on their machines, and most users don't (I'm assuming Windows here, as it won't generally be a problem on Unix-based systems like MacOSX or Linux). If it hasn't been tested on a user-class machine with a user-like account, it can be rolled out and suddenly fail to work. My wife had to get a special testing account because of this.
David Thornley
+4  A: 

End of Project Risks include:

  • missed components (make sure you have a check list of what was agreed to)
  • implementation/distribution
  • training the users
  • on-going support and how to provide updates
  • security and logging (log everything so you know who did what when, especially for a new product)
  • fall back - what if the system doesn't work, will you be able to fall back to the prior system easily
  • retaining key people - don't lose your key people too soon; there are bound to be fixes/changes

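The "log everything so you know who did what when" point is easy to bolt on late if the log entries are structured. A minimal sketch of a structured audit logger (the field names are illustrative, not a standard):

```python
import json
import logging

audit = logging.getLogger("audit")


def log_action(user, action, target, **details):
    """Record who did what, and to what; the logging framework adds when."""
    audit.info(json.dumps({
        "user": user,
        "action": action,
        "target": target,
        "details": details,
    }))
```

Emitting one JSON object per line means the audit trail stays machine-searchable when, months later, someone asks exactly who changed a record and when.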
Good Luck

meade
+1 for fall back.
TrueWill
+3  A: 

Data conversion. If you're importing non-trivial amounts of data from an existing system into the new system, I've noticed that this is very often (a) left until the end of the project, and (b) significantly more complex than it first appears.
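One way to surface that hidden complexity early is to run the conversion in a mode that collects every failure instead of aborting on the first bad row. A minimal sketch (the `convert` and `validate` callables are hypothetical placeholders for your real mapping logic):

```python
def convert_rows(rows, convert, validate):
    """Run a legacy-to-new-schema conversion, collecting failures.

    Returning the failures instead of raising on the first one makes
    the true scale of the data problem visible early, not at go-live.
    """
    converted, failures = [], []
    for i, row in enumerate(rows):
        try:
            new_row = convert(row)
            validate(new_row)
            converted.append(new_row)
        except Exception as exc:
            failures.append((i, row, str(exc)))
    return converted, failures
```

Running this against a fresh copy of production data, not just the dev sample, is what exposes the rows "that will fail to convert that were not on dev".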

Andy Johnson
+2  A: 

Last-minute scope change because something doesn't quite work as expected. The "Well, I also wanted..." conversation comes to mind; that falls under change management.

QA taking a long time as testing requires complex scenarios and long run times.

Scalability and maintenance could also crop up as issues once you release something into the wild. Are there going to be patches or upgrades?

Lastly, beware the illusion that a project is near completion. Unless it is almost out of budget and there is no chance of additional funds being put into it, some projects near completion just stay at near completion for a long time. This has been said in a few other answers and I just want to echo that as something to note.

JB King
+4  A: 

It works fine on dev but times out on prod due to increased workload (or different hardware specs) that was never fully tested. Sometimes even something as simple as a service pack difference causes things to fail. I've also seen people develop on the latest version of the database and then be surprised when everything didn't work on prod, because prod was an older version. No, SQL Server 2000 does not recognize code that was introduced in SQL Server 2008.

Don't forget that the database may have more than just the user interface writing to it. I've seen releases where no one told the DBAs about a major change, and all the data imports continued to go to the wrong database (or structure); or the imports failed because the field structure changed; or they were changed over in a hurry, with no time to test, because no one thought about the import or provided a file in the new structure until the day of the deployment. If there is a data warehouse, has anyone given thought or time to changing the export to it, so that it will get the new data?

Somehow managing to accidentally overwrite existing data (never put a major release to prod without a good database backup in place) or lose indexes. Always have a plan for how to revert the release if need be.
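The "backup first, then deploy" habit is worth automating into the release script itself so it can't be skipped under deadline pressure. A minimal sketch using SQLite's online backup API (for a server database you'd use its native tooling, e.g. `BACKUP DATABASE` on SQL Server; this only illustrates the pattern):

```python
import sqlite3


def snapshot_before_release(db_path, backup_path):
    """Take a consistent copy of a SQLite database prior to deploying.

    sqlite3.Connection.backup copies page-by-page and stays consistent
    even if the source database is being written to concurrently.
    """
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()
```

Verifying that the snapshot actually opens and contains the expected tables, before touching prod, is the other half of the habit: an unrestorable backup is no backup.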

Biggest single issue, I've ever seen though was nobody talked to the actual users (anyone who only gets requirements from managers should be shot in my humble opinion) in developing the requirements and on day one, you find out about a task that they must do that the new user interface doesn't have at all.

Make sure to plan to train users on the new system if it is very different from the old one. Be prepared for a lot of resistance to change when doing so. Be prepared that if you are not turning off the old system the day the new system goes live, you will have users who will refuse to use the new system at all.

If you are converting data to a new structure, there may be some data on prod that will fail to convert that was not on dev unless you keep the data in dev up-to-date. I would refresh dev with the current prod database, reload all the new dev objects and then do the final testing.

Warn your users of the day the change is coming, and ask them to tell you if there will be problems if the change is made on that date. You wouldn't want to change on the day the quarterly reports are run, for instance, or when finance is doing end-of-year closeouts, as they are too busy then to adjust to new software; and definitely don't do it on payday if you are talking about the payroll system. HR wouldn't want adjustments on the last day that W-2s must legally be sent, or on a day they are planning to process 1000 new hires.

HLGEM
+4  A: 

Fear of completion or fear of success.

People realize that it's about to go live and be in production, get scared, and prefer to work on it forever, finding excuses for why it can't be finished just yet. That way they are never responsible or accountable.

I once saw a guy quit his job rather than connect a newly built network to the ISP. He'd configured it all and it was ready to go but he didn't believe it would work. His replacement plugged in the last Cat-5 cable and it all came to life.

james
+2  A: 

Deploying to a server cluster in production but tested on single servers

RailRhoad
+3  A: 

My worst experience was at the end of my first year of computer science. (Yes, it's not a real world project, but I've learned something essential with it)

Everything was near completion, and I wanted to ship the project to my teacher. I just wanted to add "this last small feature, easy to implement"... Well, that feature crashed the program during the presentation, and it resulted in an epic failure.

Moral: beware of the "last small feature" added just before shipping; chances are it will not be tested enough...

Nicolas Dorier
For large projects, there will be feature freezes and (one hopes) a change board. If not, the project is badly run and will almost certainly fail anyway.
David Thornley
A: 

When you yourself think that the project is not complete; something is still missing. Put yourself in the users' shoes, and then you'll realize everything.

Ravi