views: 321
answers: 4

Just recently I came across an idea called the Application Strangler Pattern. As I understand it, it is a solution to the problem of large legacy systems. The idea is to build a new application around the old one, at much lower cost and risk than a complete rewrite. Slowly, over time, the new application takes over more and more of the work until it eventually strangles the old legacy application. In the meantime, developers get to work in a clean new system, with higher productivity and hopefully much better code.

Where I work we have reached the point where new functionality, even seemingly trivial things, takes a long time to develop and carries a high risk of breaking something. We sit on about a million lines of code, with unit test coverage of perhaps 1-2%. The system is an SOA system using web services (neither of which is really necessary) and is more procedural than object-oriented in style. It spans both web and Windows clients, all written in .NET languages.

Finally, the question: in considering this idea/pattern, I want to know whether anyone has experience with using it that they would like to share. For example, what would be a good way of implementing it (hooking into events from the old application, for example)? Any thoughts on why it would be a good or bad idea would be appreciated as well.
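
To make the question concrete, here is roughly the kind of routing layer I have in mind, as a minimal C# sketch; all names here (IOrderService, OrderServiceFacade and so on) are invented for illustration, not taken from our system:

    using System;

    public interface IOrderService
    {
        string GetOrderStatus(int orderId);
    }

    // Thin adapter over the existing code; in reality this would call the old
    // web service / procedural code rather than return a canned value.
    public class LegacyOrderService : IOrderService
    {
        public string GetOrderStatus(int orderId)
        {
            return "legacy:unknown";
        }
    }

    // The same feature re-implemented (and unit tested) in the new code base.
    public class NewOrderService : IOrderService
    {
        public string GetOrderStatus(int orderId)
        {
            return "new:shipped";
        }
    }

    // The "strangler facade": a routing rule (feature flag, customer segment,
    // id range, ...) decides which implementation handles each call, so the
    // new system can take over one slice of behaviour at a time.
    public class OrderServiceFacade : IOrderService
    {
        private readonly IOrderService _legacy = new LegacyOrderService();
        private readonly IOrderService _replacement = new NewOrderService();
        private readonly Func<int, bool> _routeToNew;

        public OrderServiceFacade(Func<int, bool> routeToNew)
        {
            _routeToNew = routeToNew;
        }

        public string GetOrderStatus(int orderId)
        {
            return _routeToNew(orderId)
                ? _replacement.GetOrderStatus(orderId)
                : _legacy.GetOrderStatus(orderId);
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            // Start by sending only a small slice (here: even order ids) to the new code.
            IOrderService service = new OrderServiceFacade(id => id % 2 == 0);
            Console.WriteLine(service.GetOrderStatus(41)); // legacy:unknown
            Console.WriteLine(service.GetOrderStatus(42)); // new:shipped
        }
    }

Is that roughly how people do it in practice, growing the routing rule until the legacy implementation can be deleted, or is hooking into the old application's events a better way in?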


+2  A: 

The big risk of this pattern is that you end up bodging both the old code AND the new code to get the behaviour you need, especially if the old code was never designed to be strangled (i.e. it does not present clean, consistent interfaces).

My experience with this has been that debugging becomes harder over time, because it's unclear whether a problem has arisen in the new code, in the old code, or in the interaction between the two.

I know Martin Fowler talks about writing code that is designed to be strangled, but in my opinion that is simply another way of saying that modular design is good, mmmkay; it's non-controversial and fairly obvious.
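
To illustrate what a clean, consistent interface buys you, here is a rough C# sketch (all class names are invented): if the old code only exposes something like LegacyCustomerApi below, a thin translation layer gives the new code a single clean seam, so the legacy conventions are decoded in one place instead of being bodged in both code bases.

    // Hypothetical names, for illustration only.
    public class LegacyCustomerApi
    {
        // Positional string array, magic status codes, null for "not found":
        // typical of code that was never designed to be strangled.
        public string[] FetchCust(string id, bool includeHistory)
        {
            return new[] { id, "Ada Lovelace", "A" }; // stand-in for the real legacy call
        }
    }

    public class Customer
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    // The clean seam the new code programs against.
    public interface ICustomerGateway
    {
        Customer FindCustomer(string id); // null if not found
    }

    // Decodes the legacy conventions exactly once.
    public class LegacyCustomerGateway : ICustomerGateway
    {
        private readonly LegacyCustomerApi _api = new LegacyCustomerApi();

        public Customer FindCustomer(string id)
        {
            string[] row = _api.FetchCust(id, includeHistory: false);
            if (row == null)
            {
                return null;
            }
            return new Customer { Id = row[0], Name = row[1], IsActive = row[2] == "A" };
        }
    }

The specific classes don't matter; the point is that the translation happens once, behind an interface the new code owns.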

Vicky
+2  A: 

The biggest problem to overcome is lack of will to actually finish the strangling (usually political will from non-technical stakeholders, manifested as lack of budget). If you don't completely kill off the old system, you'll end up in a worse mess because your system now has two ways of doing everything with an awkward interface between the two. Later, another wave of developers will probably decide to strangle what's there, writing yet another strangler application, and again a lack of will might leave the system in an even worse state, with three ways of doing things.

If the project is large and run from multiple regions, then you HAVE to get global consensus on what the final state should look like and how everyone is going to cooperate to get there. While the old app is being strangled, it's vital for remote teams to communicate every day, cooperate on the work where possible (for example by remote pair programming), and resolve any misunderstandings or disagreements as soon as they arise. Otherwise there's a danger that each regional team will decide to write its own strangler application; they will meet somewhere in the middle, battle it out, and leave the system even worse.

Whatever you do, do not do the refactoring/strangling in a different branch from the main stream of development. The merge difficulties will become insurmountable.

I've seen critical systems that have suffered both of these fates and ended up with about four or five "strategic architectural directions" and "future state architectures". One large multi-site project ended up with eight different new persistence mechanisms in its new architecture. Another ended up with two different database schemas, one for the old way of doing things and another for the new; neither schema was ever removed from the system, and there were multiple class hierarchies that mapped to one or even both of them.

Finally, if you're introducing technologies that are new to the team or to the support/maintenance staff (e.g. adding reliable async messaging to what is currently a synchronous three-tier client/server architecture), then you have to ensure there are experienced technical leads on the project who know how to build systems with that technology and how to support them. And those tech leads have to stick with the project for some time after the old app has been fully strangled. Otherwise the architecture will degrade as inexperienced developers modify it in the ways they know rather than in ways that fit the new architecture.

Nat
We decided not to go for this pattern, for now anyway. I've set this answer as the correct one; it's hard to say that the other answers are less correct, but this one was the most thorough.
Halvard
A: 

In my experience the driver for doing this is to add new functionality rather than to retire the original code base. Once the new functionality has been added, the immediate business case for completing the change weakens and momentum is lost. Obviously this doesn't have to happen, and you should plan at the outset to avoid it.

Regards

Howard May
A: 

The old-school name for this is "wrapper". It sounds great; who wants to mess with the old application when you can write something new and clean to isolate it? The problem is that it creates a layer of goo, and it isn't long before somebody decides that the wrapper itself is messy. What's the solution to that? Another wrapper? As I see it, such wrappers and "stranglers" basically end up armor-plating the original application and eventually make your life harder. But people often choose short-term solutions that are suboptimal in the long term. After all, nobody will notice until you're long gone.
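
To make the "layer of goo" concrete, here is a contrived C# sketch (all names invented): notice how the supposedly clean wrapper has to re-encode the old system's quirks, so the mess now lives in two places instead of one.

    // Hypothetical names, for illustration only.
    public class LegacyPricing
    {
        // Old conventions: -1 means "no price", and customers below a certain
        // id cut-off expect the caller to apply their discount.
        public decimal GetPrice(int productId, int customerId)
        {
            return productId == 0 ? -1m : 100m; // stand-in for the real legacy logic
        }
    }

    public class PricingWrapper
    {
        private readonly LegacyPricing _legacy = new LegacyPricing();

        // The wrapper looks cleaner, but it has to know about the -1 convention
        // and the customer-id cut-off, so the legacy quirks are duplicated here.
        public decimal? GetPrice(int productId, int customerId)
        {
            decimal price = _legacy.GetPrice(productId, customerId);
            if (price < 0)
            {
                return null;
            }
            if (customerId < 500000)
            {
                price *= 0.95m;
            }
            return price;
        }
    }

Every new quirk discovered in the old application tends to grow the wrapper the same way, which is the armor-plating effect described above.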

Ira Baxter