Background:
I will be working on tools that will be dependent on a rapidly changing API and rapidly changing data model over which I will have zero control.

Data model and API changes are common. The issue is that my code must continue to work with the current version and all past versions (i.e. 100% backwards compatibility), because all of them will continue to be used internally.

It must also degrade gracefully when it runs into missing or unknown features, etc.

The tools will be written in C# with WinForms and are for testing custom hardware.

Edit:

My goal would be something close to this: adding a feature should only mean creating new classes, and when data model changes come, I create a new set of data model classes that a factory instantiates based on the API version.

The challenge for me is that future features may then depend on specific data models, which may be mixed and matched (until a final combination is reached). How would you handle this gracefully?
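
To make the factory part concrete, here is a rough sketch of what I am imagining (IDataModel, the versioned classes, and DataModelFactory are all made-up names):

    using System;
    using System.Collections.Generic;

    // Rough sketch only -- all type names here are hypothetical.
    public interface IDataModel
    {
        // Operations every data model version must support.
        string ReadRegister(string name);
    }

    public class DataModelV1 : IDataModel
    {
        public string ReadRegister(string name) { /* V1 wire format */ return null; }
    }

    public class DataModelV2 : IDataModel
    {
        public string ReadRegister(string name) { /* V2 wire format */ return null; }
    }

    public static class DataModelFactory
    {
        private static readonly Dictionary<Version, Func<IDataModel>> Creators =
            new Dictionary<Version, Func<IDataModel>>
            {
                { new Version(1, 0), () => new DataModelV1() },
                { new Version(2, 0), () => new DataModelV2() },
            };

        public static IDataModel Create(Version apiVersion)
        {
            Func<IDataModel> create;
            if (Creators.TryGetValue(apiVersion, out create))
                return create();

            // Unknown version: a real implementation might fall back to
            // the nearest earlier version instead of failing outright.
            throw new NotSupportedException("Unsupported API version: " + apiVersion);
        }
    }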

Edit 2:

Of course, once a product has shipped, I would like to reuse the tool and just add code for newer products. Before I started here, every product cycle meant rewriting all the tooling from scratch, something I aim to prevent in the future :)


Question:
What design techniques and patterns would you suggest, or have had success with, for maintaining compatibility with multiple versions of an API/data model?

What pitfalls should I watch out for?

A: 

Write your own wrapper to interface between your code and the stuff you don't control. Then you can write your code against the API the wrapper exposes, and only have to worry about interop in the wrapper itself.
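
For example (a rough sketch; IHardwareApi and its one member are placeholders for whatever your tools actually need):

    // Sketch only: the rest of the tool codes against IHardwareApi;
    // the version-specific interop is confined to the wrapper.
    public interface IHardwareApi
    {
        bool TryReadTemperature(out double celsius);
    }

    public class VendorApiWrapper : IHardwareApi
    {
        private readonly object _rawApi; // whatever the real API hands you

        public VendorApiWrapper(object rawApi)
        {
            _rawApi = rawApi;
        }

        public bool TryReadTemperature(out double celsius)
        {
            // Translate to and from the real API here. If this API
            // version lacks the feature, degrade gracefully instead
            // of throwing.
            celsius = 0;
            return false;
        }
    }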

Anon.
A: 

Your best bet is to have the API you expose require a version number to come in with the request. That way you can select the correct object to create. In the worst case, every change is breaking and you end up with dozens of classes; in the best case, your design can handle it and you only need a separate class every now and then. Inheritance is probably going to be your friend here. The bottom line is that you're basically screwed if you need to maintain 100% backwards compatibility with a rapidly changing API: you'll either end up with one gigantic unmaintainable class, or with several classes that respond correctly to versioning.
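
For example (names are illustrative only):

    // Illustrative sketch: a V2 handler inherits from V1 and overrides
    // only the members whose behaviour actually changed.
    public class ApiHandlerV1
    {
        public virtual string BuildRequest(string command)
        {
            return "v1:" + command;
        }
    }

    public class ApiHandlerV2 : ApiHandlerV1
    {
        public override string BuildRequest(string command)
        {
            return "v2:" + command; // only the changed behaviour is overridden
        }
    }

    public static class ApiHandlers
    {
        public static ApiHandlerV1 ForVersion(int version)
        {
            // Select the correct object based on the version number
            // that came in with the request.
            return version >= 2 ? new ApiHandlerV2() : new ApiHandlerV1();
        }
    }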

Steve
+4  A: 

Practically all of the SOLID principles apply here, particularly the Single Responsibility Principle (SRP) and the Open/Closed Principle (OCP).

The OCP specifically states that a type should be open for extension but closed for modification. That sounds like a good fit in your case, because it is a direct way to ensure backwards compatibility.

The SRP is also very helpful here: if a class does only one thing and that thing becomes obsolete, it doesn't drag a lot of other functionality down with it. It can simply be left to die on its own.
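
As a sketch of what these two principles buy you here (all names hypothetical): adding a feature means adding a class; nothing existing is modified, and each small step class can be retired on its own.

    using System.Collections.Generic;

    // Hypothetical sketch: TestRunner is closed for modification, but
    // the tool is open for extension -- a new feature is a new ITestStep.
    public interface ITestStep
    {
        string Name { get; }
        void Run();
    }

    public class VoltageSweepStep : ITestStep
    {
        public string Name { get { return "Voltage sweep"; } }
        public void Run() { /* one responsibility: run the sweep */ }
    }

    public class TestRunner
    {
        private readonly List<ITestStep> _steps;

        public TestRunner(IEnumerable<ITestStep> steps)
        {
            _steps = new List<ITestStep>(steps);
        }

        public void RunAll()
        {
            foreach (ITestStep step in _steps)
                step.Run();
        }
    }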

On a more practical level, TDD (or just a comprehensive unit test suite) will help protect you against breaking changes.
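
For example, a test that pins down old behaviour (NUnit syntax; the factory types are the hypothetical ones sketched in the question):

    using System;
    using NUnit.Framework;

    // Sketch: pin down version 1 behaviour so later changes
    // cannot silently break it.
    [TestFixture]
    public class DataModelV1Tests
    {
        [Test]
        public void CreateForVersion1_ReturnsV1Model()
        {
            IDataModel model = DataModelFactory.Create(new Version(1, 0));

            Assert.IsInstanceOf<DataModelV1>(model);
        }
    }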

Mark Seemann
+2  A: 

You mentioned that the code is for testing custom hardware. Will your code (i.e. the testing routines) also change? Will your code be testing a circle today and a triangle tomorrow, or the same basic circle that evolves day by day?

If there is a constant point around which things move, then I would start from there and write wrappers for each version of the API and data model that link back to that center, using the techniques suggested in the other answers.

However, if there is no constant point and everything moves, then I would abandon the project! It cannot be done!

Square Rig Master
It is a little of both... there may be severe architectural changes at a few points, and in between those major changes there will be evolutionary changes. All of these will need to be handled correctly.
BioBuckyBall
Of course, at some point we may abandon particular versions.
BioBuckyBall
The trick is to find a fixed anchor point that remains constant. For example, in the Intel architecture the basic 8088 instruction set is the constant and everything else changes. Once the instruction set changes (e.g. a different processor), everything stops working and needs attention. It is simply impossible to scope the requirements unless you can pinpoint and pin down a fixed point. Once you find it, the solution will present itself. In any case it sounds like a great challenge and I wish you luck :)
Square Rig Master
A good point; the spec-level code should be unchanging.
BioBuckyBall
+1  A: 

Does your API / your data model provide you with metadata? If so, it would be a good idea to use it to make your code as independent of API changes as possible. For example, if you have a chance to generate your data model access routines generically from a data model schema, you should do so. Of course, this only makes sense for certain types of operations.
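
For instance, a purely illustrative reflection-based sketch:

    using System.Reflection;

    // Illustrative: read whatever properties the current model actually
    // has, instead of hard-coding one version's shape.
    public static class GenericModelReader
    {
        public static object TryGetValue(object model, string propertyName)
        {
            PropertyInfo property = model.GetType().GetProperty(
                propertyName, BindingFlags.Public | BindingFlags.Instance);

            // Property missing in this model version: degrade gracefully.
            return property == null ? null : property.GetValue(model, null);
        }
    }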

Doc Brown
+3  A: 

One idea that may be useful is the "anti-corruption layer" discussed by Eric Evans in his book Domain-Driven Design. The key motivation for the pattern is to insulate your system from changes (and idiosyncrasies) in another.
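
In sketch form (all types hypothetical):

    // Sketch of an anti-corruption layer: the external DTO never leaks
    // past the translator, so the rest of the tool sees only the stable
    // internal model.
    public class ExternalDeviceDto   // shape dictated by the changing API
    {
        public string Id;
        public int StatusCode;
    }

    public class Device              // your own stable model
    {
        public string Id { get; set; }
        public bool IsOnline { get; set; }
    }

    public static class DeviceTranslator
    {
        public static Device ToInternal(ExternalDeviceDto dto)
        {
            return new Device
            {
                Id = dto.Id,
                IsOnline = dto.StatusCode == 1 // isolate their encoding here
            };
        }
    }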

Bevan
A: 

1) Lots of unit tests.

Whenever you write a piece of code, publish a set of unit tests for it that future versions must pass in order to be checked in.

Make sure that the tests are based on functional requirements, i.e. not on how the function is written, but on what it must do in order not to break other code.

This will help keep people from checking in changes that break other code (see the test sketch at the end of this answer).

2) Require good formal specifications of all APIs and data models. This ensures that they will be designed more carefully, and that changes won't be thrown in willy-nilly, without thought or reason.
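
As a sketch of point 1 (NUnit syntax; the wrapper types are the hypothetical ones from the earlier answer):

    using NUnit.Framework;

    // Sketch: the test states a functional requirement -- missing
    // features must degrade gracefully -- not how any particular
    // version implements it.
    [TestFixture]
    public class GracefulDegradationTests
    {
        [Test]
        public void MissingFeature_ReportsUnavailable_InsteadOfThrowing()
        {
            IHardwareApi api = new VendorApiWrapper(null);

            double celsius;
            Assert.IsFalse(api.TryReadTemperature(out celsius));
        }
    }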

Larry Watanabe