I'm working in some old code that was originally designed to handle two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what the lists are named to how the file type is written in plural lowercase. But this turned out to be insufficient: there were maybe 50 different places across 24 different code files where I had to update hardcoded switch statements that only branched for the original two file types.

Unfortunately there is no consistency in this; there are methods which operate half from the XML file and half from hardcoded values. Some of the files that look like they should operate off the XML file don't, and some where I would expect to have to update hardcode turn out not to need it. So the only way to find the majority of these spots is to run the whole system while only part of it is operational, find the one step that fails (when I'm lucky enough that the error logging actually tells me what is going on), fix it, and then run the whole thing again. This wastes time re-testing the parts of the code that are already confirmed to work, time better spent testing the new parts I have to add on top of it all.

It's a hassle and a half, and with my luck I can expect to have to add yet another new kind of file in the near future.

Are there any solutions out there which can aid in this kind of endeavour? Something into which I can feed some parameters about the current features, which documents what points in the whole code project actually need to be updated, and which gives me something nice to run the next time I need to add a new feature to the code. It needn't even be fully automated; something that helps me navigate straight to the specific points everywhere, and maybe even records what kinds of parameters need to be loaded, would be plenty.

I doubt it matters specifically, but the code consists of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple of big Visual Studio 2008 projects.

A: 

Not exactly what you are describing, but if you can introduce a seam into the code and lay down some interfaces you can break out and mock, a suite of unit/integration tests would go a long way toward helping you modify old code you may not fully understand.
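
For example, a rough sketch of what that could look like in C# (all the names here are made up just to show the shape of it, not code from your project):

using System;

// The hardcoded lookup is hidden behind an interface -- that's the "seam".
// Tests can substitute a fake implementation and never touch the XML file.
public interface IFileTypeRules
{
    string GetListName(string fileType);
}

// Production implementation keeps whatever logic exists today
// (XML lookups, switch statements, and so on).
public class XmlFileTypeRules : IFileTypeRules
{
    public string GetListName(string fileType)
    {
        // ... read the value from the existing XML configuration ...
        throw new NotImplementedException();
    }
}

// Hand-rolled fake for unit tests; a mocking library would work too.
public class FakeFileTypeRules : IFileTypeRules
{
    public string GetListName(string fileType)
    {
        return fileType + " list";
    }
}

Code that used to branch on the file type takes an IFileTypeRules instead, so unit tests can exercise it with the fake while the production code keeps using the XML-backed version.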

Andy_Vulhop
I understand the code better than I understand half of what you're saying. What exactly does "[...]introduce a seam into the code and lay down some interfaces you can break out and mock[...]" mean? I know what the majority of these things are individually, but I have never run into them all crunched into one statement.
ccomet
In order to mock out an object, you would likely need an interface. With all the legacy code I have to deal with at work, those interfaces that break up the code into meaningful segments are missing. The term "seam" is just another word for a barrier between 2 modules in the code that you could define clearly with an interface. I would also recommend Michael Feathers' book Working Effectively With Legacy Code. http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
Andy_Vulhop
A: 

I completely agree with the comment about using Michael Feathers' book to learn how to wedge new tests into legacy code. I'd also strongly recommend Refactoring, by Martin Fowler. What it sounds like you need to do for your code is to apply the "Replace Conditional with Polymorphism" refactoring.

I imagine your code today looks somewhat like this:

// two hardcoded file types, branched on magic numbers
if (filetype == 23)
{
  type23parser.parse(file);
}
else if (filetype == 69)
{
  FileStore filestore = type69reader.read(file);
  File newfile = convertFSto23(filestore);
  type23parser.parse(newfile);
}

What you want to do is to abstract away all the "if (type == foo)" kinds of logic into strategy patterns that are created in a factory.

class FileRules
{
private:
  FileReaderRules* pReader;
  FileParserRules* pParser;
public:
  FileRules(FileReaderRules* reader, FileParserRules* parser)
    : pReader(reader), pParser(parser) {}
  void read(File* inFile) { pReader->read(inFile); }
  void parse(File* inFile) { pParser->parse(inFile); }
};

class FileRulesFactory
{
public:
  static FileRules* GetRules(int inputFiletype, int parserType)
  {
    FileReaderRules* pReader = NULL;
    switch (inputFiletype)
    {
    case 23:
      pReader = new ASCIIReader;
      break;
    case 69:
      pReader = new EBCDICReader;
      break;
    }
    FileParserRules* pParser = NULL;
    switch (parserType)
    ... etc...

Then your main line of code looks like this:

  FileRules* rules = FileRulesFactory::GetRules(filetype, parsertype);
  rules->read(file);
  rules->parse(file);

Pull off this refactoring, and adding a new set of file types, parsers, readers, etc., becomes as simple as writing the one class specific to your new type and registering it in the factory.

Of course, go read the book. I vastly oversimplified it here, and probably got stuff wrong, but you should get the general idea of how to approach it from this. I can also recommend another book, "Head First Design Patterns", which has a great section on the Factory patterns (if you like those "Head First" kinds of books.)
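
Since your codebase is C#, here's a rough sketch of the same shape in C# (hypothetical names, not from your project); the factory's table is the one place that grows when a new file type shows up, and it could just as well be populated from your existing XML file:

using System;
using System.Collections.Generic;

// Each file type gets its own handler (the Strategy); the factory is the
// only spot that knows which concrete class goes with which key.
public interface IFileTypeHandler
{
    void Read(string path);
    void Parse(string path);
}

public class TextFileHandler : IFileTypeHandler
{
    public void Read(string path) { /* ... */ }
    public void Parse(string path) { /* ... */ }
}

public class ImageFileHandler : IFileTypeHandler
{
    public void Read(string path) { /* ... */ }
    public void Parse(string path) { /* ... */ }
}

public static class FileTypeHandlerFactory
{
    private static readonly Dictionary<string, Func<IFileTypeHandler>> handlers =
        new Dictionary<string, Func<IFileTypeHandler>>
        {
            { "text",  () => new TextFileHandler() },
            { "image", () => new ImageFileHandler() }
        };

    public static IFileTypeHandler GetHandler(string fileType)
    {
        return handlers[fileType]();
    }
}

Calling code then stops caring which type it has:

  IFileTypeHandler handler = FileTypeHandlerFactory.GetHandler(fileType);
  handler.Read(path);
  handler.Parse(path);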

John Deters
In honesty, the Factory pattern is exactly what the XML file is for (presumably because the original programmer eventually found it annoying to have to branch everything when implementing the second file). It's not a stretch to say that I can expand the XML file to include what is currently hardcoded - I was mostly trying to avoid doing so. I figured that a streamlined system for specifying points in the code that need to be updated in concert, if one were already available, would be faster to implement than [cont...]
ccomet
[cont...] both rewriting a hefty portion of the code and subsequently expanding the Factory itself, both in the data actually stored in the XML file and in the console application used to add new entries to it. Certainly, adapting everything to use the Factory would be a smarter, much more effective method in terms of use and reuse. In the end, I guess this just means I should sacrifice laziness in implementation, convert everything to the Factory, and make adding new features that much easier.
ccomet
As far as what the code looks like today, by the way, it's more like this. In A.cs it'd be `switch(str) case "text": return "text advance"; case "image": return "image return";`, and then in B.cs you could find `switch(str) case "text advance": return "client text package"; case "image return": return "client image package";` at one point and `switch(str) case "text": return "routing text"; case "image": return "routing image";` at another. It's basically a lot of context-specific string key-value pairs. Fetching from the XML file requires specifying which kind of value the input string is (list name, file name, etc.).
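
In other words, pulling those context-specific mappings out of the scattered switch statements and into the XML-driven lookup would end up as something like this (a rough sketch only; the XML layout and all names are invented):

using System.Collections.Generic;
using System.Xml;

// One table, loaded from the XML file, keyed by file type and then by
// context; each of the old switch statements becomes a single Get() call.
public class FileTypeStrings
{
    // e.g. values["text"]["routing"] == "routing text"
    private readonly Dictionary<string, Dictionary<string, string>> values =
        new Dictionary<string, Dictionary<string, string>>();

    public FileTypeStrings(string xmlPath)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(xmlPath);
        foreach (XmlNode typeNode in doc.SelectNodes("//fileType"))
        {
            Dictionary<string, string> byContext = new Dictionary<string, string>();
            foreach (XmlNode entry in typeNode.SelectNodes("entry"))
            {
                byContext[entry.Attributes["context"].Value] = entry.InnerText;
            }
            values[typeNode.Attributes["name"].Value] = byContext;
        }
    }

    public string Get(string fileType, string context)
    {
        return values[fileType][context];
    }
}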
ccomet