I'm working in a company which produces real-time C programs for home-made hardware; the program is 15 years old and still needs to be maintained.

Even though we do not formally verify the code, we have a strict no-refactoring policy. Even if a piece of code is hard to grasp or has many clear code smells, you may make only minor changes to it. No refactoring of any code whatsoever.

The reason behind it is that the fewer changes you make, the lower the probability of a new error occurring. Since it is a real-time system, we cannot tolerate any bugs, and we must reduce the number and probability of bugs at any cost.

Is that a common guideline in real-time shops which produce critical hardware? I had the impression that a strong testing capability (even without automatic verification) would allow you to refactor away code smells even in critical real-time code, but I might be mistaken.

+2  A: 

No amount of automatic verification can outsmart the creativity of the users :-), so "the less changes you do, the less probability of new error to occur" is very true.

If the feature velocity for your project is not too high (after 15 years?), then this healthy paranoia you are talking about looks like a very reasonable approach I'd take. Refactoring (at least in my shop) is generally treated the same way as a new feature, except that the users are already used to it working, so one needs to be extra careful there. The extra benefit that this draconian policy brings is that it forces the architect to think more thoroughly about new functionality, as it's very hard to change it later.

If there is a need for higher feature velocity, the tried and tested "stable" and "testing" branches would probably be the way to go. Though, of course, it depends on how "critical" the application is.

Andrew Y
+3  A: 

I can see the logic behind it: the code's 15 years old, it doesn't make us any money, we aren't going to invest anything in it, yet we don't want to risk breaking it. However, you also refer to it as critical. I find the concept of critical code without tests to be curious. It's not as if testing didn't exist 15 years ago. I would have expected some form of tests to be available to verify that your critical code does what it is supposed to do.

If the code is not critical (or, at least, not critical to the company's success) and not under active development, I could live with the "no refactoring" rule. If I found myself in the position where I needed to make significant changes to correct a problem -- hopefully this wouldn't happen as all the glaring errors would have been found long ago -- then I certainly would wrap the existing code in tests to verify that I don't break anything when I do make a change. I probably wouldn't make an investment in writing tests for this code unless I was going to make changes to it.

Update

Based on your comment, I would certainly make writing tests a priority for any change going forward. At some point you'll reach a critical mass of tests that may allow you to change the "no refactor" policy without losing confidence in the code. You may be interested in reading "Working Effectively with Legacy Code" by Michael Feathers (Amazon link). It should have some ideas on how to bring your code into a more maintainable state.

tvanfosson
Tests were smoke tests only; not much thought was given to testing. The code *is* under active development and *is* making money for the company. The need for refactoring arises from the fact that there are more changes in the stream.
Elazar Leibovich
Updated my answer to address your comment -- then SO went down.
tvanfosson
+1  A: 

In my experience it is a fairly widespread guideline, but not a universal one.

For instance, from my own experience, I've worked on real-time systems in a financial firm. The systems need to be reliable; otherwise the business people lose money when they are unable to seize market opportunities. The systems may be 10 or 15 years old, and many different people may have modified them without writing any documentation... While the code smells, it does what it does, and trying to refactor it could break too many things. Therefore the rule you are talking about applies here.

But now let's take the airplane/defense industry example. People there write huge real-time systems which need to be much more reliable(!), as ordinary people's lives are at stake(!!). Those systems must not have bugs of any sort: an airplane must not crash because of a null-pointer access in such a system, for instance... Large amounts of money are spent to reach that goal. While I've never worked in such a firm, some friends of mine do, and it seems that any modification to the code is finely documented and reviewed by multiple different programmers; then the whole system passes through a test bench to ensure all is OK. It takes time and money to do that, but it is necessary, and of course the code is kept as clear as possible; coders must follow very strict coding rules throughout the project.

In conclusion, the rule you are talking about really depends... on the specification, on how much clients are ready to spend to get their system working, and on the environment. While the airplane industry doesn't have a choice, the banking industry does.

When a bug happens they lose a lot of money. But actually, it is only a fraction of their gains (well, they really earn a lot of money). I think that someone in investment banking, someday, must have compared the two ways of maintaining code. And I guess the result is this: trying to get the best possible code for a system would cost too much (and take much more time to develop) than the usual way it is done. Add up the cost of bugs and the cost of the developer team at a bank, and it will probably always beat what it would cost to work the way the airplane/defense industries do.

And I bet the same way of thinking could apply in your firm, since your hardware is not life-critical. Isn't it?

yves Baumes
It is not exactly life-critical, but it's very close. It's a very critical system that reports information to the driver; wrong information could theoretically cause an accident. A major bug would kill the company's reliability and reputation, so bugs must be avoided at all costs. It's better not to ship a product than to ship it with a critical bug.
Elazar Leibovich
I think your company should therefore consider a stronger coding policy(?) It's a matter of responsibility. Don't you think?
yves Baumes
This code was already written in a bad and untested form, and contains bugs. The product is used in the market, and apparently no bug has been that critical so far. This is the given state of affairs: bad and untested code was already delivered. I understand that the company can't recall all the hardware it sold because it's not tested enough; the question is what to do next. I'm not sure a code freeze is the best (or most responsible) solution.
Elazar Leibovich