views: 429
answers: 4

If you are using agile, the idea is to do incremental refactoring continuously and never build up large technical debt. That being said, if an agile team takes over software that carries a significant amount of technical debt, you have to fit the cleanup in somewhere.

Do you create developer user stories? For example:

  • As a developer, I want 50% test coverage over the business logic module so that I have confidence in delivery.
  • As a developer, I want the application to support dependency injection so that we can swap out concrete implementations and be more agile in the future.

Or is there another best practice for paying down this technical debt?

+4  A: 

Is your application internal or do you have an external customer? If a client is paying for your work on and support of the application, it may be difficult to get them to sign off on cards like the ones you suggest.

Also, with your second card idea, it might be hard to say what "Done" is.

A specific approach to your issue could be Defect Driven Testing - the idea is that when you get a bug report and estimate the card that says to fix it, see what test(s) you can add in at the same time that are similar but increase coverage.
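A minimal sketch of the idea, using a hypothetical `parse_price` function (not from the question) with a reported crash on thousands separators: the card that fixes the bug also adds the regression test plus a couple of similar tests that broaden coverage.

```python
# Hypothetical example: a bug report says parse_price("1,200") crashes.
# While estimating the fix card, we add the regression test for the bug
# plus a few similar tests that increase coverage of the same function.

def parse_price(text):
    """Parse a price string like '1,200.50' into a float."""
    cleaned = text.strip().replace(",", "")  # the fix: strip thousands separators
    return float(cleaned)

# Regression test for the reported defect.
assert parse_price("1,200") == 1200.0

# Similar tests added "for free" while we're in here.
assert parse_price("  99.95 ") == 99.95
assert parse_price("0") == 0.0
```

Over time, every bug fix leaves the touched code a little better covered than it found it.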

And you don't specifically ask for technical details about how to get your project under test, but this book is very helpful once you start actually doing it: Working Effectively with Legacy Code

JeffH
Internal. I agree about getting customers to sign off, but you have to make time in the sprint, as these can be large tasks and you want to make them very visible.
ooo
Also see my edit to my answer. You could choose to add tests by designing new ones into bugfix card estimations.
JeffH
A: 

I work in an Agile environment, but one where the current codebase had existed for several years before agile techniques were adopted. This means trying to work in an agile way around code that was not written with automated regression testing in mind.

Because the technical debt affects how quickly we can deliver new features, we record how much time was added due to working with the legacy code. This data allows us to make a case for time dedicated to paying off technical debt. So when the customer (be it manager, or CTO or whoever) thinks that estimates are too high you have data which can reinforce your position.
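A sketch of what recording that data might look like (the story names and hours here are invented): logging the legacy-code overhead per story lets you quote the share of delivery time lost to the old code when defending estimates.

```python
# Hypothetical log of story estimates:
# (story, base_estimate_hours, extra_hours_due_to_legacy_code)
stories = [
    ("Add export button", 6, 4),
    ("New pricing rule", 8, 12),
    ("Fix login timeout", 3, 2),
]

base = sum(b for _, b, _ in stories)      # hours the work would take in clean code
overhead = sum(o for _, _, o in stories)  # hours added by working around legacy code

# Share of total delivery time consumed by legacy drag.
legacy_share = overhead / (base + overhead)
print(f"{legacy_share:.0%} of delivery time spent on legacy-code overhead")
```

Even a crude running total like this turns "the old code slows us down" into a number you can put in front of a customer or CTO.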

Of course, occasionally your estimates run over because of unexpected quirks in the legacy code that forced you to pay off technical debt. We have found that as long as the extra time can be explained and accounted for, and a case can be made for its benefits, it is generally accepted pretty well.

Of course, YMMV dependent on customer or other factors, but having statistics which represent the effect of technical debt going forward is very useful.

Grundlefleck
A: 

I think it's a good idea to ask how much longer the customer(s) expect to be using the application. If the application's lifespan is limited (say, three years or less) then it may not make sense to put much effort into refactoring. If the lifespan is expected (or hoped) to be longer, then the payback for refactoring becomes that much more attractive.

You might also want to try creating a business case for the investment in refactoring. Show specific examples of the kinds of improvements that you would want to make. Make an honest assessment of the costs, risks, and expected payback. Try to find a specific refactoring that you could implement independently of the others, and lobby for approval to make that change as a test run of the refactoring process.

Note that, when you talk about payback, you may be expected to provide specific numbers. It's not enough to say "it will be much easier to fix bugs." Instead, you should be prepared to say something like "We'll see a minimum 30% improvement in turnaround time for bug fixes", or "We will experience 40% fewer regressions." You should also be prepared to negotiate with management and/or customers so that you all agree that you have measurements that are meaningful to them, and to provide measurements from before and after the refactoring.

Dan Breslau
How would you go about getting that type of metric?
ooo
To measure defect fix rates, count the number of fixes made over a week, month, or quarter. To measure regression rates, track whether newly reported defects were caused by attempts to fix earlier defects. These are very crude measures, but they're better than having no measures at all.
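A rough sketch of computing those two measures from a hypothetical defect log (the dates and regression flags below are invented):

```python
from datetime import date

# Hypothetical defect records: (reported, fixed, caused_by_earlier_fix)
defects = [
    (date(2010, 1, 4), date(2010, 1, 6), False),
    (date(2010, 1, 11), date(2010, 1, 20), False),
    (date(2010, 1, 12), date(2010, 1, 13), True),  # regression from a prior fix
    (date(2010, 1, 25), date(2010, 2, 2), False),
]

# Average turnaround time in days from report to fix.
turnaround = sum((fixed - reported).days for reported, fixed, _ in defects) / len(defects)

# Regression rate: share of defects caused by earlier fixes.
regression_rate = sum(1 for *_, regressed in defects if regressed) / len(defects)

print(f"average turnaround: {turnaround} days")
print(f"regression rate: {regression_rate:.0%}")
```

Measuring the same way before and after the refactoring gives the before/after comparison management asked for.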
Dan Breslau
+2  A: 

There should be a distinction between an engineering practice and technical debt. I view test driven development and automated testing as practices.

We have taken over code assets built by waterfall teams, and those assets had no automated unit, functional, or performance tests. When we assumed responsibility for the software asset, we trained the product owner in Agile and told them which practices we would use.

Once we began using the practices, we began to identify technical debt. As technical debt was identified, technical story cards were written and placed on the product backlog by the product owner. The developers and testers estimated all work using the XP engineering practices (TDD, automated testing, pair programming, etc.). Those practices exposed fragility in the code through TDD and automated functional and performance tests. In particular, a significant performance issue was identified via automated performance testing and profiling. The debt was so large that we estimated the fix would take six iterations. We informed the product owner that if new features were developed, the user base would not be able to use them given the application's poor performance. Because we had to scale the app from a few hundred users to tens of thousands, the product owner prioritized the performance technical debt very high, and we completed the technical cards in the iterations estimated.

Note: technical debt that can be fixed via refactoring within the estimate of a story card does not require a technical story card; larger technical debt does. For debt that requires a technical card, identify the business impact and ask the product owner to prioritize the card, then work it.

Don't create technical-debt cards for engineering practices. Estimate all work knowing that the engineering practices are part of the estimate. Do not create a card to retrofit the application with automated unit, functional, and performance tests; instead, include that work in the cards you are estimating, and add automated tests to the code you touch via the cards being worked. This lets the app improve over time without bringing progress to a halt. Stopping all business cards should be reserved for the most drastic situations, such as the application's inability to perform or scale.
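Adding tests only to the code you touch often starts with a characterization test, a technique from Working Effectively with Legacy Code (the book recommended in the first answer): before refactoring under a card, pin down what the code does today. A minimal sketch with an invented legacy function:

```python
# Hypothetical legacy function we must touch to deliver a story card.
def legacy_discount(total, code):
    if code == "VIP":
        return total * 0.8
    if code == "EMP":
        return total * 0.5
    return total

# Characterization tests: written BEFORE the change, they record the
# current behavior (even where it is surprising), so the refactoring done
# under the card can be verified not to break anything that works today.
assert legacy_discount(100, "VIP") == 80.0
assert legacy_discount(100, "EMP") == 50.0
assert legacy_discount(100, "???") == 100  # unknown codes pass through unchanged
```

With the behavior pinned, the refactoring needed for the story proceeds inside the card's estimate, and the tests stay behind as the start of a regression suite.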

If you inherit a code base without automated unit, functional, and performance tests, inform the business partner of the sad state of affairs and let them know how you will estimate the work. Record technical debt as it is uncovered through the engineering practices. Finally, inform the product owner that the team's velocity will improve as more and more of the code base is covered by automated unit, functional, and performance tests.

Cam Wolff