I have seen this theme come up more than once now. I am hoping that people here who are in a similar situation, or have been in the past, can offer some insightful advice. It would also be useful if you shared your own experiences.
So there is this fairly large Windows Forms application that has been developed over the years. Although the development team has tried to separate the business logic from the UI, it hasn't quite happened, and there are numerous areas of the code where business logic is hard-wired to the UI. In fact, remnants of previous attempts to adopt an MVP architecture can be seen in a lot of places. There are unit tests too, but code coverage is relatively low. There are some hot spots, however: areas that everyone knows have become more complicated than they need to be.
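For context, here is a minimal sketch of the kind of separation those MVP attempts were aiming for. The names (`IOrderView`, `OrderPresenter`) are purely illustrative, not from the actual codebase:

```csharp
// Illustrative sketch only: IOrderView and OrderPresenter are made-up names,
// not types from our application.
using System;

// The form exposes only what the presenter needs, nothing else.
public interface IOrderView
{
    string CustomerId { get; }
    void ShowTotal(decimal total);
    void ShowError(string message);
}

// All business logic lives here, so it can be unit-tested with a fake view
// and a stubbed balance lookup -- no WinForms types involved.
public class OrderPresenter
{
    private readonly IOrderView _view;
    private readonly Func<string, decimal> _getOutstandingBalance;

    public OrderPresenter(IOrderView view, Func<string, decimal> getOutstandingBalance)
    {
        _view = view;
        _getOutstandingBalance = getOutstandingBalance;
    }

    // Called by the form's button-click handler, which stays one line long.
    public void OnCalculateClicked()
    {
        if (string.IsNullOrEmpty(_view.CustomerId))
        {
            _view.ShowError("Customer is required.");
            return;
        }
        _view.ShowTotal(_getOutstandingBalance(_view.CustomerId));
    }
}
```

In the problem areas, the equivalent logic sits directly inside the forms' event handlers, which is why it can only be exercised through the UI.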
A lot of the time, bugs that could have been caught earlier are only found once testers grab their torch lights and really start hunting, which is unfortunately too late, expensive, and risky. Engineering, testers, and PMs all realize that something needs to be done.
What would be the most practical way to address, or at least improve, this situation? Since it is going to be a long effort, what would be the best way to measure progress towards the goal? And how would the goal be defined in objective terms to begin with?