views: 213 | answers: 8

As noted in another thread, "In most businesses, code quality is defined in dollars." So my company has an opportunity to acquire a large-ish C code base. Obviously, if the code quality is good, the code base is worth more than if it's poor. That is, if we can readily read, understand, and update the code, it's worth more to us than if it's a spaghetti-coded mess. Without being able to see the code ahead of time, we'd like to set some objective measure as an acceptance criterion like "If the XXX measure is below (some threshold) the price will be discounted YY%." What criteria can we or should we measure and what tool can we use to measure it?

+1  A: 

Sounds like you are talking about software metrics? Which metric to use is very specific to your software.

Cyclomatic complexity is the first metric you should look at.
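
To make that concrete, here is a small made-up C function annotated with the usual counting rule (complexity = 1 + the number of decision points such as if, loop conditions, case labels, && and ||). The function and its name are hypothetical, purely for illustration:

    #include <stddef.h>

    /* Hypothetical example.  One common counting rule for cyclomatic
       complexity is 1 + the number of decision points.  Decision points
       below: if, ||, for, case 0, case 1  ->  complexity of about 6.   */
    int classify(const int *values, int n)
    {
        int score = 0;

        if (values == NULL || n <= 0)   /* if + ||  : 2 decision points */
            return -1;

        for (int i = 0; i < n; i++) {   /* for      : 1 decision point  */
            switch (values[i] % 3) {
            case 0:                     /* case 0   : 1 decision point  */
                score += 2;
                break;
            case 1:                     /* case 1   : 1 decision point  */
                score += 1;
                break;
            default:
                break;
            }
        }
        return score;
    }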

Tobias P.
CC has come up in other discussions. Do you have a favorite tool to measure it?
Chris Nelson
A: 

Measuring code value by quantity is like measuring aircraft value by weight. In fact, less code doing the same task should be worth more.
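
A contrived sketch of the point, with both functions hypothetical: they do exactly the same job, so the longer one adds reading and maintenance cost without adding value.

    #include <string.h>

    /* Contrived illustration: both functions zero a buffer.  The hand-rolled
       loop is more code for exactly the same task.                          */
    void zero_buffer_verbose(char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++) {
            buf[i] = 0;
        }
    }

    void zero_buffer_concise(char *buf, size_t len)
    {
        memset(buf, 0, len);
    }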

Pavel Radzivilovsky
Generally true, but Perl code serves as a counterexample. Compact code can be worse than sparse code.
mcandre
+7  A: 

Impossible task, IMO. It could be the best code in the world, but if it doesn't DO something valuable, it's not worth anything. You have to figure out how much money you can make with this code, and weigh it against how much it will cost you to maintain/improve it. In some cases, the other assets (user base, IP, etc.) may be so valuable that you don't care if you have to re-write the whole thing.

Additional: Lots of good ideas here, for what to look for in terms of quality/robustness. Along the original discussion of dollar value, I have some more thoughts.... As you review the code, look for (and maybe scan for) trouble spots such as repetitive code that should have been re-used. This is easy to explain to your boss (and the lawyers, and the seller) as something that raises your maintenance costs. Also look for liability issues, such as the use of open-source libraries without proper adherence to the license. Or reliance on 3rd-party libraries - are the licenses transferred? Are there enough licenses for your new dev team? Are the libraries current, or are they going to be a barrier to upgrading your environment?

Lastly, toss this idea out there: instead of a 100% buy-out, the principals should discuss the idea of a revenue sharing agreement that keeps them vested in your success for several years, then they're gone. ex: instead of paying $1 million for the whole thing, pay $750K and split the revenue (net after costs) on a decreasing scale. First year, 50/50. Year two, 75/25. Year 3, 90/10. Year 4, 100/0. This way, you have lower upfront costs, and they have a good reason to answer your phone calls. If their software is as valuable as they say, they should have no trouble with this. If they want all the $$$ upfront, then they have no faith in you, and their software.
I have done this successfully, although with a much lower $$$ value... And I still get along with the guy.

Chris Thornton
Agreed, business valuation is about more than just the quality of the code they produced - the product itself and user base along with assets could be far more important. Possibly even the staff are more valuable.
Reddog
@Reddog - staff - good one! There are many acquisitions that are worthless without the lead developer (often president/founder/soon-to-be-rich-guy).
Chris Thornton
Chris Nelson
Seems like the problem here is that you want to buy the code without seeing it. I wouldn't go into this venture without actually seeing the code. Of course a non-disclosure agreement would be necessary, but I would still want to see the code. Maybe the code is so bad that it's entirely worthless. If you plan to do extra development on top of the code base, it seems crazy to pay for the code without seeing it. It's like buying a car without looking under the hood, or even without taking it for a test drive.
Kibbee
+2  A: 

Try to work out how long bugs would take to fix, categorized as easy, medium, hard, whatever. You can convert those man-hours into dollars and go that route.

Then you'd be able to say defect-free code is worth 100% of whatever, and then a defect count of 50 is worth 75% of that ideal, etc.
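
A back-of-the-envelope sketch of that conversion; every number here (hourly rate, fix times, bug counts, asking price) is an assumption you would replace with your own figures:

    #include <stdio.h>

    /* Hypothetical back-of-the-envelope: turn an estimated bug backlog
       into a dollar discount off the asking price.                     */
    int main(void)
    {
        const double hourly_rate  = 100.0;  /* assumed cost of a developer hour      */
        const double easy_hours   = 2.0;    /* assumed average fix time per category */
        const double medium_hours = 8.0;
        const double hard_hours   = 40.0;

        const int easy_bugs   = 30;         /* counts taken from the seller's tracker */
        const int medium_bugs = 15;
        const int hard_bugs   = 5;

        double repair_cost = hourly_rate *
            (easy_bugs * easy_hours + medium_bugs * medium_hours + hard_bugs * hard_hours);

        const double asking_price = 1000000.0;
        printf("Estimated repair cost: $%.0f\n", repair_cost);
        printf("Discounted offer:      $%.0f\n", asking_price - repair_cost);
        return 0;
    }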

You can figure in support costs if you'd like.

Edit:
of course, if you can't see the code beforehand, well, that's trickier. Maybe you can get a bug count, sorted by bugs, features, enhancements, etc.

Nathan DeWitt
+3  A: 

Ask how many test cases exist. Self-testing code is worth much more because you can immediately know which portions of the code work.

Look for comments. If there are only a few lines of comments in the entire library, it may be hard to understand the codebase.
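
As a minimal sketch of what "self-testing" can mean in a C code base (the function under test and the test values are hypothetical), even plain assert-based tests tell you which parts are known to work:

    #include <assert.h>
    #include <stddef.h>

    /* Stand-in for whatever function the code base actually ships. */
    static size_t count_char(const char *s, char c)
    {
        size_t n = 0;
        for (; *s != '\0'; s++)
            if (*s == c)
                n++;
        return n;
    }

    /* Minimal self-test: if this runs without tripping an assert,
       you know count_char works for at least these cases.          */
    int main(void)
    {
        assert(count_char("banana", 'a') == 3);
        assert(count_char("banana", 'z') == 0);
        assert(count_char("", 'a') == 0);
        return 0;
    }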

mcandre
+1 for the test cases ... -1 for measuring by comments. The fewer the comments, the fewer lies you're told, in my opinion.
Filburt
@Filburt +1 for statement about relationship of comments to lies.
Reddog
@Reddog I've caught myself lying to myself too often already ...
Filburt
By that logic, less code = less lies. Therefore, pay a premium for 0 bytes of code.
mcandre
@mcandre, if you can get 0 bytes of code to do something useful, it would definitely be worth a great deal.
Mark Ransom
Haha, I don't agree 100% with "the fewer comments, the fewer lies you're told," but I'm going to have to use that one sometime just the same.
Paul
Again, by this logic, the fewer books you read, the fewer lies you read. The fewer Stack Overflow answers you read, the fewer lies you read. Yet lies are rare in technical documentation. If you're that paranoid, install Gentoo and build everything yourself.
mcandre
@mcandre Simply claiming everything written down is lies doesn't cut it. Code cannot lie. It may hide facts in a huge hairy ball, but it cannot lie.
Filburt
@Filburt - comments are good, and are often an indication of the origin of the software. i.e. did they write it themselves, buy it, or steal it? Comments can give clues. As can SCM history.
Chris Thornton
+1  A: 

If you define quality as the ability to read and understand the code, things you might look at as indicators are comment quality (informative, per block, and updated with the code), documentation of classes and methods, operations per line (fewer is easier to read), test cases, and standard practices.
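
To make the "operations per line" point concrete, here is a contrived comparison: the same calculation written densely, then spread out with named, commented intermediates (all names are hypothetical):

    /* Dense: several operations jammed onto one line, harder to review. */
    int total_price_dense(int qty, int unit, int tax_pct, int discount)
    {
        return (qty * unit - discount) + ((qty * unit - discount) * tax_pct) / 100;
    }

    /* Spread out: one idea per line, intermediate values named and commented. */
    int total_price_readable(int qty, int unit, int tax_pct, int discount)
    {
        int subtotal = qty * unit - discount;       /* price before tax        */
        int tax      = (subtotal * tax_pct) / 100;  /* integer percent of that */
        return subtotal + tax;
    }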

Not sure what dollar amount you would attach, but I would think it should be whatever you estimate it will cost to refactor the code up to the quality you need.

Can you ask for a random section of code from a few classes? Also, can they tell you how many people have worked on the code base?

Eli
A: 

Seems the problem here is that it allows too much tuning of the input to get the desired output. From what I gather from your question, it seems like you want to hire a contractor to produce some code. If the code doesn't rate high enough on some metric, the contractor gets less money. If it does rate high enough, they get the full sum.

If the metric and method of calculation is known ahead of time to the contractor, they will simply optimize their code to ensure that it gets a high enough score to get the full payment amount. I don't think any contractor would be stupid enough to enter into a contract where the algorithm for deciding this was unknown ahead of time, as it seems like you could just pull the rug out from under them when the project is delivered, and they would have no recourse (not that I'm suggesting that's your plan).

It would be like if they wanted you to pay by line of code. They would simply write as many lines as possible to inflate the amount you are paying them. In the same way, they could tune their code to satisfy whatever metric you put on it.
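
For example, a lines-of-code metric is trivially gamed: these two hypothetical C functions behave identically, but the padded one would "earn" several times as much under a pay-per-line contract (a contrived sketch, nothing from the question itself):

    /* Honest version: one line of real logic. */
    int clamp_to_max(int value, int max)
    {
        return (value > max) ? max : value;
    }

    /* Padded version: identical behaviour, several times the line count. */
    int clamp_to_max_padded(int value, int max)
    {
        int result;

        result = value;
        if (result > max)
        {
            result = max;
        }

        return result;
    }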

Kibbee
No, no contractor. The code exists and functions. The question is how hard it's going to be for us to work with it.
Chris Nelson
A: 

There is only one relevant criterion: how much money can you make with the software (while spending as little as possible).

Everything else is derived from that. How long you can keep making money and how much you have to spend depend on how stable the code is and how easy it is to make changes.

Stephan Eggermont