views:

318

answers:

7

Coming from an IT background, I've been involved with software projects, but I'm not a programmer. One of my biggest challenges is that, because I have a lot of experience in IT, people often turn to me to manage projects that include software development. The projects are usually outsourced and there isn't a budget for a full-time architect or PM, which leaves me in a position to evaluate the work being performed.

While I've managed to get through this in the past, I'm (with good reason) uneasy about accepting these responsibilities.

My question is: from the perspective of someone technically experienced but not in programming, how can I evaluate whether code is written well, besides just determining whether it works or not? Are there methodologies, tips, tricks of the trade, flags, signs, anything that would say - hey, this is junk, or hey, this is pretty damn good?

+1  A: 

how can I evaluate whether code is written well

There are various ways/metrics to define 'well' or 'good', for example:

  • Delivered on time
  • Delivered quickly
  • No bugs after delivery
  • Easy to install
  • Well documented
  • Runs quickly
  • Uses cheap hardware
  • Uses cheap software
  • Didn't cost much to write
  • Easy to administer
  • Easy to use
  • Easy to alter (i.e. add new features)
  • Easy to port to new hardware
  • ...etc...

Of these, programmers tend to value "easy to alter", because their job is to alter existing software.

ChrisW
Those are all good points... many of which I already use because I have no means of evaluating the code directly. Is there a good way to evaluate the code directly? Or even more important, if I'm evaluating a developer in the hiring process, what should I be looking for to determine their proficiency?
Krevin
Why are you trying to "evaluate the code": what would you do with that evaluation if you had it? Are you trying to say "this is/isn't good"? To say what needs to be improved? To decide whether to pay for what's been received? To write the specifications for the software, which will be used as input/requirements by the development team and from which they'll develop the software? To choose among competing teams? To assess whether a project is on schedule? ...?
ChrisW
@chris - well, yes to most of those questions. But mostly for hiring purposes or project purposes. I need to determine if someone is competent enough to be given a project, and then I need to determine if they've executed the project well. I can easily determine if it's to spec on a superficial level, in that it does what it's supposed to, but the question is how to determine whether it was designed well. I think you made an excellent point in valuing whether it's easy to alter... but what do you do before the project starts?
Krevin
"I need to determine if someone is competent to give them a project" -- If you're a programmer then IMO you might judge in an interview whether somebody is clearly incompetent. If they pass that test, then you might hire them on probation; and if they pass *that* test (by delivering) then you hire them again. If you hire someone new for each project, that's a bit of a gamble: all the non-programmers that I know (i.e. my employers) worked by finding programmers (employees and/or contractors), having probationary periods for new ones, re-hiring successful ones ...
ChrisW
... and using existing programmers whom they trusted to help them with their hiring of new programmers.
ChrisW
+7  A: 

Great question. Should get some good responses.

  1. Code cleanliness (indented well, file organization, folder structure)
  2. Well commented (not just inline comments, but variables that say what they are, functions that say what they do, etc.)
  3. Small understandable functions/methods (no crazy 300 line methods that do all sorts of things with nested if logic all over the place)
  4. Follows SOLID principles
  5. Is the amount of unit test code similar in size and quality to the code base of the project?
  6. Is the interface code separate from the business logic code which in turn should be separate from the infrastructure access code (email, database, web services, file system, etc.)
  7. What does a code analysis tool make of the code (NDepend, NDoc, NCover, etc.)?

There is a lot more to this... but this gets you started. A rough illustration of the first few points follows.
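To make points 1-3 concrete, here's a minimal sketch (Python chosen just for illustration, since the question names no language; the order/discount domain is hypothetical) contrasting a do-everything method with small, well-named functions:

    # Hard to evaluate: one function mixing validation, pricing, and output.
    def process(o):
        if o is not None:
            if o["items"]:
                t = 0
                for i in o["items"]:
                    t += i["price"] * i["qty"]
                if o.get("vip"):
                    t = t * 0.9
                print("total:", t)

    # Easier to evaluate: each function has one job and a descriptive name.
    def order_total(items) -> float:
        """Sum price * quantity over all line items."""
        return sum(item["price"] * item["qty"] for item in items)

    def apply_vip_discount(total: float, is_vip: bool) -> float:
        """VIP customers get 10% off (hypothetical business rule)."""
        return total * 0.9 if is_vip else total

Even a non-programmer can skim the second version and see what it is meant to do; that readability is exactly the signal points 1-3 are getting at.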

Andrew Siemer
heh +1 for the compliment, and the info of course... I hope we can get some good feedback here. I think this is a huge question in the industry between us IT guys and you Programmer guys... if we could just bridge the gap a bit better, we'd all be better off.
Krevin
+1 Especially number 4, "Follows SOLID principles". As a good test, add a certain feature to the application or revise an existing feature. If the code follows SOLID principles, then the changes made will not have a huge impact on the application.
hadi teo
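To illustrate the comment above, here's a minimal sketch (Python, with a hypothetical shapes domain) of the open/closed idea behind SOLID: adding a feature means adding a new class, not editing existing logic:

    from abc import ABC, abstractmethod
    import math

    class Shape(ABC):
        @abstractmethod
        def area(self) -> float: ...

    class Circle(Shape):
        def __init__(self, radius: float):
            self.radius = radius
        def area(self) -> float:
            return math.pi * self.radius ** 2

    class Rectangle(Shape):
        def __init__(self, width: float, height: float):
            self.width, self.height = width, height
        def area(self) -> float:
            return self.width * self.height

    # Adding a Triangle class later requires no change to this function.
    def total_area(shapes: list[Shape]) -> float:
        return sum(s.area() for s in shapes)

If a small feature request instead forces edits scattered all over the codebase, that's the "huge impact" warning sign.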
Having a test base that is equal in size to your code base doesn't necessarily mean that you are testing every scenario. It is simply a metric showing that someone has spent a good amount of time adding tests as they add functionality. This is from the perspective of a non-coder. So the mere presence of tests doesn't really mean anything! The tests may not even work. But tests being present vs. not is at least a sign in the right direction! And 10 tests for 10,000 lines of code is a good sign that something is not right! :P
Andrew Siemer
Can a non-programmer evaluate whether code follows SOLID principles? Isn't that like asking, "How to tell, without learning French, whether a given French prose is well-written?"
ChrisW
+1 for solid (no pun intended) high level, but also useful, language independent advice.
Nathan Koop
+3  A: 

First, set ground rules (that all programmers sign up to) that say what's 'good' and what isn't. Automate tests for whatever you can measure (e.g. functions shorter than a given number of lines, McCabe complexity, idioms that your coders find confusing); a sketch of such a check follows below. Then accept that 'good coding' is something you know when you see it rather than something you can actually pin down with a set of rules, and allow people to deviate from the standard provided they get agreement from someone with more experience. Similarly, such standards have to be living documents, adapted in the face of feedback.
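As a minimal sketch of automating one such ground rule (Python assumed; the 40-line limit is a hypothetical team agreement), a script like this can flag over-long functions mechanically rather than arguing about them in every review:

    import ast
    import sys

    MAX_LINES = 40  # hypothetical team-agreed limit

    def long_functions(source: str):
        """Yield (name, length) for each function exceeding MAX_LINES."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_LINES:
                    yield node.name, length

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as f:
                for name, length in long_functions(f.read()):
                    print(f"{path}: {name} is {length} lines (limit {MAX_LINES})")

The point isn't this particular metric; it's that any rule the team agrees on should be checkable without debate.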

Code reviews also work well, since not all such 'good style' rules can be automatically determined. Experienced programmers can say what they don't like about inexperienced programmers' code - and you have to get the original authors to change it so that they learn from their mistakes - and inexperienced programmers can say what they find hard to understand about other people's code - and, by being forced to read other people's code, they'll also learn new tricks. Again, this will give you feedback on your standard.

On some of your specific points, complexity and function size work well, as does code coverage during repeatable (unit) testing, but that last point comes with a caveat: unless you're working on something where high quality standards are a necessity (embedded code, for example, or safety-critical code), 100% code coverage means you're testing the 10% of code paths that are worthwhile to test and the 90% that almost never get coded wrong in the first place. Worthwhile tests are the ones that find bugs and improve maintainability.

ijw
Good point... so there's something to think about as far as diminishing returns in testing: test what's 'testworthy' and don't get too caught up in testing EVERYTHING. Thanks for the insight.
Krevin
+1  A: 

It's a difficult one, and this could be where your non-functional requirements will help you:

  • specify your performance requirements: transactions per second, response time, expected DB records over time (see the sketch after this list)
  • require the delivery to include the output of a performance analysis tool
  • specify the machine the application will run on; you should not have to upgrade your hardware to run the app
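As a rough sketch of turning such a requirement into an automated acceptance check (Python assumed; the endpoint URL and the 200 ms budget are hypothetical examples, not from the original answer):

    import time
    import urllib.request

    URL = "http://localhost:8080/health"  # hypothetical endpoint
    BUDGET_SECONDS = 0.2                  # hypothetical response-time budget
    SAMPLES = 50

    def p95_latency(url: str, samples: int) -> float:
        """Measure the 95th-percentile response time over several requests."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            urllib.request.urlopen(url).read()
            timings.append(time.perf_counter() - start)
        timings.sort()
        return timings[int(0.95 * (len(timings) - 1))]

    if __name__ == "__main__":
        latency = p95_latency(URL, SAMPLES)
        assert latency <= BUDGET_SECONDS, (
            f"p95 latency {latency:.3f}s exceeds {BUDGET_SECONDS}s budget")
        print(f"OK: p95 latency {latency:.3f}s is within budget")

A check like this makes "runs quickly" a pass/fail deliverable rather than a matter of opinion.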

As for eyeballing the code and working out whether or not it's well written, that's tougher; the answers from @Andrew & @Chris pretty much cover it... you want code that looks good, is easy to maintain, and performs well.

MadMurf
+2  A: 

I think it's great you're trying to evaluate something that typically isn't evaluated. There have been some good answers above already. You've already shown yourself to be more mature in dealing with software by accepting that since you don't practice development personally, you can't assume that writing software is easy.

Do you know a developer whose work you trust? Perhaps have that person be a part of the evaluation process.

Bernard Dy
+4  A: 

Code has 2 primary audiences:

  • The people who use it
  • The people who develop it

So you need 2 simple tests:

  • Run the code. Can you get it to do the job it is supposed to do?
  • Read the code. Can you understand the general intentions of the developer?

If you can answer yes to both of these, it is great code.

When reading the code, don't worry that you are not a programmer. If code is well written and documented, even a non-programmer should be able to guess much of what it is intended to achieve.
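As a minimal sketch of what "readable even to a non-programmer" can look like (Python, with a hypothetical invoice-discount rule):

    def apply_loyalty_discount(invoice_total: float, years_as_customer: int) -> float:
        """Customers of five or more years receive a 10% discount."""
        LOYALTY_THRESHOLD_YEARS = 5
        LOYALTY_DISCOUNT = 0.10

        if years_as_customer >= LOYALTY_THRESHOLD_YEARS:
            return invoice_total * (1 - LOYALTY_DISCOUNT)
        return invoice_total

You don't need to know Python to read the names and the description and work out the intent; that's the test.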

BTW: Great question! I wish more non-programmers cared about code quality.

Kramii
Good advice overall, but I'd be careful about assuming that just because code is readable, it's great. There are many things beyond legibility that affect its maintainability, although I'd agree that code you can understand is a heck of a lot better than code you can't.
Bernard Dy
@Bernard: What do you have in mind?
Kramii
Things like application partitioning, caliber of exception handling, and general design. Those are things you can't always tell just from selected pieces of legible script. Note that I'm not disagreeing with you, just adding some context.
Bernard Dy
@Bernard: Of course, you're absolutely right. In my experience, these are also some of the hardest things for a programmer to do well. They are also very difficult for a non-coder to evaluate, but no less important for being so.
Kramii
A: 

Summary

Use the Joel Test.

Why?

Thanks for a tough question. I was about to write a long answer on the merits of direct and indirect code evaluation, understanding your organisational context and perspective, figuring out a process and setting criteria for code to be good enough, and then the difference between the code being perfect and merely good enough, which still might mean “very impressive”. I was about to refer to Steve McConnell’s Code Complete and even suggest delegating the code audit to someone impartial you can trust, who is savvy enough, business- and programming-wise, to grasp the context and perspective, apply the criteria sensibly, and report the results neatly back to you. I was going to recommend looking at the parts of the UI that are normally out of end-user reach, the same way one judges the quality of cleaning by checking for dirt in hard-to-reach places.

Well, and then it struck me: what is the end goal? In most scenarios, apart from a few edge cowboy-coding cases, the audit is likely to reveal that the code is better than junk, but certainly not damn good; maybe just slightly below the good-enough mark. And then what comes next? There are probably going to be a few choices:

  1. Changing the supplier.
  2. Insisting on the code being re-factored.
  3. Leaving things as they are and from that point on demanding better code.

Unfortunately, none of these options is ideal. Having already made an investment, changing supplier is costly and quite risky: part of the software’s conceptual integrity will be lost, and your company will have to, albeit indirectly, swallow the inevitable cost of the new supplier taking over the development and going through the learning curve (exactly the opposite of what most suppliers will tell you when trying to get a foot in the door). And there is going to be a big risk of missing the original deadlines.

The option of insisting on the code being refactored isn’t perfect either. There is going to be a question of cost, and it’s very likely that, for various contractual and historical reasons, you won’t find yourself in a good negotiating position. In any case, rewriting software is likely to affect deadlines, and an organisation that couldn’t do the job right the first time is very unlikely to produce much better code on the second attempt. The same applies to the third option: I would be dubious of any company producing better code without some, often significant, organisational change. And leaving things as they are isn’t good either: a piece of rotten code, unless totally isolated, will eventually poison the rest of the source.

This brings me to the actual conclusion, or in fact two:

  1. Concentrate on picking the right software company in the first place, since going forward your options are going to be somewhat constrained.
  2. Make use of your IT and management knowledge to pick a company that is focused on attracting and retaining good developers, and that creates a working environment and culture fit for producing good-quality code, instead of relying on post factum analysis.

There’s no need to expand on the importance of choosing the right company in the first place, as opposed to a summative evaluation of the delivered project; hopefully the point has already been made.

Well, how do we know the software company is right? Here I fully subscribe to the philosophy evangelised by Joel Spolsky: the quality of software directly depends on the quality of the people involved which, as several studies have indicated, can vary by an order of magnitude. And through the workings of free markets, developers end up clustered in companies based on how much a particular company cares about attracting and retaining them.

As a general rule of life, the best programmers end up working with the best, good with good, average with average, and cowboy coders with other cowboy coders. However, there is a caveat. Most companies have at least one or two very good developers they care about and try their hardest to retain. These devs are always put on the front line: to fight fires, to lure customers, to prove the organisation’s potential and competence. Working amongst no-better-than-average colleagues, overstretched between multiple projects, and being treated as royalty, these star programmers sadly very often lose touch with reality and become prima donnas who won’t “dirty” their hands with any actual programming work.

Unfortunately, programming talent doesn’t scale, and it’s unlikely that the prima donna is going to work on your project past the initial phase designed to lure you in and lock you in as a customer. In the end the code is going to be produced by a less talented colleague, and as a result you’ll get what you get.

The solution is to look for a company where the developer talent is more consistent and everyone is at least good enough to produce the right quality of code. When it comes to choosing such an organisation, that’s where the Joel Test comes in mighty handy. I believe it’s especially suitable for use by someone who has no programming experience but a good understanding of IT and management.

The more points a company scores on the Joel Test, the more likely it is to attract and retain good developers and, most importantly, to provide them with the conditions to produce quality code. And since most great devs are actually in love with programming, all they need is to be teamed up, given a good and supportive work environment and a credible goal (or, even better, an incredible one), and they’ll start turning out high-quality code. It’s that simple.

The only catch is that a company scoring the full twelve points on the Joel Test is likely to charge more than a sweatshop that scores a mere 3 or 5 (a self-estimated industry average). However, the benefits of efficient operations and bespoke, trouble-free software that supports your strategic organisational goals should produce a return on investment that far outweighs the project costs. At the end of the day, the company’s work will likely be worth the money, every penny of it.

I also hope that someone will find this longish answer worthwhile.

Totophil