I like my code to be in order, i.e. properly formatted, readable, well designed, tested, checked for bugs, etc. In fact I am fanatical about it. (Maybe even more than fanatical...) But in my experience, practices that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not in the scope of this question.)

Code quality does not seem to be popular. Some examples from my experience:

  • Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all the companies I know, very few proper unit tests existed (if any). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could have done so. My conclusion is that developers do not want to write tests.

  • Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or to find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored.

  • Conference speakers and magazines talk a lot about EJB3.1, OSGi, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes that help maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences, and it probably looks different for conferences on agile topics, as unit testing, CI and such have a higher value there.)

So why is code quality so unpopular/considered boring?

EDIT:
Thank you for your answers. Most of them concern unit testing (which has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.

+8  A: 
  • Laziness / Considered boring
  • Management feeling it's unnecessary - Ignorant "Just do it right" attitude.
  • "This small project doesn't need code quality management" turns into "Now it would be too costly to implement code quality management on this large project"

I disagree that it's dull though. A solid unit testing design makes creating tests a breeze and running them even more fun.

Calculating vector flow control - PASSED
Assigning flux capacitor variance level - PASSED
Rerouting superconductors for faster dialing sequence - PASSED
Running Firefly hull checks - PASSED
Unit tests complete. 4/4 PASSED.

Like anything, it can get boring if you do too much of it, but spending 10 or 20 minutes writing some random tests for some complex functions after several hours of coding isn't going to suck the creative life out of you.
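
To make that concrete, here is a minimal sketch of such a quick test, assuming JUnit 4. FluxCapacitor and its methods are hypothetical stand-ins for whatever you just spent the afternoon writing:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class FluxCapacitorTest {

    // Assigning flux capacitor variance level - PASSED
    @Test
    public void assignsVarianceLevel() {
        FluxCapacitor capacitor = new FluxCapacitor();
        capacitor.assignVarianceLevel(42);
        assertEquals(42, capacitor.getVarianceLevel());
    }

    // The capacitor must refuse bad input rather than store it.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeVarianceLevel() {
        new FluxCapacitor().assignVarianceLevel(-1);
    }
}

// A hypothetical class standing in for your own production code.
class FluxCapacitor {
    private int varianceLevel;

    void assignVarianceLevel(int level) {
        if (level < 0) {
            throw new IllegalArgumentException("variance level must be non-negative");
        }
        this.varianceLevel = level;
    }

    int getVarianceLevel() {
        return varianceLevel;
    }
}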

Spencer Ruport
And what about the deep satisfaction of getting THE GREEN BAR at the end of the automatic tests? Quite like winning the last level of the game...
zim2001
THE GREEN BAR is a life saver when you decide to change some omnipresent code.
Arnis L.
As a part-time cynic, I'll just point out that the GREEN BAR rush is easier to get if you don't write enough tests.
David Thornley
Esp. your third point, prototypes going into production later, is so true...
Peter Kofler
+2  A: 

Unit Testing takes extra work. If a programmer sees that his product "works" (i.e. without unit testing), why do any at all? Especially when it is not nearly as interesting as implementing the next feature in the program, etc. Most people just tend to be lazy when it comes down to it, which isn't exactly a good thing...

DeadHead
+2  A: 

Code quality is context specific and hard to generalize no matter how much effort people try to make it so.

It's similar to the difference between theory and application.

MSN
+6  A: 

I guess the answer is the same as to the question 'Why is code quality not popular?'

I believe the top reasons are:

  • Laziness of the developers. Why invest time in preparing unit tests or reviewing the solution if it's already implemented?
  • Improper management. Why ask the developers to take care of code quality when there are thousands of new feature requests, and the programmers could simply implement something new instead of caring for the quality of something already implemented?
Grzegorz Oledzki
+1  A: 

One attitude that I have met rather often (but never from programmers that were already quality-addicts) is that writing unit tests just forces you to write more code without getting any extra functionality for the effort. And they think that that time would be better spent adding functionality to the product instead of just creating "meta code".

That attitude usually wears off as unit tests catch more and more bugs that you realize would be serious and hard to locate in a production environment.

Fredrik Mörk
+4  A: 

It reminds me of this Monty Python skit:

"Exciting? No it's not. It's dull. Dull. Dull. My God it's dull, it's so desperately dull and tedious and stuffy and boring and des-per-ate-ly DULL. "

Otávio Décio
lol I love Monty Python, I grew up watching it with my dad
lucifer
What exactly is dull? Fixing warnings shown by the IDE? Writing code that tests your implementation? Discussing your code with your peer? I find it dull to open a project and see 14k warnings, yellow icons everywhere.
Peter Kofler
@Peter: Not that I don't agree with you on seeing lots of warnings, but you can have code that has 14K warnings and still be "bug free" for lack of a better term, and you can have code that has no warnings but is still garbage. The number of warnings in a project is not a good metric either.
Moose
+11  A: 

Code review is not an exact science, and the metrics used are somewhat debatable. Somewhere on that page: "You can't control what you can't measure".

Suppose that you have one huge function of 5000 lines with 35 parameters. You can unit test it as much as you want; it might do exactly what it is supposed to do, whatever the inputs are. So based on unit testing, this function is "perfect". But besides correctness, there are tons of other quality attributes you might want to measure: performance, scalability, maintainability, usability and such. Have you ever wondered why software maintenance is such a nightmare?

Quality control in real software projects goes far beyond simply checking whether the code is correct. If you look at the V-Model of software development, you'll notice that coding is only a small part of the whole equation.

Software quality control can account for as much as 60% of the whole cost of your project. This is huge. Instead, people prefer to cut it to 0% and go home thinking they made the right choice. I think the real reason so little time is dedicated to software quality is that software quality isn't well understood.

  • What is there to measure?
  • How do we measure it?
  • Who will measure it?
  • What will I gain/lose from measuring it?

Lots of coder sweatshops do not realise the relation between "less bugs now" and "more profit later". Instead, all they see is "time wasted now" and "less profit now", even when shown pretty graphs demonstrating the opposite.

Moreover, software quality control, and software engineering as a whole, is a relatively new discipline. A lot of the programming space so far has been occupied by cyber cowboys. How many times have you heard that "anyone" can program? Anyone can write code, that's for sure, but not everyone can be a programmer.

EDIT *

I've come across this paper (PDF) by the guy who said "You can't control what you can't measure". Basically he's saying that controlling everything is not as desirable as he first thought it would be. It is not an exact cooking recipe that you can blindly apply to all projects, as the software engineering schools would have you think. He just adds another parameter to the mix, which is: "Do I want to control this project? Will it be needed?"

Eric
LOL! Having this one huge function of 5000 loc and 35 params is HARD TO TEST... Really???
Arjan Einbu
5K loc, that is one helluva unit! Imagine the mocking needed, not to mention the mockup afterwards. haha.
Peter Lillevold
+1 for not connecting less bugs now with more profit later. Especially re: more cost now => more profit later. This is endemic to organizations that write software without having a software culture. At my org, we get beat up every quarter for high COPQ (cost of poor quality), yet mgm't undermines any and every quality improvement exercise at every turn to hit ridiculous (excuse me, _optimistic_) delivery dates. The current example is a dev, arguably one of the best in the org, estimating a full-fledged designer rewrite to take 13 months. He was given 24 weeks with no cut in functionality.
Greg D
+2  A: 

I'd say for many reasons.

First of all, if the application/project is small or carries no really important data at a large scale, the time needed to write the tests is better spent writing the actual application.

There is a threshold where the quality requirements are of such a level that unit testing is required.

There is also the problem of many methods not being easily testable. They may rely on data in a database or similar, which creates the headache of setting up mock data to be fed to the methods. And even if you set up mock data - can you be certain the database would behave the same way?
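
One common way around the database dependency is to hide it behind an interface and hand the code a hand-rolled fake in tests. The sketch below assumes JUnit 4, and all the names (UserRepository, Greeter, InMemoryUserRepository) are invented for the example; note that this only exercises your own logic - as said above, it cannot prove the real database behaves the same way.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {

    // The interface the production code depends on; the real implementation
    // would run SQL, the fake below just returns canned data.
    interface UserRepository {
        String findNameById(int id);
    }

    // Production logic under test: it has no idea where names come from.
    static class Greeter {
        private final UserRepository repository;

        Greeter(UserRepository repository) {
            this.repository = repository;
        }

        String greet(int userId) {
            String name = repository.findNameById(userId);
            return name == null ? "Hello, stranger!" : "Hello, " + name + "!";
        }
    }

    // A hand-rolled fake: no database, no mocking framework needed.
    static class InMemoryUserRepository implements UserRepository {
        public String findNameById(int id) {
            return id == 1 ? "Ada" : null;
        }
    }

    @Test
    public void greetsKnownAndUnknownUsersWithoutADatabase() {
        Greeter greeter = new Greeter(new InMemoryUserRepository());
        assertEquals("Hello, Ada!", greeter.greet(1));
        assertEquals("Hello, stranger!", greeter.greet(2));
    }
}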

Unit testing is also weak at finding problems that haven't been considered; that is, unit testing is bad at simulating the unexpected. If you haven't considered what could happen in a power outage, or when the network link sends bad data that is still CRC-correct, writing tests for it is futile.

I am all in favour of code inspections, as they let programmers share experience and coding style with other programmers.

pcguru
+3  A: 

"There are the common excuses for not writing tests, but they are only excuses."

Are they? Get eight programmers in a room together, ask them a question about how best to maintain code quality, and you're going to get nine different answers, depending on their age, education and preferences. 1970s-era computer scientists would've laughed at the notion of unit testing; I'm not sure they would've been wrong to.

Michiel Buddingh'
The funny thing is that many programmers do unit testing with console outputs.
01
I still believe we try to excuse ourselves most of the time. See http://monkeyisland.pl/2009/05/12/excuses-for-not-doing-dev-testing/ and http://www.sundog.net/sunblog/posts/top-five-excuses-for-not-unit-testing/
Peter Kofler
Tests are ridiculously ineffective and clumsy compared to program derivation using formal methods, which was quite popular in the 1970s. Or one might instead choose to generate tests: http://www.cs.chalmers.se/~rjmh/QuickCheck/ from specifications; again, a far more effective strategy. The field of Software Engineering has the annoying tendency to gravitate towards an oppressive consensus about best practices, often turning mediocre half-solutions (like unit testing) into sacred cows.
Michiel Buddingh'
+3  A: 

It's basic psychology of pain. When you're running to meet a deadline, code quality takes a back seat. We hate it because it's dull and boring.

Pradeep
+1  A: 

A lot of it arises when programmers forget, or are naive, and act like their code won't be viewed by somebody else at a later date (or by themselves months or years down the line).

Also, commenting isn't nearly as "cool" as actually writing a slick piece of code.

McAden
+5  A: 

One big factor that I didn't see mentioned yet is that any process improvement (unit testing, continuous integration, code reviews, whatever) needs an advocate within the organization who is committed to the technology, has the appropriate clout, and is willing to do the work to convince others of its value.

For example, I've seen exactly one engineering organization where code review was taken truly seriously. That company had a VP of Software who was a true believer, and he'd sit in on code reviews to make sure they were getting done properly. They incidentally had the best productivity and quality of any team I've worked with.

Another example is when I implemented a unit-testing solution at another company. At first, nobody used it, despite management insistence. But several of us made a real effort to talk up unit testing, and to provide as much help as possible for anyone who wanted to start unit testing. Eventually, a couple of the most well-respected developers signed on, once they started to see the advantages of unit testing. After that, our testing coverage improved dramatically.

I just thought of another factor - some tools take a significant amount of time to get started with, and that startup time can be hard to come by. Static analysis tools can be terrible this way - you run the tool, and it reports 2,000 "problems", most of which are innocuous. Once you get the tool configured properly, the false-positive problem gets substantially reduced, but someone has to take that time, and be committed to maintaining the tool configuration over time.

Mark Bessey
I agree. I once converted a team into believers using a build, tests, code analysis and such. Now in a new team I am having a hard time. I can't see why it's considered so boring.
Peter Kofler
+26  A: 

One obvious answer for the Stack Overflow part is that it isn't a forum. It is a database of questions and answers, which means that duplicate questions are avoided where possible.

How many different questions about code quality can you think of? That is why there aren't 50,000 questions about "code quality".

Apart from that, anyone claiming that conference speakers don't want to talk about unit testing or code quality clearly needs to go to more conferences.

I've also seen more than enough articles about continuous integration.

There are the common excuses for not writing tests, but they are only excuses. If one wants to write some tests for his/her new code, then it is possible

Oh really? Even if your boss says "I won't pay you for wasting time on unit tests"? Even if you're working on some embedded platform with no unit testing frameworks? Even if you're working under a tight deadline, trying to hit some short-term goal, even at the cost of long-term code quality?

No. It is not "always possible" to write unit tests. There are many many common obstacles to it. That's not to say we shouldn't try to write more and better tests. Just that sometimes, we don't get the opportunity.

Personally, I get tired of "code quality" discussions because they tend to

  • be too concerned with hypothetical examples, and are far too often the brainchild of some individual who really hasn't considered how applicable it is to other people's projects, or to codebases of different sizes than the one he's working on,
  • get too emotional, and imbue our code with too many human traits (think of the term "code smell", for a good example),
  • be dominated by people who write horribly bloated, overcomplicated and verbose code with far too many layers of abstraction, or who'll judge whether code is reusable by "it looks like I can just take this chunk of code and use it in a future project", rather than the much more meaningful "I have actually been able to take this chunk of code and reuse it in different projects".

I'm certainly interested in writing high quality code. I just tend to be turned off by the people who usually talk about code quality.

jalf
Good points. Writing tests might be a problem with a tight deadline; still, you could use a build and static code analysis. It would only be a one-time setup cost. You could use the information gathered there, or not. Probably you are right. I am not so bright a coder, as I always find mistakes when writing even simple tests for my code, but that is exactly why I have to continue writing them.
Peter Kofler
I'm not saying other people wouldn't benefit from writing unit tests, just that ultimately, it takes time that could be spent trying to hit that short-term deadline instead. And sometimes, that's not really an option.
jalf
You generally make good points about discussions of code quality. However, I take issue with your statement that 'code smell' is a good example of attributing human traits to code. The notion of 'code smell' comes from the idea that something 'smells fishy.' In other words, "I can't quite put my finger on it, but something doesn't seem right." I fail to see how this is an anthropomorphization. That said, the term itself bothers me. However, lacking a better term, I still find myself using it occasionally.
George Marian
@George: Yep, I occasionally use it too, it's just too vague and fluffy a term for me to like it. Code doesn't smell. If we can't put our finger on what the problem is then that is our **first** problem to solve. We shouldn't base our coding practices on gut feelings.
jalf
Yeah, it bothers me because code doesn't have a smell. I also agree that coding practices shouldn't be based on gut feelings. However, just because I can't put it to words, doesn't mean that it isn't there. Yes, that issue is something to be addressed as well. Ultimately, the point is that the uneasy feeling you get -- that something stinks -- is supposed to be motivation for finding the problem. Just like coming home to a nasty smell should have you searching for the source.
George Marian
+1  A: 

Another thing that several people have touched on is that most development engineers are terrible testers. They don't have the expertise or mind-set to effectively test their own code. This means that unit testing doesn't seem very valuable to them - since all of their code always passes unit tests, why bother writing them?

Education and mentoring can help with that, as can test-driven development. If you write the tests first, you're at least thinking primarily about testing, rather than rushing to get the tests done so you can commit the code...
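
Test-first in miniature (a sketch assuming JUnit 4; PriceCalculator and its discount rule are invented for the example): the test is written before the code and stays red until the implementation below it is filled in.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Written first: it fails (red) until PriceCalculator implements the rule.
    @Test
    public void appliesTenPercentDiscountFromOneHundredUpwards() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.total(200.0), 0.001);
        assertEquals(99.0, calculator.total(99.0), 0.001);
    }
}

// Written second, and only enough of it to turn the bar green.
class PriceCalculator {
    double total(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}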

Mark Bessey
I completely disagree. What we lack is the go-ahead and the corresponding time, not the expertise. I personally find bugs our testers missed regularly, even in basic scenarios. I have a much better mindset than the testers; it's just that I have no time.
User
As for writing tests first - well, do you really know 100% in advance what you are doing? With agile development, everything is prone to change, to quick change. If you plan and write tests first, you will accomplish very little, and the work is doubly lost when you decide against that functionality.
User
Well, the Agile Development gurus will tell you that writing test code (and running code reviews) actually decreases the total time to deliver working code. There have been a number of academic studies that bear this out. As for your observation that you "regularly find bugs our testers missed, even in basic scenarios" - I certainly *hope* that you do. Your test team shouldn't be responsible for finding boneheaded mistakes in your code. Delivering code that works in the "normal" cases is *your job*. The test team should be concentrating on higher-value tasks.
Mark Bessey
The mantra of the eXtreme Programming development methodology is to write "the simplest thing that could possibly work", and iterate over time. Some philosophy like that is an absolute necessity for test-driven development not to become a huge time-sink. Writing a bunch of tests for functionality that it's not clear you actually need is just doing up-front design in the form of writing tests. Again, the point of writing tests is to quickly home in on a proper implementation. Anything that keeps you from writing code you'll just have to re-write later has to be a big win in productivity.
Mark Bessey
+2  A: 

I also have not seen unit tests written on a regular basis. The reason given was that the code changed too extensively at the beginning of the project, so everyone dropped writing unit tests until everything stabilized. After that, everyone was happy and felt no need for unit tests. So we have a few tests staying around as history, but they are not used and are probably not compatible with the current code.

I personally see writing unit tests for big projects as not feasible, though I admit I have not tried it, nor talked to people who have. There are so many rules in the business logic that if you just change something a little bit somewhere, you have no way of knowing which tests to update beyond those that crash. Who knows, the old tests may no longer cover all possibilities, and it takes time to recollect what was written five years ago.

The other reason is lack of time. When you have a task assigned that says "Completion time: 0.5 man-days", you only have time to implement it and test it shallowly, not to think of all possible cases and relations to other parts of the project and write all the necessary tests. It may really take 0.5 days to implement something and a couple of weeks to write the tests. Unless you were specifically given an order to create the tests, nobody will understand that tremendous loss of time, which will result in yelling/bad reviews. And no, for our complex enterprise application I cannot think of good test coverage for a task in five minutes. It will take time, and probably a very deep knowledge of most application modules.

So, the reasons as I see them are the loss of time, which yields no useful features, and the nightmare of maintaining/updating old tests to reflect new business rules. Even if one wanted to, only experienced colleagues could write those tests - at least one year of deep involvement in the project, though two or three is really needed. So new colleagues cannot manage proper tests. And there is no point in creating bad tests.

User
I don't want to do business with a company that has hard and fast rules whose consequences no one knows, because everyone is too busy to work them out. It seems bizarre that you have allowed yourself into a situation where you have people writing rules who have no idea how they affect the system.
Pete Kirkham
+8  A: 

Why is code quality so unpopular?

Because our profession is unprofessional.

However, there are people who do care about code quality. You can find like-minded people, for example, in the Software Craftsmanship movement's discussion group. But unfortunately the majority of people in the software business do not understand the value of code quality, or do not even know what makes up good code.

Esko Luontola
+4  A: 

Code Quality is unpopular? Let me dispute that claim.

Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects. There is even a whole conference dedicated to Continuous Integration and Testing (CITCON, which takes place three times a year on three continents). In fact, my personal feeling is that those talks are so common that they are on the verge of being totally boring to me.

And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid).

That said, though I think that code quality is a popular subject to discuss, I would rather agree that developers do not (in general) do good, or enough, testing. I have a reasonably simple explanation for that fact.

Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, 2) developers whose ways "have worked well enough so far" (whatever that means). In the words of Geoffrey Moore, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them.

The good news, however, is that I now meet developers fresh from university that have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.

Eric
I hope that you are right about these developers fresh from university but I am afraid when they are thrown into legacy code where they can't apply TDD, their principles will just crumble.
Peter Kofler
+2  A: 

It's 'dull' to chase some random 'feature' of extreme importance for more than a day through a mysterious code jungle written by someone else x years ago, without any clue what's going wrong, why it's going wrong, and with absolutely no idea what could fix it - when the whole thing was supposed to be done in a few hours. And when it's done, no one is satisfied, because of the huge delay.

Been there - seen that.

Arnis L.
+6  A: 

Short answer: It's one of those intangibles appreciated only by other, mainly experienced, developers and engineers - unless something goes wrong. At which point managers and customers are in an uproar and demand to know why formal processes weren't in place.

Longer answer: This short-sighted approach isn't limited to software development. The American automotive industry (or what's left of it) is probably the best example of this.

It's also harder to justify formal engineering processes when projects start their life as one-offs or throw-aways. Of course, long after the project is done, it takes on a life of its own (and becomes prominent) as different business units start depending on it for their own business processes.

At which point a new solution needs to be engineered; but without practice in using these tools and good practices, the tools are less than useless - they become a time-consuming hindrance. I see this situation all too often in companies where IT teams are support for the business, where development is reactionary rather than proactive.

Edit: Of course, these bad habits and many others are the real reason consulting firms like ThoughtWorks can continue to thrive as well as they do.

hythlodayr
+3  A: 

It's pretty simple when you consider the engineering adage "Good, Fast, Cheap: pick two". In my experience, 98% of the time it's Fast and Cheap, and by necessity Good is the one that suffers.

George Jempty
+1  A: 

I don't know. Have you seen Sonar? Sure, it is Maven-specific, but point it at your build and boom, lots of metrics. That's the kind of project that will help code quality metrics go mainstream.

Nathan Feger
Yeah, it's cool. Something like that has long been missing from the free code quality stack.
Peter Kofler
+1  A: 

The likelihood of you being replaced by a cheaper fresh-out-of-college student or an outsourced worker is directly proportional to the readability of your code.

joebert
Exactly: http://www.spinellis.gr/blog/20090902/
Peter Kofler
Such employers should be taken out of the market - forced into bankruptcy and not allowed to run a business for at least 10 years after that.
Andrei Rinea
+2  A: 

A lot of the concepts that are emphasized in modern writing on code quality overlook the primary metric for code quality: code has to be functional first and foremost. Everything else is just a means to that end.

Some people don't feel like they have time to learn the latest fad in software engineering, and believe that they can already write high-quality code. I'm not in a place to judge them, but in my opinion it's very difficult for your code to be used over long periods of time if people can't read, understand and change it.

James Thompson
+3  A: 

Management needs to be sold on the value of spending more time now to save time down the road. Since they can't actually measure "bugs not fixed", they're often more concerned with meeting their immediate deadlines and ship date than with the long-term quality of the project.

Ophidian
+5  A: 

Probably every Java developer knows JUnit...

While I believe most or many developers have heard of JUnit/nUnit/other testing frameworks, fewer know how to write a test using such a framework. And of those, very few have a good understanding of how to make testing a part of the solution.

I've known about unit testing and unit test frameworks for at least 7 years. I tried using them in a small project 5-6 years ago, but it is only in the last few years that I've learned how to do it right (i.e. found a way that works for me and my team...).

For me some of those things were:

  • Finding a workflow that accommodates unit testing.
  • Integrating unit testing in my IDE, and having shortcuts to run/debug tests.
  • Learning what to test, and how. (Like how to test logging in or accessing files. How to abstract yourself from the database. How to do mocking and use a mocking framework. Learning techniques and patterns that increase testability.)
  • Having some tests is better than having no tests at all.
  • More tests can be written later when a bug is discovered. Write the test that proves the bug, then fix the bug (see the sketch below).
  • You'll have to practice to get good at it.

So until you find the right way: yeah, it's dull, unrewarding, hard to do, time-consuming, etc.
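
The test-proves-the-bug item above, as a minimal sketch (JUnit 4 assumed; OrderParser and the whitespace bug are hypothetical examples):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderParserRegressionTest {

    // Imagined bug report: quantities padded with whitespace crashed the
    // parser. This test fails until the fix is in, then guards it forever.
    @Test
    public void parsesQuantityDespiteSurroundingWhitespace() {
        assertEquals(3, new OrderParser().parseQuantity(" 3 "));
    }
}

class OrderParser {
    int parseQuantity(String raw) {
        // trim() is the fix; Integer.parseInt(" 3 ") would throw without it.
        return Integer.parseInt(raw.trim());
    }
}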

EDIT: In this blog post I go into more depth on some of the reasons given here against unit testing.

Arjan Einbu
I like your blog post ;-)
Peter Kofler
+2  A: 

Lack of 'code quality' doesn't cost the user, the salesman, the architect nor the developer of the code; it slows down the next iteration, but I can think of several successful products which seem to be made out of hair and mud.

I find unit testing makes me more productive, but I've seen lots of badly formatted, unreadable, poorly designed code which passed all its tests (generally long-in-the-tooth code which had been patched many times). By passing tests you get a road-worthy Skoda, not the craftsmanship of a Bristol. But if you have 'low code quality' and pass your tests and consistently fulfill the user's requirements, then that's a valid business model.

My conclusion is that developers do not want to write tests.

I'm not sure. Partly, the whole education process in software isn't test-driven, and it probably should be - instead of asking for an exercise to be handed in, give the unit tests to the students. It's normal in maths questions to run a check; why not in software engineering?

The other thing is that unit testing requires units. Some developers find modularisation and encapsulation difficult to do well. A good technical lead will create a modular architecture which localizes the scope of a unit, making it easy to test in isolation; many systems don't have good architects who facilitate testability, or aren't refactored regularly enough to reduce inter-unit coupling.

It's also hard to test distributed or GUI-driven applications, due to inherent coupling. I've only been in one team that did that well, and it had as large a test department as a development department.

Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects.

Every set of coding conventions I've seen which hasn't been automated has been logically inconsistent, sometimes to the point of being unusable - even ones claimed to have been used 'successfully' in several projects. Non-automatic coding standards seem to be political rather than technical documents.

Usually even compiler warnings like potential null pointer access are ignored.

I've never worked in a shop where compiler warnings were tolerated.

Pete Kirkham
"Non-automatic coding standards seem to be political rather than technical documents." - I did never see it that way, but it's 100% true. They are not worth the paper when not checked at all. But why is it like that? Why are they not obeyed if not enforced? Usually they make sense to everybody.
Peter Kofler
"I've never worked in a shop where compiler warnings were tolerated." - wow! I am truly impressed. I should do the same.
Andrei Rinea
+1  A: 

I think code quality is over-rated; the more I do it, the less it means to me. Code quality frameworks prefer over-complicated code. You never see errors like "this code is too abstract, no one will understand it", but PMD, for example, says that I have too many methods in my class. So I should cut the class into abstract classes (the best way, since PMD doesn't care what I do) or cut the classes based on functionality (the worst way, since each might still have too many methods - been there).

Static analysis is really cool; however, it's just warnings. For example, FindBugs has a problem with casting, and you should use instanceof to make the warning go away. I don't do that just to make FindBugs happy.
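
For illustration, here is a sketch of the kind of cast FindBugs complains about and the instanceof guard it wants instead (Shape, Circle and AreaPrinter are invented names):

class Shape {}

class Circle extends Shape {
    double radius() {
        return 1.0;
    }
}

class AreaPrinter {
    void print(Shape shape) {
        // An unconditional cast - Circle circle = (Circle) shape; - gets
        // flagged, because shape might be some other Shape at runtime.
        // The instanceof guard makes the cast provably safe:
        if (shape instanceof Circle) {
            Circle circle = (Circle) shape;
            System.out.println("radius = " + circle.radius());
        }
    }
}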

I think code is too complicated not when a method has 500 lines of code, but when a method uses 500 other methods and many abstractions just for fun. I think code quality masters should really work on detecting when code is too complicated, and care less about the little things (you can refactor those really quickly with the right tools).

I don't like the idea of code coverage, since it's really useless and makes unit testing boring. I always test code with complicated functionality, but only that code. I worked in a place with 100% code coverage and it was a real nightmare to change anything, because when you changed anything you had to worry about broken (poorly written) unit tests, and you never knew what to do with them; many times we just commented them out and added a TODO to fix them later.

I think unit testing has its place. For example, I did a lot of unit testing in my webpage parser, because I kept finding different bugs or unsupported tags. Testing database programs is really hard if you also want to test the database logic; DbUnit is really painful to work with.

martin
"I think too complicated code is not when method has 500 lines of code, but when method is using 500 other methods and many abstractions just for fun." Gotta agree with that!
HLGEM
There are also static analysis tools for complexity, e.g. dependency counts, Cyclomatic Complexity or my favorite http://www.crap4j.org/ (Cyclomatic Complexity + Coverage).
Peter Kofler
+3  A: 

Code quality is subjective. Subjective topics are always tedious.

Since the goal is simply to make something that works, code quality always comes in second. It adds time and cost. (I'm not saying that it should not be considered a good thing though.)

99% of the time, there are no third-party consequences for poor code quality (unless you're making space shuttle or train-switching software).

  • Does it work? = Concrete.
  • Is it pretty? = In the eye of the beholder.

Read Fred Brooks' The Mythical Man Month. There is no silver bullet.

Only 99%? I think it's 100%.
01
A: 

In the world of the web, I think it's because time-to-market is crucial. Bugs can be fixed on the fly. The emphasis is on getting something cool up fast. You fix it if the money comes flying at you. But first you want to see if your project "sticks." If not, you do something new. But if you have a hit like Twitter or Facebook, then you get serious about engineering.

But there are all kinds of programming tasks. If you're sending game cartridges or discs out, there's (for the most part) no fixing them once they're in the wild.

Nosredna
+1  A: 

People don't have a common sense of what "good" means for code. A lot of people will drop to the level of "I ran it" or even "I wrote it."

We need to have some kind of shared sense of what good code is, and of whether it matters. For the first part of that, I have written up some thoughts:

http://agileinaflash.blogspot.com/2010/02/seven-code-virtues.html

As for whether it matters, that's been covered plenty of times. It matters quite a lot if your code is to live very long. If it really won't ever sell or won't be deployed, then it clearly doesn't. If it's not worth doing, it's not worth doing well.

But if you don't practice writing virtuous code, then you can't do it when it matters. I think people have practiced doing poor work, and don't know anything else.

tottinge
I agree. The mindset is most critical. But people with the proper mindset are then perceived as religious/too emotional/fanatical by others.
Peter Kofler
+1  A: 

I think the real problem with code quality or testing is that you have to put a lot of work into it and YOU get nothing back. Fewer bugs == less work? No, there's always something to do. Fewer bugs == more money? No, you have to change jobs to get more money. Unit testing is heroic; you only do it to feel better about yourself.

I work at a place where management encourages unit testing; however, I am the only person that writes tests (I want to get better at it; that's the only reason I do it). I understand that for others, writing tests is just more work with nothing in return. Surfing the web sounds cooler than writing tests.

Someone might break your tests and say he doesn't know how to fix them, or just comment them out (if you use Maven).

Frameworks don't really cover real web-app integration testing (a unit test might pass, but the feature might still not work on the web page), so even if you write tests you still have to test manually.

You could use a framework like HtmlUnit, but it's really painful to use. Selenium breaks with every change to a webpage. SQL testing is almost impossible (you can do it with DbUnit, but first you have to provide test data for it; test data for 5 joins is a lot of work, and there is no easy way to generate it). I don't know about your web framework, but the one we are using really likes static methods, so you really have to work to test the code.

01
You get nothing back NOW. But later, when you dive into your legacy code again, then you pay less, or at least it's less dull. But of course, you are right, it's a lot of work and tool support could be better. So why are the tools not better?
Peter Kofler
Tools are not better because nobody lasts that long doing integration testing - most people give up. I think everybody likes the idea of testing, but the reality of it is different. Most tests are badly written and hard to understand.
01
+1  A: 

My short answer would be: unless you're the person writing the cheques (i.e. a freelancer, or you own the company, etc.), most of the time speed of delivery is of much greater importance to your employer. Given this fact, programmers are sometimes required to go the "quick and dirty" way when facing deadlines.

Additional factors, such as poor analysis and customers who come up with last-minute changes, make producing "quality" code even harder.

Anax