Are some programming languages, by their nature, better suited than others to writing programs with fewer bugs? For a fair comparison, assume that the programmer is considered "skilled" in that particular language, however long that may have taken him.

I'm mainly wondering about the influence of conceptual differences, like:

  • manual memory management vs. garbage collection
  • imperative vs. imperative/object-oriented vs. functional vs. declarative
  • languages with side effects vs. side-effect free languages
  • static typing vs. dynamic typing

Again, I don't want to compare whether one language is "easier" than another, but rather, once the programmer has mastered the language, how many bugs are produced in one language versus another, and whether the choice of language makes any difference at all.

Edit: okay, I shouldn't have used the word "bug-free". I thought this word was the easiest way to get the point across. I know such programs do not exist.

+6  A: 

My initial gut reaction is that languages that don't deal with state as much (functional languages) are less prone to side-effect bugs. But you can program errors into anything! :)

Arthur Thomas
+1  A: 

Well, I don't think there is any "bug-free" program out there, but I would suggest that the higher the level of abstraction a language provides, the fewer bugs it tends to lead to overall. Also, garbage collection / memory management is huge. I think that it comes down to the programmer rather than the language in most cases.

Ed Swangren
A: 

The higher-level the programming language is, the less chance you have to mess up small details like memory management, or to repeat the simple stuff too many times.

Milan Babuškov
A: 

Well, since the only bug-free programs are ones with ZERO lines of code (there is no such thing as a bug-free program), and it's harder to write a program in C than in, say, C#, it's therefore easier to not write at all in C -> hence, it's easier to write a blank, bug-free program in C.
(Yes, this is tongue in cheek.)

Now, assuming you actually meant the inverse - how difficult is it to NOT produce bugs - then as a general guideline, I would say loosely-typed scripting languages are easiest; then high-level languages, such as Java and C#; with complex languages with lots of manual plumbing, like C/C++, as the hardest.
I'm sure I'm generalizing, and I might have missed some... e.g. I'm not sure where languages like Lisp or Prolog fit in...

AviD
A: 

Well, after I got cleaned up from laughing so hard that beer shot out my nose at the concept of a "bug-free program", I can say that languages with a lot of pre-built and hardened libraries significantly cut down on the number of bugs in application development.

This is simply because the more code you write, the greater the chance of introducing a bug.

That's why the mantra of "write the simplest code necessary to get the task done" is such good advice.

Ron

Ron Savage
+1  A: 

I think it's easier to create some kinds of bugs in a language like C where you have to worry about memory management, but when you're talking about "bug free" you're talking about the programmer rather than the language.

Telos
A: 

Modern-day debuggers and compilers make things much easier for average or new developers working in a language than they were in the past. They probably have less of an impact on those who are experts or have mastered a particular language.

SaaS Developer
+16  A: 

Maybe.

Some kinds of programming languages simply don't have certain types of bugs.

The most successful advance recently has been the change from manually memory-managed languages to garbage-collected languages. Memory management errors, once common in C++, are almost impossible in Java and C#, as well as in Python, Ruby, and Perl. Of course, you give up a little performance for this advantage; you give up predictability, because the garbage collector doesn't always run when you expect it to; and finally you give up control, because you can't choose to manage your own pool of memory as easily.
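
As a rough illustration (a minimal Java sketch of my own, with an illustrative class name, not taken from the answer): once the last reference to an object is dropped, the runtime reclaims the memory itself, so the stale-pointer and double-free bugs of manual memory management have no way to be written at all.

import java.util.ArrayList;
import java.util.List;

public class GcSketch {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("alice");
        names.add("bob");

        names = null;  // the list is now unreachable and eligible for collection
        System.gc();   // only a hint -- the collector is not guaranteed to run here

        // There is no way to reach the old list's storage any more, so the
        // use-after-free and double-free errors common in manually managed
        // languages cannot occur, at the cost of not controlling *when* the
        // memory is actually reclaimed.
    }
}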

Some people will say that widespread adoption of block-structured programming (C, Algol, Pascal) was a similar advance over languages where spaghetti coding was common (COBOL, assembly, BASIC).

There are programmers who propose functional programming as an advance that will enable still fewer bugs. Others look to the functional semantics included in Python, Perl, and Ruby as the best way to get there.

Some folks argue that the most important thing is to have less code overall and therefore more terse or more expressive languages are less bug-prone. Lately those people observe that Ruby and Python seem to be producing the least verbose code.

A recent innovation called Software Transactional Memory is being discussed as the tonic that will make bug-free multi-threaded programming possible. Since multi-threaded programming is known for its obscure and difficult-to-fix bugs, languages with support for STM may be a major advance in this area. STM is much easier to support syntactically in purely functional and mostly-functional languages like Haskell, Scala, and Clojure.
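
For context, here is a minimal Java sketch (my own illustration, not from the answer) of the kind of obscure concurrency bug this is aimed at: two threads incrementing a shared counter without coordination will usually lose updates, and the failure is intermittent.

public class RaceSketch {
    static int counter = 0;  // shared mutable state, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) {
                counter++;  // read-modify-write is not atomic, so updates can be lost
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // Usually prints something less than 200000, and a different value on
        // each run -- exactly the class of bug that STM (or any disciplined
        // handling of shared state) is meant to rule out.
        System.out.println(counter);
    }
}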

Brian
+1. But would an expert C# coder produce code with fewer errors than an expert C coder? Probably not, I'd guess. Bugs are there because a programmer put the code there. Having a GC means that C# experts will produce lots of bugs if they ever have to use unmanaged code in their app.
*I'm assuming that the experts are coding in their own language.
+9  A: 

No! Debuggers make your code worse!

Let me support this rash statement by telling you a little tale of first-hand experience in which I accidentally learnt something profound.

I took on a contract as a Delphi coder, and the first task assigned was to write a template engine conceptually similar to a reporting engine - using Java, a language with which I was unfamiliar.

Bizarrely, the employer was quite happy to pay me contract rates to spend months becoming proficient with a new language, but wouldn't pay for books or debuggers. I was told to download the compiler and learn using online resources (Java Trails were pretty good).

The golden rule of arts and sciences is that whoever has the gold makes the rules, and I proceeded as instructed. I got my editor macros rigged up so I could launch a compile with a single keystroke, and I used regexes to parse the compiler output and put my cursor on the reported location of compile errors, and so I had a little IDE with everything but a debugger.

To trace my code I used the good old-fashioned technique of inserting writes to the console that logged my position in the code and the state of any variables I cared to inspect. It was crude, it was time-consuming, it had to be pulled out once the code worked, and it sometimes had confusing side-effects (e.g. forcing initialisation earlier than it might otherwise have occurred, resulting in code that only worked while the trace was present).
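
A minimal sketch of that style of console tracing (the class and helper names here are mine, purely illustrative): log where you are and whatever state you care about, then pull the calls out once the code works.

import java.util.Arrays;

public class TraceDemo {
    // Crude trace helper: write the current position and any interesting state to stderr.
    private static void trace(String where, Object... state) {
        System.err.println("[TRACE] " + where + " " + Arrays.toString(state));
    }

    static int sumTo(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
            trace("sumTo loop", "i=" + i, "sum=" + sum);  // removed once the method is verified
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(5));  // prints 10 after the trace lines
    }
}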

Under these conditions my class methods got shorter and more and more sharply defined, until typically they did exactly one very well defined operation. They also tended to be specifically designed for easy testing, with simple and completely deterministic output so I could test them independently.

The long and the short of it is that when debugging is more painful than designing, the path of least resistance is better design.

What turned this from an observation to a certainty was the success of the project. Suddenly there was budget and I had a "proper" IDE with an integrated debugger. Over the course of the next two weeks I noticed a reversion to prior habits, with "sketch" code made to work by iterative refinement in the debugger.

Having noticed this I recreated some earlier work using a debugger in place of thoughtful design. Interestingly, taking away the debugger slowed development only slightly, and the finished code was vastly better quality particularly from a maintenance perspective.

Don't get me wrong: there is a place for debuggers. Personally, I think that place is in the hands of the team leader, to be brought out in times of dire need to figure out a mystery, and then taken away again before people lose their discipline.

People won't want to ask for it because that would be an admission of weakness in front of their peers, and the act of explaining the need and the surrounding context may well induce peer insights that solve the problem - or even better designs free from the problem.

Peter Wone
That is an awesome story about debuggers, and very close to my own experience. More and more now I just write it correctly the first time, and that improvement came with a very awkward debugger driving it. It focuses the mind.
Brian
That's a nice story, but why not use Eclipse? It can compile Java with a single click, has a debugger, and is free. I call BS.
shoosh
I was NEW to Java and didn't know about Eclipse, even if it included a debugger in early 2000, which is when all this happened. It's just as well that I didn't, or I wouldn't have learned anything.
Peter Wone
Not to be too rude, but it sounds like you're blaming your lack of self-discipline on having good tools?
Steven A. Lowe
Blaming? The flaw is mine, no doubt. Perhaps you are above the human weakness for the path of least resistance, in which case I apologise for wasting your time.
Peter Wone
Guys, guys! Eclipse wasn't common knowledge in early 2000 (the Eclipse Foundation wasn't even founded until 2004). It may seem radical, but I agree debuggers are evil, in the sense that if you spend more than 10% of your time debugging, chances are there is some crappy design in your code. Good code design doesn't require debuggers to waste your time (this is a lesson learnt from TDD). If programmers like debuggers, fine, but if you enjoy spending most of your time stepping through your own code and find **that** productive, then something is seriously wrong with you.
Spoike
A: 

From the studies I have seen, all languages have pretty much the same average number of defects per KLOC of untested source. So go for a language with high expressiveness: if 4 lines of Python do the same as 100 lines of COBOL, a Python program will have, on average, one twenty-fifth of the defects of the equivalent COBOL program.

chimp
That is interesting. I wonder if it holds for Perl oneliners, though ;)
Thomas
A: 

I feel that the syntax of a programming language can make it more prone to certain types of bugs.

For example, in programming languages with C-style syntax (like C++ and Java), code blocks are delimited by curly braces, and the whitespace used to visually distinguish code blocks is ignored by the compiler. Furthermore, for code blocks consisting of a single statement, the curly braces can be omitted. This can lead to errors which make the programmer believe that a piece of code means something other than what it really does:

int sum = 0;
for (int i = 0; i < 5; i++)
 sum += i;
 System.out.println(sum);

I'm sure most of us have either spotted a piece of code like the one shown above, or mistakenly written code like that at least once in the past.

The indentation of the code seems to imply that for each iteration of the for loop, the sum variable is increased by i and the current value of sum is printed to the console. However, the indentation is deceptive: the only line that is actually repeated is the one that affects sum, and the println call is executed only once, after the for loop ends.

In Python, where whitespace is a required part of the syntax, bugs like the above code cannot happen. For coders from C-style languages (like me), the Python syntax may be a little bit annoying, but it will prevent certain types of bugs from occurring by enforcing a style of coding.

Below is the same code written in Python:

sum = 0
for i in range(0, 5):
    sum += i
    print sum

The code in Python does what the indentation seems to imply: for each iteration of the for loop, the sum is increased and the current value in sum is output to the console.

Perhaps one language feature that could reduce the number of these subtle bugs in languages with C-style syntax would be to enforce the use of curly braces even for single-statement code blocks. Python enforces code blocks through whitespace, and that seems to work.
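
As a sketch of that suggestion (mine, not the original answerer's), here is the earlier Java snippet with the braces written out, so the code actually matches what the indentation implies:

int sum = 0;
for (int i = 0; i < 5; i++) {
    sum += i;
    System.out.println(sum);  // now printed on every iteration, as the indentation suggested
}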

Although my response may not have been exactly what was expected as an answer, I thought it was still a valid point to address.

coobird
It definitely is!
Thomas
I don't have the indentation problem for two reasons: (1) I almost always put the braces in, and (2) I use tools (e.g. Eclipse) which provide auto-indentation and/or automatic reformatting, which would make such an indentation error immediately obvious.
joel.neely
+1  A: 

Various programming languages (regarded purely as notation) provide different levels and kinds of opportunities to create bugs, and offer different levels and kinds of support for detecting bugs. As an extreme example, it's easier to create a source file that is syntactically correct but semantically meaningless in assembly language than in Haskell. The more constrained a language is, the harder it is to "sneak" an error past the parser.

As the language takes on more responsibility for low-level bookkeeping of memory, type-checking, etc., the programmer is presented with fewer opportunities to make easily-avoidable errors and can concentrate more on the purpose of the program than on the plumbing. On the other hand, such languages tend to require a steeper learning curve and more per-task planning and design; therefore they tend to be scorned by programmers who don't want to take (or don't have) the time to learn a new approach.
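
To make that concrete, here is a minimal Java sketch (my own illustration, with a hypothetical class name) of an error the type-checker refuses to compile, versus the same mistake deferred to run time by a cast:

public class TypeCheckSketch {
    public static void main(String[] args) {
        Object o = "hello";

        // Integer n = o;          // rejected at compile time: incompatible types

        Integer n = (Integer) o;   // compiles, but fails at run time with a ClassCastException
        System.out.println(n);
    }
}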

But my experience is that education and personal development habits have as much to do with defect level as the language. I've known meticulously careful programmers who seldom made stereotypical errors in C, and kludgers who made horrible messes in Java. Regardless of language, a programmer who makes good use of available libraries and tools is likely to have fewer bugs than one who is constantly reinventing the wheel and expressing everything in terms of primitive data types.

And my experience is such that I wholeheartedly agree with Peter Wone's skepticism re debuggers. Dependence on a debugger seems to pose too much of a temptation to slap something together and then grovel around in memory figuring out what it's really doing. I find myself making much better progress with higher-level techniques, such as TDD.

joel.neely
A: 

Here's a theory of bugs which I just invented (meaning I have no data to back it up).

The number of bugs in a piece of code is a function that's monotonically increasing in the following four parameters:

  • how much you have to think about
  • how hard it is to think about those things (state changing over time adds to this)
  • how much you need to know about other pieces of code to write your code well (i.e. how coupled everything is)
  • how hard it is to come across that knowledge

Some examples of the four factors:

  • amount of thinking: that's basically lines of code. Using a dense language helps here.
  • hardness of thinking: visualizing state change over time adds to this. Things going on in parallel adds to this. Dealing with non-local control flow (exceptions and continuations) adds to this.
  • amount of knowledge: operator overloading and parameters invisibly being passed by reference add to this (e.g. C++). Interfaces (APIs) with many entry points (e.g. X11) add to this. Good language and API design helps here.
  • hardness of (acquiring) knowledge: ctags and jump-to-function-definition helps here. Invoking an unknown method (you know the name but not where in the source) hurts here. Documentation helps here.

As has been pointed out, there's evidence for the lines-of-code thing. Suppose functional languages are better in this respect than other (dysfunctional? :D) languages; is it because most of them are just denser, or is it because they encourage doing away with side-effects? I don't know...

Jonas Kölker
A: 

The answer is definitely and obviously yes. If it's not obvious, go code in assembler and C all day, then switch to Python or Java.

Longpoke