Even when viewing the subject in the most objective way possible, it is clear that software, as a product, generally suffers from low quality.

Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical.

Software, on the other hand, is much more prone to poor quality: unexpected crashes, erroneous behavior, miscellaneous bugs, etc. Sure, there are many software projects and products which show high quality and are very reliable, but lots of software products do not fall into this category. Consider paradigms like TDD, whose popularity has been on the rise in the past few years.

Why is this? Why do people have to fear that their software will not work or crash? (Do you walk into a house fearing its foundations will collapse?) Why is software - subjectively - so full of bugs?

Possible reasons:

  • Modern software engineering has existed for only a few decades, a small time period compared to other forms of engineering/production.
  • Software is very complicated, with layers upon layers of complexity; integrating them all is not trivial.
  • Software development is relatively easy to get started with; anyone can write a simple program on their PC, which leads to amateur software leaking into the market.
  • Tight budgets and timeframes do not allow complete, high-quality development and extensive testing.

How do you explain this issue, and do you see software quality advancing in the near future?

+8  A: 

I think the problem is inherently human. Human brains haven't evolved to cope well with the high degree of abstraction required in software development. Our ancestors had to face tigers in the savanna; natural selection adapted them to that situation.

Most animals, even “intelligent” ones, fail at the simplest tasks that require any kind of abstraction at all. For example, I’ve heard from zoologists that cats are unable to find their food if, in order to get to it, they have to turn their back to the food source. They just can’t make the mental connection that turning in the opposite direction might help them bypass an obstacle.

Human minds can cope with much higher degrees of abstraction. However, the problem of software development is a completely new one; there has never been a comparable situation. Even though humans have known mathematics for millennia, it has never been a trait favoured by evolution (and the time span has been much too short to play any role in evolution anyway).

In conclusion, human minds aren't really well evolved to face this kind of task. It just doesn’t come naturally. Therefore, everything has to be found out the hard way. Intuitively, we say “building software is much more complex than building houses” – in a way, that’s right. But it’s most probably due to what I’ve outlined above.

Admittedly, that explanation, even if right, is quite abstract and I do think that software quality will improve substantially in the near future (long before any mechanism such as natural selection can kick in) because we’ll find out more about the processes involved and how to refine them.

Konrad Rudolph
+1, that's more or less it on the meta level, IMHO! I bow to those software engineers who manage, nevertheless, to create great software; you're superhuman... in a way. Or just a bit ahead of evolution.
So humans did evolve enough to build houses? Although I find your explanation nice to read (didn't know about the cats) I think it is untrue.
“So humans did evolve enough to build houses?” – yes, there’s an obvious evolutionary advantage implied here.
Konrad Rudolph
+44  A: 

One major reason is that for the most part, software "engineers" aren't really trained as engineers. One of the most important principles in engineering is to keep designs as simple as possible in order to maximize reliability (fewer parts = fewer things that can fail).

Most software developers that I've worked with over the years are not just unaware of the KISS principle, but also actively committed to making their software as complicated as possible. Programmers by their nature enjoy working with complexity, so much so that they tend to add it if it isn't there already. This leads to buggy software.

Also, most software engineers suck... the elephant in the room?...
I'm glad you put ellipses in there. :)
Makes all the difference :-) There's an interesting schism between what developers often 'officially' say about their art, and what they'll say over a pint of beer after 10 years in the industry... I know which one to have more faith in! (I do actually love programming, promise!)
KISS? Yeah, right :-) We'd all love to keep it simple, but the bloody users keep coming up with obscure business rules that need X tweaking. Simple software is **only** found in the pages of textbooks or taught on CS courses.
That reminds me of a line from a show about Hollywood agents: "this job would be so much easier if we didn't have to deal with the @#*%#! talent".
+1, but you need to add one more thing: Software Engineers as a rule have zero discipline. That is the real difference between us and the rest of the engineering world.
Chris Lively
Agreed! People think that Software is somehow different. It really IS just engineering in the end and us software engineers need to be trained just like other engineers.
And a lot of them suck because they heard on the radio that it's good money, and they convinced someone to hire them, not because they are passionate about and interested in the craft.
Um... Engineering isn't a magic fairy dust that you sprinkle on a problem to make solutions reliable. Thousands of buildings and bridges fell down before we learned how to build them reliably, which required us to learn fundamental laws to be able to model them. We don't yet know the "laws of physics" for software. Without those, there's no engineering possible.
Sean McMillan
@Sean: the basic principles of engineering have little to do with the "laws of physics", and actually derive from (as you point out) the lessons learned from a massive amount of trial and error. The same sort of thing is possible with software (with or without whatever the "laws of physics for software" might provide) - it just hasn't happened yet. And yes, I know that engineering isn't a magic fairy dust - my last name *isn't* "Strawman".
My university offers Software Engineering and teaches it as an engineering discipline. From what I know and experienced there, it doesn't make the slightest difference. You don't discover software, you create it. That's why it's so buggy. You discover how to build a bridge through trial and error; then you simply apply those discoveries to making another one, or a bigger one.
A bridge has only one function. To make an analogy, you need to compare it with software that has only one function, like Notepad. I would say you get the same degree of quality from a bridge as from Notepad. On the other hand, a complex piece of software would need to be compared with a complex structure; a whole city would equal Windows. With the money injected every year into city infrastructure maintenance, I'd say a whole city is just as crappy as Windows is.
People don't agree on what good code looks like. I think it's this:
@didibus I've crashed Notepad many times. I've yet to break a bridge.
+18  A: 

Like software, a house is made of many smaller structures - bricks, doors, roof tiles and so on. Unlike software, however, each of these pieces has already been pre-made and tested long before it reaches the house. Bricks are subject to stress and pressure testing, door hinges are tested thousands of times for durability so the fact that these parts should not fail is a given and we know their limits.

In software, each part is often being used for the first time - that is, it's entirely new code. Maybe we don't know quite how it behaves in all circumstances. This is one reason why I believe so much in code re-use where possible.
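
To make the "tested bricks" point concrete, here is a minimal unit-testing sketch in Python; the `clamp` helper is a made-up example of a small, reusable part being stress-tested in isolation before it ever reaches the larger system:

```python
# Sketch of "testing the bricks before they reach the house": each small
# component gets exercised at its limits in isolation, so by the time it
# is assembled into the larger system its behaviour is a known quantity.

def clamp(value, lo, hi):
    """A tiny reusable 'brick': constrain value to [lo, hi]."""
    return max(lo, min(hi, value))

# The equivalent of a stress/pressure test: probe the edges,
# not just the happy path.
assert clamp(5, 0, 10) == 5      # normal load
assert clamp(-3, 0, 10) == 0     # below the lower limit
assert clamp(99, 0, 10) == 10    # above the upper limit
assert clamp(0, 0, 0) == 0       # degenerate range

print("all brick tests passed")
```

Once a part has survived this kind of probing, reusing it elsewhere carries the same confidence a pre-tested hinge or brick does.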

Thank you for giving me a great explanation of why unit testing is so important. +1
+19  A: 

Not a comprehensive list but I can see several reasons:

  • An unexpected microfailure in a physical system is generally not catastrophic, but it can be in a software system. A single uncaught and unhandled exception is not recoverable.
  • It is difficult for a customer to inspect a software system; developers and product managers know this (at some level) and are willing to trade quality for expediency.
  • The costs of a software failure are often not as high as the costs of the failure of a physical system, thus it often does not pay to invest the resources to prevent all failures.
  • Many (if not most) computer science programs don't do a good job of teaching programming skills. They focus mainly on the science of computing rather than the craft of programming.

These are not defenses, just observations. Personally, I would recommend using practices that reduce the incidence of error, like TDD, customer on-site, pair programming, continuous build, etc.
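
The first bullet above (a single unhandled exception is not recoverable) can be sketched in a few lines of Python; the helpers here are hypothetical illustration, not anyone's production code:

```python
# In a physical system a micro-failure is usually local, but in software
# one unhandled exception aborts the whole computation unless someone
# explicitly catches it.

def fragile_sum(values):
    # One bad element ("oops") kills the entire run.
    return sum(int(v) for v in values)

def resilient_sum(values):
    # Handling the failure locally keeps the rest of the system alive,
    # at the cost of deciding what a "partial" answer means.
    total = 0
    for v in values:
        try:
            total += int(v)
        except ValueError:
            pass  # skip the bad part instead of crashing everything
    return total

data = ["1", "2", "oops", "4"]

try:
    fragile_sum(data)
    crashed = False
except ValueError:
    crashed = True

print(crashed)              # the uncaught path aborts: True
print(resilient_sum(data))  # the handled path degrades gracefully: 7
```

The physical analogue of the second function is a roof shingle failing without the roof failing; the default in software is the first function.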

+1. I, unfortunately, can't agree with your final point :-(
TDD has some studies showing a significant reduction in bugs (65-90%) when it is used, at only a modest cost increase (15-35%). Pair programming is said to have similar effects, though I don't know of any studies off the top of my head.
+1 for the first argument.
Adam Matan
+3  A: 

In my opinion the following are at least partially responsible:

  • Thorough and complete testing is tedious and time consuming
  • Sometimes it's less expensive to ship buggy software than it is to fix it (or, more likely, it's perceived to be less expensive)
  • Lack of understanding of the problem being solved. If you don't completely understand what you are solving, it's going to be difficult to do so without introducing bugs.
  • Most programmers are pretty bad programmers (my opinion, of course, but in my own experience, I'd say only one in five programmers really knows what they're doing)
  • If a problem is complex, it's easy to get lost in one aspect and neglect another
  • Some programming languages are too verbose, making it difficult to keep the whole problem in mind at any one time, which allows bugs to creep in (for example, I tend to make more mistakes in Java than in Python; it may just be coincidence, of course, but I feel that Python's higher-level code helps me solve problems in fewer discrete chunks, leaving less room for bugs)
  • Dependencies. I believe that dependencies (I mean calculations and data which depend on one another) are a major cause of bugs - when they're not managed properly anyway (dependents not getting updated when they should, etc.)
  • A lot of programmers are lazy or distracted, causing them to make mistakes. I know I'm guilty of this sometimes.
  • Most programmers aren't rigorous or methodical enough when approaching a problem. Instead of carefully planning out a solution and verifying that it is correct (formally or otherwise), they dive in and start coding. I know I sometimes do this even though I know I shouldn't.
  • The tight coupling of operations (instructions, statements, code blocks, functions, etc.) makes code less dynamic and fluid, which makes it difficult to update code, determine where code should be split up, what code can be reused, what should run concurrently, and so on. This is, IMHO, another large source of error, and one that's not easily solved with existing code.

And probably many more factors which I have forgotten about.
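
The "Dependencies" bullet above deserves a concrete sketch. A hypothetical Python example of a dependent value (a cached total) not getting updated when it should:

```python
# A derived value (the total) computed once and cached, but never
# invalidated when its inputs change: a classic dependency bug.

class BuggyInvoice:
    def __init__(self, prices):
        self.prices = prices
        self._total = sum(prices)  # cached dependent value

    def total(self):
        return self._total  # stale if self.prices was modified later

class CorrectInvoice:
    def __init__(self, prices):
        self.prices = prices

    def total(self):
        # recompute from the source of truth every time
        return sum(self.prices)

buggy = BuggyInvoice([10, 20])
correct = CorrectInvoice([10, 20])

buggy.prices.append(30)    # the dependent cache is NOT updated
correct.prices.append(30)

print(buggy.total())    # 30 (stale: the bug)
print(correct.total())  # 60 (correct)
```

Real systems cache for performance, so the honest fix is usually explicit invalidation rather than recomputation, which is exactly the bookkeeping the bullet says we get wrong.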

+12  A: 

A lot of high-falutin' ideas here, but I think the answer is very simple.

I'm not trolling, but the fact that most software engineers a. suck and b. have absolutely no desire whatsoever to develop themselves has everything to do with this. The 9-to-5'ers, especially in internal software (I work in internal software myself, so... :-), just have absolutely no idea what they're doing, with nothing in place to specify what is good and what isn't: the blind leading the blind. It's very rare for software to be developed to anything resembling a good standard anywhere. Most software is a big ball o' mud.

Ultimately there are a few good software houses out there, the rest suck. And we are stuck with the suck on the whole... the amazing thing is that anything actually works!!

"Stuck with the suck" is a great phrase. I'll give you 10% of the T-shirt sales.
Not that I disagree with you, but do you think that's all there is to it? The building industry has just as many 9-5'ers with no desire to improve, don't they?
Even if you look at software written by experts (say NASA), it still takes more effort to get to the reliability of modern physical construction. It's more than just sucking or a lack of growth.
Who says NASA programmers are experts?
The astronauts whose lives depend on their flight-control software.
The difference with the building industry is that 9-5 is ok if all you are doing is driving nails. No offense to those who drive nails, but software is a little more complicated then that.
@MusiGenesis - :-) I didn't even think about it when I said it! Haha! @jaif - There is more to it, definitely; I didn't want to do down other people's answers, rather I think it's the main cause of such problems. Also, in building, things are far simpler and more regimented. Coders have too much power.
+5  A: 

This problem is definitely not unique to software development. All the reasons you mentioned are valid, but where software is not life- or mission-critical, much less attention is generally given to quality.

As such, the house analogy is not a good one: if houses break down, human lives could be lost, which is unacceptable. For this reason much tighter quality control is maintained.

There are plenty of industries where quality is all over the board, and it is usually attributed to value considerations. You pay more, you get higher-quality products, and it's the same in software development.

Eran Galperin
There are also industries that rely on software quality: if a car has software, the software needs to work. The same goes for nuclear power, weaponry, etc., but also for health care and process industries... If products become poisonous to people because of a software fault, that is bad.
As I wrote, where the software is mission-critical (i.e., not failing is important for the business), quality usually has higher standards.
Eran Galperin
+35  A: 

"Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical."

This is clearly not true -- it's a really bad example. I've seen a large number of houses that are just shacks. Houses that can, and often do, simply collapse in heavy weather. They don't use windows and the door is a sheet of plywood.

To get away with building a shack instead of a house, you have to build your shack where there are no housing codes or standards. Or, you have to evade inspection by claiming you don't "live" there. Or you have to be heavily armed so that the inspectors don't bother you.

Software quality is like all other forms of quality. The issue is this.

Most consumer products have warranties (expressed or implied). Some even have applicable standards for safety.

Consumer shrink-wrapped software specifically avoids offering a warranty of any kind. It's a sad, shabby business. To avoid warranty claims, they don't let you purchase software; you merely license it, or purchase a right to use. Read your EULAs. There's no warranty. Quality doesn't matter.

In-house software, developed by large IT organizations, has no warranty of any kind either. It's entirely based on corporate politics, internal reputation and influence.

Most of the time, most in-house software developers actually have good reputations and have earned those good reputations by providing outstanding levels of service.

But some software developers are slapping shacks together with cast-off plywood, pallets, blue poly tarps, and the remains of an old travel trailer that they found.

interesting insight.
Also relevant is that many of us live in countries where housing is *strictly* regulated, and poor quality is considered unacceptable even if both builder and purchaser want to cut corners. In the absence of that enforced regulation, housing runs the whole range including ghastly deathtraps; *that* is what we see in the field of software.
And different countries' building codes differ hugely: a magnitude-6 earthquake flattens houses in Australia, but in New Zealand and Japan they stay standing (because they expect 7s and 8s). So the regulatory regime is set by expectations, a bit like the very hard DO-178B for aircraft software.
Tim Williscroft

I believe broken-windows syndrome is also a real factor here. Depending on where you're working and how strict your company's code review process is, a lot of bad code can go in and build up.

Especially when most apps survive longer than their initially expected timeline.

+3  A: 

I think it's because every part of a (well-written) piece of software is unique. A house is a big project, but there's a lot of repetition: there are a ton of identical nails, 2x4s, bolts, bricks and so on. In contrast, in a well-written piece of software there's only one of each thing. It's like building a whole house by reusing the exact same brick, nail, and piece of lumber over and over: sure, you make a smaller project, but if that single brick, nail or piece of lumber is shoddy, the whole house will fall down.

Brent Royal-Gordon
Houses are sturdy because of the amount of redundancy. If one nail is out of place, it generally doesn't cause the house to fall. In software, the equivalent of a nail out of place could cause the whole thing to crash.
Software isn't necessarily small either, because it can have millions or tens of millions of lines of code. There are big software projects too.
+3  A: 

Engineering in all other disciplines is a highly mathematical subject: a structural, aeronautical, or chemical engineer can analyze the problem they are trying to solve using well-developed mathematical models.

All attempts to do this for software engineering on a large scale have failed.

This may be due to the relative 'newness' of software engineering, but I think it is more inherent than that.

Rob Walker
+19  A: 

Though I agree with the reasons given so far, there is one very simple explanation missing here that is rampant throughout the software industry: bad management.

I worked as a consultant for many years, and I can't tell you how many places I've worked at where the developers were expected to finish yesterday a project they were given tomorrow. It was either deliver the project on time or lose your job, so we had no choice but to write code that was thrown together and not adequately tested. This was especially prevalent where the managers were not in the IT field themselves and had no clue what it takes to write and support quality software.

Yes, exactly. Great projects are defined in terms of quality, not time. That's why open-source works best; there's no deadline to get it done.
Kent Brewster
+2  A: 

Another reason not mentioned so far: even when a piece of software achieves reasonably good "quality" (in the sense that it does what was initially specified), it can still go horribly wrong.

Why?

Because the final client may end up using it under totally different conditions than the ones specified in the beginning, making the software pretty much useless.

Case in point: a program computing the return rate of house mortgages, based on 20 years of data, does a perfect job... until you start feeding it NINA mortgages (No Income, No Assets): it will keep saying the return rate is fine. Actually... we all know too well by now the real return rate for those mortgages ;)

So the difference with the quality of a house is that you have to evaluate the usage conditions of software released into production to really assess its quality...
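
That mortgage example can be caricatured in a few lines. Everything below (the figures, the guard, the function names) is hypothetical illustration: a model that is "correct" against the data it was specified for, silently giving nonsense when the usage conditions change:

```python
# Caricature of the mortgage example above. The estimator was specified
# and validated against 20 years of conventional-loan data, then is
# silently fed a kind of loan it was never specified for.

HISTORICAL_DEFAULT_RATE = 0.02  # made-up figure "learned" from old data

def expected_return(principal, interest, loan_type="conventional"):
    # The original spec only ever considered conventional loans, so the
    # model applies the historical default rate no matter what loan_type
    # says -- the parameter is (deliberately) ignored, as in the story.
    return principal * interest * (1 - HISTORICAL_DEFAULT_RATE)

def expected_return_guarded(principal, interest, loan_type="conventional"):
    # Quality evaluated against *usage conditions*: refuse inputs the
    # model was never validated for instead of answering confidently.
    if loan_type != "conventional":
        raise ValueError("model not validated for loan type: " + loan_type)
    return expected_return(principal, interest)

print(expected_return(100_000, 0.05))                    # the specified use
print(expected_return(100_000, 0.05, loan_type="NINA"))  # same answer: silent nonsense

try:
    expected_return_guarded(100_000, 0.05, loan_type="NINA")
except ValueError as err:
    print("rejected:", err)
```

The point is that both unguarded calls "work": nothing in the code is wrong by its original spec, yet the second answer is meaningless.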

+9  A: 

As others have pointed out, there are plenty of badly built houses: houses so infected with fungi after a single year that it's a hazard to your health to live there, houses where the rain leaks in through the roof, houses that collapse in windy weather, and houses that fail in countless other ways. And not just that: there are also countless structural engineering projects which end up hugely over budget, take years longer than planned, and cost three times the original estimate. That is not unique to software engineering.

But another important factor is that building engineers can take a lot of things for granted. When you're building a house, you don't have to implement gravity yourself. The structural strength of steel or concrete just happens, you don't need to do anything special to ensure that it works in leap years too, or if the user opens the bathroom door at the same time as the phone rings.

In software, the only rules are the ones you code yourself. You're building a house out of the empty void, not out of well-known materials like concrete, steel and wood, and without the luxury of a fixed framework supplying gravity, solid ground, a consistent and stable atmosphere consisting of the same mix of gases every day, and keeping a pretty much constant pressure.

Of course, there is also plenty of bad management, plenty of incompetent programmers, plenty of bad practices, and so on. But it's not just that.

Absolutely. Additionally, I think part of the problem is exactly that we insist on thinking about "software construction" in terms of analogies to physical construction, and hand-wave over the important differences that you mention.
Tim Lesher
Sure, there are also ready-made frameworks and GUI gadgets. But often using a framework adds to the complexity: when programming some logic, a person has to take into account the many lines of framework code, too.
"The structural strength of steel or concrete just happens." This is extremely false, as any building inspector will tell you. These things can only "just happen" when there are strict standards dictating steel and concrete composition, along with frequent testing, and thorough enforcement. Your house is not simple. It is the result of hundreds of years of small refinements to structure and materials. Of course, the builders have the advantage that people keep asking them to build the same thing over and over.
+2  A: 

One of the reasons is that the industry is very young compared to, e.g., masonry (no conspiracy theories, please). Also, ours is a field that changes too quickly. Consequently, we are never at the top of the game :( but we do try.

Dmitri Nesteruk
+3  A: 

I don't feel comfortable with all the house-building analogies. Most people (even programmers) are not aware that any nontrivial software is usually much more complex than a house. Also, the parts of a software system have zero tolerance for errors. A house will not break down if one brick is slightly off spec; software will.
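
The zero-tolerance point can be illustrated with a made-up example: an off-by-one error, the software equivalent of a brick slightly off spec, does not degrade gracefully; it fails outright:

```python
# A brick slightly off spec vs. code slightly off spec: an off-by-one
# in an index is not "1% wrong", it is a hard failure.

def last_element_buggy(items):
    return items[len(items)]      # off by one: spec said "last element"

def last_element(items):
    return items[len(items) - 1]  # within spec

wall = ["a", "b", "c"]

print(last_element(wall))  # prints: c

try:
    last_element_buggy(wall)
    failed = False
except IndexError:
    failed = True

print(failed)  # True: the whole call collapses, not a third of it
```

A wall absorbs a slightly short brick; an index that is one too large takes the whole operation down with it.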

Houses don't produce error messages or exceptions. But I can hear the noise from my neighbours when they have a party. So houses and flats are not perfect either; we just don't get error messages.
+5  A: 

Making a house that will stand up to normal weather and aging can be accomplished with simple constraints on the selection of materials and the distribution of load. There is no shortage of modeling systems that can spit out workable designs automatically.

So yes, following a few simple rules, you too can design a simple structure that does not fall down. So what? :)

I love my new house, but it's far from bug-free. The builders failed to do a proper flow analysis of water through the piping, and the result is awful water hammer when the outside sprinklers are switched on.

The concrete floor in the basement started cracking after only 2 years.

There are several places where cracks have developed in walls.

The blower motor in the furnace failed after 1 year of use.

Sinks were not properly caulked from the beginning, causing water seepage and a big mess in the sink cabinets.

After a few days of 90+ deg inside temperature all the rubber door stops in the entire house cracked and needed to be replaced.

Various GFIs were too picky to be useful for powering much of anything and needed to be replaced.

A few fluorescent lighting fixtures went bad after the first year and needed to be replaced.

God knows, if you've ever gone through the process of building a new house, even with contractors who have their own QA people, you still need to stay on top of everything and file 'bug reports' regularly (I've filed countless dozens) to end up with a house that resembles the one you ordered.

I, like most homeowners, am no stranger to Home Depot... Until we have houses that can last 100 years without any problems or "regular maintenance", I disagree with the basic premise.

Lack of formal training is a universal problem. Complexity is a universal problem. Just because houses are "big structures" does not also mean they are "complex structures".

House-building quality also varies by country. In poor countries people live in huts; in rich countries they build rather good, big houses. In cold regions they have to build better than in warm regions, because the weather is worse.
+36  A: 

I see a lot of browbeating here, but I think we should acknowledge one point:

Software Development is HARD

It used to be considered a truism that when IBM wrote OS/360 it was, at that point, the most logically complex system ever developed by humans.

Since then we've developed techniques for handling more and more complex systems. Our languages, APIs and tools have added layers of conceptualization and abstraction not dreamed of when OS/360 was hand-cranked together. The trouble is that the complexity of what we are trying to achieve has increased in step, or maybe even a little faster.

Now, this isn't to deny that there are problems with over-tight deadlines, poor programmers and all the issues brought up here. But the modern world practically runs on the software developed over the past 20-30 years, and while there's lots of room for improvement, we're not doing too badly, all in all.

I think this is probably the closest answer, but I'd like to add one more thing: Software development is in its infancy. While engineers have had hundreds, even thousands of years to perfect their craft, software engineers have had a handful of decades.
Jeff Hubbard
I'm amazed by the arrogance of H/W engineers looking down on software defects. But consider the difference in complexity of any physical device which interacts only within space-time (giving 12 interaction "surfaces": up, down, ...) and any line of code that can affect many more than 12 other items.
David Schmitt
@David: I disagree with your comment. Software and hardware are really not so different. Or rather, they needn't be. The problem here is the temporally sequential code used in MOST programming languages vs. the routing of data through a flat hierarchical network. The former is difficult because you cannot change one part without it affecting parts higher up. The latter does not have these issues. IMHO.
To me, this answer smacks just slightly of elitism. Surely other things are hard, such as putting a man on the moon. I would add a corollary that software is hard, AND it doesn't really matter if it sucks in the way that it does if other hard things do.
+8  A: 

I think it has a lot to do with the malleable nature of software.

Because software is heavily modified over time, it's important for it to be designed such that changes in one part of the software don't affect other parts. Even though principles for addressing this are widely understood in our industry, there are a couple of problems. The first is that there are plenty of individual practitioners who don't understand or care about those principles. The other is that there can be competing forces that cause us to violate those principles (budget, schedule, or even engineering concerns such as performance engineering). So over time software becomes complex and brittle unless you apply constant discipline to prevent that outcome.

Another piece is that because everybody (developers, managers, even customers) understands software is malleable, people have at least some level of comfort with the idea that if there's a bug, we'll fix it with the next release or patch. I've often heard it said that it's more expensive to fix prod bugs than QA bugs, and more expensive to fix QA bugs than dev bugs, etc., and given the economics we should do things like QA and unit testing. And ultimately I agree, but the idea can be carried too far. If you're able to find the bug via unit testing or QA, yes, you're better off, but there's a cost associated with finding the bug in the first place, and diminishing returns kick in pretty quickly due to the underlying complexity of the system (e.g. combinatorial explosions in the number of possible configurations). So the economics mandate catching most problems in dev/QA by investing in training, code reviews, testing discipline, etc., but not trying to be exhaustive, because in most cases the cost of fixing user-discovered bugs is reasonable. This is very different from most consumer products, where the cost associated with fixing bugs is great. The ROI for catching problems early is much greater if a late discovery is going to mean a recall or pulling something off the shelves.

Willie Wheeler
To go with the building example: it's like you build a small hut, and halfway through building it the owner comes along and says: "Look, I thought about it and I'd rather have a villa." So you change the building, but the foundation has already been laid, so you have to modify what's already there. The owner comes back after two months and says "Guys, one bathroom is not enough. I'd rather have three. It's done tomorrow, right?" And so on and so on. Good software engineering relies on permanent iterative re-architecting over time.
+1  A: 

I can agree with most of the discussion above, but there is one factor I think has been overlooked: the pricing of software. When Microsoft produces a mastodon like Word, they have to put a lot of effort into the work, but they can spread the costs over millions of users, so customers get used to low prices. Added to this, we see a lot of "free" software, and even people thinking they have the right to get those intellectual efforts for free. So the software developer is caught in a situation where he has to either a. take his time, be careful, test thoroughly, you name it... and not get the money back, OR b. make it quickly to get some reasonable amount for the work. The final result will probably be somewhere in between, but the limits are there.

+2  A: 

The business reason: because the vast majority of software does not need to be high quality.

A low quality bridge or a low quality car can easily kill someone. Thousands even. Low quality software however, will usually just annoy people. It is simply not worth it to spend as much time making quality software in most cases.

Indeed, if you look at the software used in safety critical applications, it is usually of far higher quality. It has spent more time in development, and been more thoroughly tested. It has been engineered in the same way a car or bridge would be engineered.

Dylan White
+1  A: 
  • When building a house, construction plans do not change while the house is built.
  • Few requirements engineering skills are needed by the building company.
  • Usually, there is a professional architect. Architecture is a specialized profession.
  • Physical structures have fewer degrees of freedom than logical structures.
Your first two points are not true. Halfway through building their house, my parents moved the location of some windows around (to get a better view of the lake). Better requirements engineering skills may have picked this up during design.
You are right with the first point. On the second one, I wrote "... by the building company." - It would have been the architect's job to do proper requirements engineering.
+2  A: 

One very unfortunate reason: Humans.

Human brains are not really built for elaborate networks and relations of complex abstractions.

Thus, while humans can deal with complex buildings or machinery because these are (literally) concrete by relying on things like physical dimensions and proximity and the natural laws of physics, the same cannot be said for software.

At best, we can use engineering practices to break down problems in such a way that we only have to grasp smaller problems at a time, or so that the impact of our mistakes is limited.

In addition, human communication is ambiguous and imprecise by the nature of natural languages, while software must be precise by nature.

+4  A: 
+1  A: 

Because people get the quality they deserve.

If you buy a DVD player and it doesn't work, you take it back, bang the table, log it on some Internet forum etc.

The manufacturer starts to get a bad press and does something about it.

But if you are running some application and it throws an error along the lines of "The program has encountered a problem and needs to exit", we all just accept it, restart, reboot or whatever.

There is no compulsion in this case for the vendor to do anything.

+2  A: 

I think that is the wrong question to ask. Quality software is made and does exist. Some of it is still running reliably years after it was created.

The real question is why software quality is so expensive. It is expensive enough that people requisitioning a system would prefer to keep a buggy system that only just does the job rather than pony up the huge expense of finding and fixing the bugs. This leads to crappy software being released.

Quality software is hard to keep quality, too. Imagine a console app that was written years ago, and written well. Compared to an interface built with modern tools, it is no longer quality. Technology moves so fast. For a car, it takes decades before its interface is no longer considered quality (well, not so much now that computers are being integrated into the bloody things).

+43  A: 

I see time and time again that the marketplace does not reward software quality. Apple is a good example; for many years they had a superior product, but people would not pay. (Not just the up-front costs of Apple but also the costs to change from whatever they were using that year.)

I still see many people making buying decisions on the idea that 'more is better'. Both Microsoft and the Free Software Foundation have conditioned people to two pernicious ideas: 1. The product with more features is always better. 2. It is OK for software to fail once in a while. Because the people making buying decisions do not understand that simpler is better, there are tremendous economic incentives to create software that is complex but appears to have nice features.

War story: in the early 1990s, a friend of mine tried to write his dissertation using Microsoft Word. Everything went fine until his manuscript got to around a hundred pages. At that point Word refused to let him change it any more. The eventual fix was for him to double the amount of memory on his PC. In the early 1990s this was an expensive fix.

After it was all over I asked why he had chosen Word. His response:

I liked the alleged features.

People buy software on perception, not on reality. It doesn't matter if the reality of the software is that it is hard to use or doesn't work; if it looks good, people will buy it.

Friends of mine who study economics tell me that this behavior is the sign of an immature market and that when computing matures and the demand for computing stabilizes, things will be different. I am not so sure: I look at the market for automobiles, which have 100+ years of engineering know-how put in, and I see a much bigger market for cars that make people feel good than I do for cars that are boring but just work. I look forward to a day when buyers reject programs for having too many features and insist on buying simple, boring programs that just work.

Norman Ramsey
During the recent economic up-cycle, companies have generated over-engineered products in an attempt to convince us that their product is better. I foresee that during the current economic down-cycle customers will look for 'value', which in turn will encourage companies to simplify their products.
+1, plus more if I could. Quality software sucks because quality is expensive. Really expensive. And people don't want to pay for it.
Greg D
(And I've been a die-hard PC user all my life. Does that make me a hypocrite, or just normal?)
Greg D
Cars are status symbols so I don't think that's a good comparison ... hopefully software isn't a status symbol--for anybody.
If you think the only difference between a quality car and an expensive car is status, you must have only driven one type of car.
Isn't you whole point undermined by the fact that Apple have been rewarded in the marketplace when they started doing things right, just after Steve Jobs returned? Their stock price has gone up greater than 100x in the last 15 years.
Matthew Lock
More correct to say: the marketplace does not reward quality, period.
I was surprised to see a mention of Free Software Foundation. What have they done to perpetuate that idea that "more is better" and "it's okay to produce crap"?
@Tshepang: for 'more is better', compare FSF documentation with Bell Labs documentation or BSD documentation for the same commands. The FSF never saw a feature it didn't like. For quality, have a look at the source code and design of some of the signature achievements of the FSF: emacs and gcc. These are incredibly useful tools, and I use them every day, but I would not hold them up as examples of our craft.
Norman Ramsey
@Norman: It's the first time I come across such an observation, thanks. You also state that FSF has conditioned people that "It is OK for software to fail once in a while". Can you give an example of that.
+1  A: http://en.wikipedia.org/wiki/Gresham's_Law

I tried around 5 times to make that link a link. I give up. Anyway Gresham's Law extends to any situation where low quality is accepted in place of high quality.

Windows programmer
+2  A: 

Moore's Law causes some of the problems. Unlike Civil Engineering, everything changes quickly, so that expertise, standards, and reliable components are obsolete long before they have matured. Before we can work out how to properly solve the problem, it's gone. Topical example: many-cores.

Civil Engineering has also absorbed new technologies (e.g. pre-stressed concrete), but it's many times slower, and it doesn't require the reinvention of everything.

I also think the fact that software is so malleable that we can reinvent everything, is a contributing factor towards the pace of change. :-) It's more expensive to build a house, so people are more circumspect. However, as someone said, you're not allowed to build a cheap, DIY ramshackle shack (in developed countries).

+11  A: 

Check out this related question.

A lot of responders here are blaming these defects on unskilled software developers.

I think the problem is far deeper and more interesting than that.

A great deal of software comes out of engineering institutions who hire solid people and put enormous energy into getting things right. There are QA departments, methodologies, policies like peer code review, etc.

And still a shocking percentage of projects have serious problems or fail utterly, never delivering anything of use.

How is that possible? We should have this down by now, right?

If you ask the people in charge, they might blame the budget. And of course that's often the final straw, the thing that ostensibly forces the doors to close or the project to ship too early.

But that is often an extremely superficial answer designed to protect careers. After all, there wouldn't be budget problems if the project was planned and executed properly.

But that's easier said than done.

I think devs, even brilliant devs who do this a lot, sometimes fail to fully respect how time consuming and expensive it is to develop complex systems. (Here's a great book that shows you how this happens, among other things).

And running software projects, especially large ones, requires a specialized skillset that goes way beyond MS Project and even formal methodologies like Agile.

Here's an example: you have to be able to explain to stakeholders (who maybe have no point of reference and think software is easy) that what they want is going to take way longer than they expect. And sell that, even if there is enormous pressure - and there will be.

You have to educate delicately (after all, these are smart exec-types who are very sure of themselves), maybe telling a story about the old Waterfall methodology and why software projects fail, and why a counter-intuitive approach like Agile might be necessary, and sorry, not only does that mean your budget is not going to work, it could mean no fixed costs at all.

That can be tough because there is competition out there who will tell them what they want to hear and move forward with a plan that will most likely lead to chaos and defeat. Because people don't understand this stuff.

And that's just one little piece.

Brian MacKay
I really liked your answer. I think what you are describing is a crucial part of the causes to the problem I presented.
Yuval A
Thanks. I'm all about this subject right now. I think there is a real need to educate people here.
Brian MacKay
+3  A: 

Your analogy perfectly explains why software quality is hard. Houses are physical objects whose materials obey the laws of physics. The materials, architecture, and engineering qualities of a house can be physically and numerically analyzed. How else could a civil engineer specify, down to the number of pounds of rebar, how to build a bridge across a windy span able to withstand high-speed truck traffic without the bridge collapsing? Newtonian physics has rules that never vary. You can numerically prove how strong the bridge will be.

On the other hand, computer programming has the halting problem. No amount of analysis can prove that an arbitrary program fulfills its defined purpose. So even if you could assemble some dream team of super-programmers to work on your project, the program they deliver cannot be certified to fulfill your (non-trivial) requirements 100%.
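The undecidability claim above rests on a classic diagonalization argument. As a sketch, suppose a total function `halts(f)` could decide whether calling `f()` eventually returns (both `halts` and `make_contrary` below are hypothetical names; no correct `halts` can actually be written):

```python
# Sketch of the halting-problem diagonalization. We assume a claimed
# decider halts(f) -> bool and build a function that defeats it.

def make_contrary(halts):
    """Build a function that does the opposite of whatever halts predicts."""
    def contrary():
        if halts(contrary):
            while True:      # decider said "halts", so loop forever
                pass
        return None          # decider said "loops", so halt immediately
    return contrary

# Feed it any claimed decider, e.g. one that optimistically answers True:
optimistic = lambda f: True
g = make_contrary(optimistic)

# optimistic(g) is True, yet calling g() would loop forever.
# A pessimistic decider is defeated symmetrically: make_contrary of
# (lambda f: False) returns immediately despite the "loops" verdict.
print(optimistic(g))
```

Whatever answer the claimed decider gives, the constructed function does the opposite, so no correct decider exists; by Rice's theorem the same applies to essentially any non-trivial behavioral property, which is why "certified 100% correct" is off the table for general programs.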

+1  A: 

Houses have time-proven blueprints. There are thousands of houses, each exactly the same as the last.

No two pieces of software are the same. The only reason people write software is to do something new or different.

+1  A: 

The quality of a product depends greatly on your expectations of it. For instance, I believe there are no good JavaScript debuggers on the market. None of them are capable of what I want them to do.

Other people will say the current debuggers are good, simply because they're not even considering the possibility that something more might be needed.

+1  A: 
  1. software development is difficult and time-consuming
  2. software developers are not interchangeable cogs
  3. quality takes time
  4. you get what you pay for

Ignoring one or more of the points above leads to poor software quality, every time, guaranteed. This may not cover all incidents of poor software quality, but I suspect it characterizes the vast majority of them!

Steven A. Lowe
+3  A: 

I think a lot of you have missed a major point. Software is invisible; we can only see the results from a given set of inputs, and the whole process happens without our being able to see what it is doing.

When building a house we can see, touch and feel the process and adjust when it's wrong. In software we can't do this. We use some inputs, get some outputs and then check to see if it is what we wanted. If not, we go back, change some code and try again. This leads to quality problems, because we might have a fluke set of inputs that just happens to give the right output, so we move on and can't see the errors and bugs under the surface.

+2  A: 

I like the house analogy tossed around. When you build a house, you have a lot of slack in many areas. Even someone who knows nothing about building houses can learn it pretty quickly because the processes and the materials are "simple" and well understood. A piece of rock is not going to suddenly jump up and talk to your face. If you put it somewhere and you give it some support, it's going to stay there - period. Burned ceramics aren't going to let water through. You just have to find some goo to plug the gaps until the guarantee time is over and you're good.

With software, you have much more delicate and hidden dependencies. Since people don't understand this, here is my analogy: Chess has some pretty simple rules (roughly 20). If you need, you can enumerate them on one standard sized piece of paper (A4 or letter or whatever). Despite that, you can't enumerate all possible states of the system nor can you, given most states, tell how the system got there.

The Asian game of Go has even more simple rules (roughly 10). Contrary to intuition, Go has even more complex states and so far, no computer in the world has ever come even close to challenging a decent human player without a handicap.

Lastly, we have Conway's Game of Life. Four rules. 4. Read it again: Four (4). Game of Life is so complex that it's Turing complete: you can build a fully working computer with it and compute anything that a computer can compute. For example, you can teach it to play Chess, or to simulate a computer playing Conway's Game of Life; that is, you can write a program that can write a program that can play Conway's Game of Life. I wonder if She took precautions against "infinite recursive creation loops". Anyway.
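For the curious, those four rules collapse into a single line of logic. Here is a minimal, self-contained sketch, with the board stored simply as a set of live-cell coordinates:

```python
from collections import Counter

# Minimal Conway's Game of Life. All four rules reduce to: a cell is
# alive in the next generation iff it has exactly 3 live neighbours,
# or it is currently alive and has exactly 2.

def step(alive):
    # Count, for every cell adjacent to a live cell, how many live
    # neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in alive)
    }

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Twenty-odd lines fully capture the rules, yet predicting what an arbitrary starting pattern will do is, as noted, equivalent to predicting what an arbitrary computer program will do.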

Conclusion: While software may look simple, it never is. Every problem has at least an infinite number of possible correct solutions and an even greater number of wrong ones. If math allowed it, we would have to multiply this by the possible number of misunderstandings between the customer and the developer (multiplied by the levels of indirection between them, i.e. managers, customer relationship people, project managers, your co-workers), add the usual sprinkle of bugs in the environment you have to use, and then ...

We wonder how it is even possible that anyone ever wrote a piece of code with more than 10 lines of code that actually works.

Aaron Digulla

For me, it's the developers that are the source of low-quality software.

Like in the early ages of craftsmanship, developing software is making software by hand. Most developers tend to want to do everything by hand, to build it on their own. But this costs time, and time is money, and money makes the world go round.

Projects today have to be done fast and cheap, and that clashes with the developer's view.

I think software developing needs to get to the next phase, the industrial revolution.

+1  A: 

A lot of people seem to be suggesting that software would be more reliable if it had fewer features. After all, most users only use a small subset of the features of many applications. The problem lies in the fact that different users use a different subset of features.


Bugs are a part of the uncertainty and incomplete requirements that tend to come with IT projects. Software won't work because as soon as someone sees the program running, he or she will think, "Well, it should also do this, that and this other thing here," which, while it can be done, isn't what was originally requested. To take the house example a bit further, software may be viewed as the paint on the walls and the furniture in the house, which can easily be changed or moved, and which can leave some people unhappy with their home because it is missing something they want.

I don't see software quality advancing, simply due to how poorly thought out various IT projects tend to be. While in a house things like doorways and floors are pretty much nailed down, there is nothing similar in software: there are usually dozens of different hardware configurations that will work with the software, but which is optimal, and how can that be determined, based on how a given company will use the servers that perform a set of tasks related to a specific process like Resource Planning or Customer Service?

Software development methodologies like Agile and Scrum exist as a way to try to give someone something close to what they want and then allow repeated changes that improve the product. So, for example, as a house is being built, someone wouldn't change where the staircases go or where the windows will be, yet in software these kinds of changes are likely to be common: the UI, being as flexible as it is, has to get refined over many, many tries, and even then there is the question of at what point you call it "done".

JB King
+2  A: 

People often draw the parallels between software engineering, which they see as full of problems, and other forms of engineering, which they see as not so problematic.

Your parallel between housing development and software development is not so good for several reasons:

  • You maintain your house day in, day out: you repair it, upgrade it, etc.
  • Your house was built in one spot and was designed for that spot.
  • Your house has much less complex functionality than the average software product.
  • House building has strict, predetermined calculations/rules/regulations.
  • House building has been around for thousands of years, so they have ironed out the bugs.

But the biggest difference between software engineering and other types of engineering is that they have basically been using TDD all along, while software engineering has so far relied heavily on trial & error.

I say they use TDD since they have plans, prototypes and tests for those prototypes before going into mass production, and with all of that they still tend to have problems here and there, like the leaking faucet or an oil leak from your car caused by poor manufacturing.

My point being that other engineering disciplines also result in bugs and problems but the main difference is that they don't result in a big red X mark with an error description you can't understand.

+1  A: 

Consider this:

  1. Software is very, very difficult to inspect compared to a house or a car. Unlike the latter, you just can't glance over a piece of code and spot defects. You have to read and comprehend it.
  2. Software is usually many times more complex than other engineering artifacts, like cars. That's because it is infinitely more flexible. In physical objects, you hit a limit on complexity very early for reasons like manufacturability, purchasing cost, shipping cost, etc. In software, these constraints either don't apply, or do so only mildly.
  3. Software, once created, continues to change. Cars and houses, once manufactured, stay largely the same. But software artifacts can practically morph into something else in a few years' time. And every change has the potential to break existing functionality, and mostly it does.

In other words, engineering software is much harder than engineering a car or a house.


In software engineering it's possible to put the dunny on the ceiling.

GeneralFlushException: Duck!

Software has to be sold to managers and/or regular people. If you're a regular person, you can tell a good house from a bad one, or a working TV from a broken one. However, a non-technical person can't distinguish between good software and bad software.

When the ones who pay for the product can't tell the difference, the result drifts to the bottom of the pool in terms of quality.

ilya n.

I disagree. The state of software development is exactly like your house example. On first glance, everything in the house looks fine (and may even work fine). Simple stuff works without a hitch. But when a really big wind comes, some houses lose their roofs. Better houses suffer minor damage but still stand. Software is the same. On first glance, everything in the software looks fine (and may even work fine). But when you push it to the limit (for a web server, give it high-load testing), some software crashes altogether. Better software does weird things but still works.

Some houses work fine during the first few years. However, as time goes by, things start to break. Software is the same. Most software works fine during the first few years. However, as time goes by, new drivers get installed, new OS patches get applied, new applications get installed, and some (bad) software simply stops working. For example, some software designed for Windows XP simply won't run on Windows 7.

To conclude: yes, there is no perfect software, just as there is no perfect house. Most of the time, it depends on the amount of work that goes into creating the software/house. With lots of time and lots of money, it's most certainly going to be good. Done hastily and cheaply, it's most certainly going to be bad. Well, just like a house.

Hao Wooi Lim
+1  A: 

It's hard. There are any number of ways of solving many issues. Programmers rely on a lot of third-party libraries, which abstract away what is really going on.

When doing web development you have many browsers and potential settings to worry about. Desktop development has different operating systems, and settings to worry about.

People get hired based on answering questions, not on writing code. There was a member of my team on a very large, highly visible web application, and she had no idea how to write code. She lasted six months. She never would have made it past the interview if someone had simply had her write a reverse-string method on a whiteboard.

Also, the ever-evolving nature of technology means there's always something new and amazing around the corner. I'm no expert, but I imagine there are only a few ways to build a skyscraper or a highway. There are tons of different ways to build a web application.

I'd also highly recommend reading Joel Spolsky's awesome article about Leaky Abstractions

Jack Marchetti
+2  A: 

Fred Brooks, of Mythical Man-Month fame, had an interesting article a few years ago titled No Silver Bullet. This was just about the time that OO software and design was being talked about as the next silver bullet for all of our software design problems.

The basic idea was that software can be broken into two types of complexity, accidental and inherent.

Accidental complexity is using x86 assembly language rather than Python to write a processing script.

You can do it with x86 assembly language, but, you'll spend a lot more time.

Accidental complexity can be solved with different languages, especially if you can get a language which maps well to your problem. The large number of languages shows that we work hard to solve this problem.
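To make the accidental-complexity contrast concrete, here is a hedged illustration: a trivial processing task (summing the numbers in a text file) that is a few lines of Python but would take dozens of x86 instructions (syscalls, buffer management, ASCII-to-integer conversion). The filename is invented for the example:

```python
# Sum the numbers in a text file, one per line. In x86 assembly the
# same task means manual file handling, buffering and string parsing;
# the high-level language removes that accidental complexity entirely.

def sum_file(path):
    with open(path) as f:
        return sum(int(line) for line in f if line.strip())

# Usage (numbers.txt is hypothetical):
# print(sum_file("numbers.txt"))
```

Nothing about the *problem* got simpler in the Python version; only the incidental machinery did, which is exactly Brooks's distinction.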

The other sort of complexity, inherent complexity, is what is hard. This is not solved by languages, etc., and is what causes us the biggest headaches. This is where the leaky abstractions bite us, and where the library calls work in the order A, B, C, D, but, in spite of the documentation, which says that you can call A, B, D, C, it fails later when you call F.

Also, of course, some problems are just hard at a basic level (US Taxes, for example) and no language helps here.

I'm not sure, in the end, that there is a good solution.

Bruce ONeel

Well, my belief is that it's a matter of changing from Monolithic Apps to Modular Apps by using something similar to the Lego Process which can be checked out here

Thomas Hansen

I have to disagree with your rant. Considering the dynamic nature of software, I think it is quite amazing that stuff works as well as it does. Just look at the thousands of different hardware configurations Windows has to support.

That would be like building the same house design on a thousand different sites (soils, elevations, mountains, water), and it has to keep from collapsing.

And to use another analogy, in a computer system, unlike in the real world, the "laws of physics" change based on the operating system you are building in. Further, sometimes these "laws of physics" can also be buggy / inconsistent, causing your software to behave in an unexpected way.

In the real world you would never find that the north and south poles suddenly flip whenever the wind happens to blow a certain way in combination with the sun being at a certain level in the sky.

Things like this happen all the time in software, because of the extreme dynamic nature and complexity, when large systems interact with each other it is nearly impossible to predict all of the potential points of failure.

Because each system may have "physics" which behave very differently. In the real world there is only one set of laws of physics, and it behaves consistently; it can be accurately measured, and it allows us to use mathematics to predict future behavior in various scenarios.

Roberto Sebestyen

Look at the expectations for software. All the ifs and buts. Here are the equivalent demands on the home:

  • I want the bedroom to be 20' x 20', unless I was asleep and need to go to the bathroom; then I want the bedroom to shrink so the bathroom is closer and all the furniture doesn't get crushed.
  • During a tornado, get rid of all the windows.
  • The garage should have a turntable big enough to rotate a Hummer in 0.2 seconds.
  • The walls should be able to change colors based on the 'skin' I choose.
  • Make it scalable; I may want to invite all my Facebook friends.
  • Adjust the shower temp when the toilet is flushed.
  • The plumbing should never break, since 'nothing has changed'.
  • TVs, newspapers and magazines shouldn't show advertising if I pay my mortgage.

Oh, and try to do as much of this as you can with open source building materials.

Jeff O
