I would like to hear what kinds of design decisions you have made and how they backfired. Because of a bad design decision (one I had a part in), I ended up having to support that bad decision forever. This made me realize that a single design mistake can haunt you forever. I want to learn from more experienced people what kinds of blunders they have experienced and what they learned from them.

I'm sure this will be a lot of help to other programmers by showing them which decisions not to repeat.

Thanks for sharing your experience.

+12  A: 

It wasn't my decision (I joined the company somewhat later) but somewhere I worked took i18n a bit too far, including translating all their log messages.


  • More painful to add new logging
  • More cost for translation
  • Logs are harder to read afterwards


Jon Skeet
I'm doing some i18n here, and trying to figure out what goes to the user and what goes into the logs. I want all user-facing output localizable, but I want the logs to remain the same. In the process, I'm having to do more figuring out of where an exception goes than I like.
David Thornley
Let me guess: you're American? And if not, you only speak English? Use internationalisation so that new log entries are in English, and if another language is available you display the user's language. HINT: using error codes helps here, as it means you can always grep/scan the logs no matter what the language.
@Jacob: I'm English, but only speak English. But this was for a company where the entire engineering base was in England, so having log files (which are for diagnostic purposes, not user visible information) in other languages would just be a waste of resources. I agree that using error codes instead of text allows on-the-fly translation - but it's still more work than just using a single language to start with. It's a matter of reducing work by identifying where something that *sounds* useful is actually not going to provide any significant value.
Jon Skeet
+22  A: 

From one of my mistakes I've learned that DB normalization shouldn't be followed blindly. You can, and in some situations you MUST, flatten your tables.

I ended up managing loads of tables (via models), and performance wasn't as good as it could have been with a little flattening of the tables.

I could not agree more... I am a software engineer and we are told to always normalize. What a crock of shit. That's only because the teachers haven't tried working with truly complex and performance-dependent DBs.
The real napster
I might add that normalization can also be very positive, of course.
The real napster
I fully agree with napster above, over-normalization really bit me in a major project for our company
Alex Marshall
A related blog post by Jeff titled Maybe Normalizing Isn't Normal: http://www.codinghorror.com/blog/archives/001152.html
Normalization (within limits) is great for putting stuff into a database, but slow for getting things out. Get your transaction processing database reasonably normalized, and denormalize your data warehouse.
David Thornley
I think programmers are too quick to denormalize for inadequate reasons, but yes, slavish adherence to the rules of normalization is a big mistake. In general, one of my great frustrations in software development is when someone says "We must do X", and when I point out all the problems this will cause, they reply, "That's irrelevant. All the experts agree that X is good, therefore we must do X, always, no exceptions."
My approach to normalization is always straightforward. I always normalize, BUT if I see a potential performance boost from a little flattening, I always test and benchmark, and in most cases it pays off to flatten.
But normalisation is FUN! :) I'm serious, I enjoy designing data structures. What I would say is that whilst it's easy to de-normalise a normalised schema, the reverse is NOT true. You need to KNOW the rules before you break 'em.
Keith Williams
Probably, but creating data structures is only the beginning of the process; using and maintaining those data structures takes most (read: almost all) of the time. Hence denormalization usually comes only on demand.
You probably didn't normalize enough.
I'm pretty confident I normalized enough to write this answer :)
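As a rough illustration of the benchmark-then-flatten approach several commenters describe, here is a sketch in Python with SQLite, using an invented customers/orders schema. Copying the customer name onto the order row trades update cost (the name now lives in two places) for join-free reads:

```python
import sqlite3

# Hypothetical two-table schema: normalized orders reference customers by id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES customers(id),
                     total REAL);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1, 99.5)")

# Normalized read: a join on every reporting query.
normalized = conn.execute(
    "SELECT c.name, o.total FROM orders o JOIN customers c ON c.id = o.customer_id"
).fetchall()

# Controlled denormalization: copy the name onto the order row so reads
# need no join. The cost is keeping the copy in sync on updates.
conn.execute("ALTER TABLE orders ADD COLUMN customer_name TEXT")
conn.execute("""UPDATE orders SET customer_name =
                (SELECT name FROM customers WHERE id = orders.customer_id)""")
flattened = conn.execute("SELECT customer_name, total FROM orders").fetchall()

assert normalized == flattened == [("Acme", 99.5)]
```

Whether the flattening is worth it is exactly what the benchmark has to decide; on a one-row example both forms are obviously equivalent.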
+28  A: 

Using a single char in databases for statuses, etc. There's no point in this at all: the overhead of using a longer char() or nvarchar2() is minuscule compared to the network and parsing cost incurred by any SQL call, yet the chars always end up rather obfuscated, or you run out of them (not for statuses, but for other things). Far better to just put the human-readable version in, and to also have in your Java model (in my case) an enum with matching values.

I guess this is a form of premature, unnecessary, and blind optimisation. As if using a single char will save the world these days. The exception is Y/N booleans in databases that don't support booleans/bits.

+1 Completely relevant to me!
+1 We have just had a meeting on this very issue. We came to the same conclusion.
What if your customer works with such abbreviations and doesn't want to abandon them?
For existing systems you need to be compatible (I'd still create a Java enum of proper values, with a <code>MyEnum fromChar(char c)</code> method) of course. For new designs, just don't go there!
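The comment above mentions a Java `MyEnum fromChar(char c)` helper for legacy compatibility; here is the same idea sketched in Python (the status names and single-char codes are invented for illustration):

```python
from enum import Enum

class Status(Enum):
    # Hypothetical legacy one-character DB codes, kept for compatibility
    # with existing rows; the enum gives them readable names in code.
    APPROVED = "A"
    REJECTED = "R"
    PENDING = "P"

    @classmethod
    def from_char(cls, c: str) -> "Status":
        """Map a legacy one-character DB code to an enum member."""
        try:
            return cls(c)  # Enum lookup by value
        except ValueError:
            raise ValueError(f"Unknown status code: {c!r}")

print(Status.from_char("A").name)  # prints APPROVED
```

Unknown codes fail loudly instead of silently propagating an obfuscated character through the system.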
Some databases support enums, which are both compact and readable, and also nicely forbid unexpected values. If you can, use those.
Karl Bartel
I've done this, and whilst you're right in what you say (A is for "Approved", but V is for "Returned but not yet assessed", for example), I wouldn't call it a BAD decision as such - maybe a little inconvenient, but not tragic. I just put a lookup table on the values, so the queries now return the enum-compatible value.
Keith Williams
Almost as bad: Using the BIT type in MS SQL Server, before discovering that it cannot be part of an index.
You made that decision? That's pretty bad, because most of us just had it forced upon us.
Peter Turner
+5  A: 

Taking the quick road to getting some code working, rather than the right road (a bit general, but we'll call it an abstraction and therefore a 'right' answer).

+25  A: 

Configurability in an application is nice. Too much configurability is a nightmare to use and to maintain.

Travis Beale
Yes. True. It's idealism to make everything configurable and tell the boss that we'll never need to change a single line of code again.
+17  A: 

Choosing Microsoft Foundation Classes (MFC) for writing a Java IDE.

Oliver Weichhold
Owwww. That would make my brain hurt.
Greg D
That was not a bad decision in 1999. AWT was ugly and slow then.
+48  A: 

Ignoring YAGNI, again and again ...

Jorge Córdoba
True for most, but there are also folks who could use a bit less YAGNI. Neither extreme is the best place to be.
system PAUSE
+7  A: 

Throwing some 'funny' easter eggs into some code I wrote before going on vacation for 2 weeks. I thought I'd be the only person to read it when I got back; it'd get me chuckling and ready to re-code it.

Needless to say, my boss wasn't impressed when he reviewed it while I was away, and he was even less impressed that one of the 'easter eggs' involved his face cartooned in ASCII.


Daniel May
IMO, that's "Good work Sir!"
Very recently, I was mocked by my team for trace messages like, "addin' th'value (p) t'yer table!" I said look, they made me work on Talk Like A Pirate Day, they deserve what they get.
Arr, your loggs be lookin' for a keel'haulin!
Robert P
+8  A: 

Not defining the deployment mechanism/model as early as possible.

Austin Salonen
+34  A: 

C++, diamond-shaped multiple virtual inheritance. You get the idea.

Alex B
I need to create a new account to upvote this again...
Yes... painful experiences...
Ed Swangren
I just had an ugly flashback
Neil N
This falls in the general category of "Using a language or system feature because it sounds way cool rather than because it actually helps you write a better program."
@Jay it actually seemed like a good idea at the time.
Alex B
@Alex: Sure. Most dumb pieces of code I've written seemed like a good idea at the time. Who says, "This is a really bad idea. Let's do it!" The obvious exceptions being, (a) The boss said to do it this way and overrode my objections; and (b) "This is a really bad idea. We don't have time to do it right now. But we'll have time to fix it later."
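The answer above is about C++, where the diamond is tamed with virtual inheritance so the shared base is constructed once. As a rough analogue, Python linearizes the same shape via its MRO, which makes the "constructed once" behavior easy to see (class names are invented):

```python
# Diamond shape: Copier -> (Scanner, Printer) -> Device.
class Device:
    def __init__(self):
        # Count how many times the shared base initializes.
        self.base_inits = getattr(self, "base_inits", 0) + 1

class Scanner(Device):
    def __init__(self):
        super().__init__()

class Printer(Device):
    def __init__(self):
        super().__init__()

class Copier(Scanner, Printer):
    def __init__(self):
        super().__init__()

c = Copier()
# Cooperative super() walks the MRO, so Device.__init__ runs exactly once,
# analogous to what C++ virtual inheritance guarantees for the virtual base.
assert c.base_inits == 1
assert [k.__name__ for k in Copier.__mro__] == \
    ["Copier", "Scanner", "Printer", "Device", "object"]
```

In C++ you only get this behavior by opting in with `virtual` inheritance everywhere in the diamond, which is exactly the complexity the answer regrets.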
+5  A: 

I didn't take enough time to assess the business model. I did what the client asked, but 6-12 months later we both came to the conclusion it should've been done differently.

OMG Ponies
+4  A: 

Back in the university I was working on my senior design project. Another guy and I were writing a web-based bug tracking system. (Nothing groundbreaking, but we both wanted to get some web experience.) We did the thing with Java servlets, and it worked reasonably well, but for some silly reason, instead of opting to use Exceptions as our error-handling mechanism, we chose to use error codes.

When we presented our project for a grade and one of the faculty asked the inevitable, "If you had to do it again, what would you do differently?" I instantly knew the answer: "I'd use exceptions, that's what they're there for."

Greg D
Ahhh the joys of reinventing the wheel! :-) That's funny.
I call it intentional flaws, just so that you can improve it in the next iteration.
Exceptions are for handling exceptions only. Too many people abuse exceptions by turning everything into an exception.
@jacob - I agree with your sentiment that exceptions should be used for exceptional conditions, but from what I have seen (and I'm not a Java programmer), Java seems to use exceptions for everything under the sun. So not using exceptions in Java code could be considered going against the flow of the language.
Peter M
+19  A: 

Thinking I could be Architect, Developer and PM all on the same project.

2 months of sleeping 3 hours a night taught me you just can't do it.

So stop sleeping so much! oh, wait... you mean thats NOT normal...?? Hmm, I gotta get me some other people on this project...
+3  A: 

I implemented a sub-section of an application according to the requirements.

It turns out that the requirements were bloated and gold-plated, and my code was over-designed. I should have designed my sub-section to work only with what I was adding at the time, while planning for the other stuff to be added later rather than including generic support for it from the outset.

+1  A: 

At my last job, I wrote some largish projects in an in-house scripting language which didn't support classes or lists, and whose only "include" mechanism was textually including one file in another.

In hindsight I could have written it in .net or python and half my extensibility issues would have vanished.

What sort of madhouse has an in-house scripting language :)
@whatnick, probably most software companies do.
+4  A: 

Not my choice of method, but I created an XSLT to convert a row-based XML file into a column-based HTML report.

It only worked in IE, and it was completely impossible to decode how it worked. Every time we needed to expand it, it was impossibly difficult and took ages.

In the end, I replaced it with a tiny C# program which did the same thing.

I've done that too. I implemented an email templating engine using XSL and found it difficult to read and maintain.
Yep. Replaced someone's huge tree of XSLT files with a few simple VB.NET functions. Very satisfying, especially when the next customer change request that came along would have been impossible to do in XSLT.
Christian Hayter
I've found that most programmers consider XSLT a bad choice simply because they don't *get* it. It's extremely useful for a small set of problems, and much more efficient than many other solutions. On the other hand, it is used WAY too often, and mostly NOT on that small set of problems...
+4  A: 

My company has a waterfall-like development model, where our business users and business analysts will define requirements for projects. On one of our "big" projects, we got a stack of requirements, and I noticed a number of requirements contained implementation details, specifically information related to our database schema used by our accounting system.

I commented to the business users that implementation is my domain and shouldn't be contained in the requirements. They were unwilling to change their requirements because, after all, they are THE BUSINESS, and it only makes sense for accountants to design accounting software. As a lowly developer too far down the totem pole, I'm paid to do instead of think. As much as I fought it, I couldn't persuade them to re-write the requirements; there is so much paperwork and red tape around changes that it's just too much of a hassle.

So, I gave them what they asked for. At the very least, it sorta works, but the database is weirdly designed:

  • Lots of unnecessary normalization. A single record containing 5 or 10 fields is split across 3 or 4 tables. I can deal with that, but I'd personally like to have all 1:1 fields pulled into a single table.

  • Lots of inappropriate denormalization. We have a table which stores invoice data, but it stores more than invoice data: we keep a number of miscellaneous flags in the InvoiceData table even when a flag isn't logically related to invoices, such that each flag has a magic, hardcoded primary key value and all other fields nulled out. Since each flag is represented as a record in the table, I suggested pulling the flags into their own table.

  • Lots more inappropriate denormalization. Certain app-wide flags are stored as columns in inappropriate tables, such that changing an app's flag requires updating every record in the table.

  • Primary keys contain metadata, such that if a varchar primary key ends with "D", we calculate invoices using one set of values; otherwise we calculate them with another set. It would make more sense to pull this metadata into a separate column, or to pull the sets of values into another table.

  • Foreign keys often go to more than one table, such that a foreign key ending with "M" might link to our mortgage accounts table, whereas a foreign key ending with "A" might link to our auto accounts table. It would be easier to split the data into two tables, MortgageData and AutoInsuranceData.

All of my suggestions were shot down with much wailing and gnashing of teeth. The app works as designed, and while it's a big ball of mud, all of the nasty hacks, special cases, and weird business rules are sarcastically and humorously documented in the source code.
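As a tiny sketch of the "metadata in the primary key" smell and the suggested fix (the key format, the flag name, and the rates are all invented for illustration):

```python
# Smell: behaviour hinges on the last character of a varchar primary key.
def rate_for(key: str) -> float:
    # Metadata smuggled into the key: a trailing "D" selects a different
    # calculation rule, invisibly to anyone reading the schema.
    return 0.05 if key.endswith("D") else 0.02

# Suggested fix: make the metadata an explicit, queryable column.
def rate_for_explicit(row: dict) -> float:
    return 0.05 if row["discounted"] else 0.02

# Both compute the same thing, but only the second is self-describing.
assert rate_for("INV-1041D") == rate_for_explicit(
    {"key": "INV-1041D", "discounted": True}
)
```

With an explicit column, the rule set can also move into its own lookup table instead of living in string-suffix checks scattered through the code.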

Goodness, hope your CV is nice and up to date for a quick escape before the big ball of mud succumbs to gravity!
+15  A: 

Not developing a proper DAL (data access layer), and having SQL everywhere in my code, just to get something "quick" up and running. Later on, as the project started to expand and requirements changed, it became a nightmare. I didn't know what a DAL was at the time.

... glad I'm past that, although I still see programmers with 20+ years of "experience" doing this.
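For anyone wondering what even a minimal DAL buys you, here is a sketch in Python with SQLite: every SQL string lives in one class, and callers only ever see plain values (the schema is invented for illustration):

```python
import sqlite3

class UserDal:
    """Minimal data-access layer sketch: all SQL is confined to this class,
    so business logic never embeds queries and the schema can change in
    exactly one place."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return None if row is None else {"id": row[0], "name": row[1]}

dal = UserDal(sqlite3.connect(":memory:"))
uid = dal.add("ada")
assert dal.find(uid) == {"id": uid, "name": "ada"}
assert dal.find(999) is None
```

The point isn't the three-method class; it's that when requirements change, the "SQL everywhere" version requires hunting down every embedded query, while this version requires editing one file.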

Can't remember where I read it, but there's a difference between 20 years of experience, and one year of experience repeated 19 times.
+8  A: 

Every single time I create technical debt, write procedural code, skip writing tests, etc. because I'm rushing. Almost inevitably I find this creates pain for me down the road.

+1  A: 

I once designed the business tier of a client server application so that all calls would be asynchronous. I thought it would make it easier to manage scarce resources on the server side (it was 1997 and there were major bandwidth constraints). Turned out it didn't make much difference in how we managed the server but made the client hellishly complicated.

Needless to say there was a very quick refactoring about 4 months into the project. And I learned that simple architectures that play to the strength of your tools are always the best.

Jeff Hornby
+12  A: 

My single worst design decision? Back in the 1980's I was working on a project where we got the bright idea to create a kind of template for our data entry screens which would be interpreted at run-time. Not a bad decision: it made input screens easy to design. Basically, just create a file that resembled the data entry screen, with some special codes to identify what was a label vs. what was an input field, and to identify whether input fields were alpha or numeric.

Then I decided to add some more special codes to these files to identify what validations should be performed. Then I added more codes to allow conditional building of the screen: field X only included when some condition was true, etc. Then I added more codes to do some simple processing of the inputs. Etc. Etc.

Eventually we had turned our screen template into a new programming language, complete with expressions, control structures, and an I/O library. And what for? We did a ton of work to re-invent FORTRAN. We had a shelf full of compilers for languages that were better designed and better tested. If we'd devoted that much effort to building products where we actually had some expertise, that company might still be in business today.

That's both funny and tragic :)
The sad thing is that this approach is sometimes the best way to go. The customer can pick either "screen can be changed at a moment's notice" or "screen does everything including make the tea" but not both!
Christian Hayter
I have nothing against using templates or other generic code. The mistake was in turning a piece of generic code into a language-within-a-language.
I have seen this exact thing done... in 2004! All the business logic is spread over around fifteen configuration tables, with several half-baked attempts at dynamic "languages" thrown in for good measure (see Greenspun's Tenth Rule)!
Rock and or Roll
Dour High Arch
@Dour: Exactly. I absolutely agree with the thrust of that article. Though frankly I think the examples are poor. e.g. a file browser built within a web browser has important capabilities not available with OS commands, most obviously, the ability to be accessed over the Internet without opening your system to all the vulnerabilities of remote logins.
Don't you mean COBOL rather than FORTRAN?
Now that we have HTML, CSS, and Javascript, we can build screens that can be changed at a moment's notice *AND* make the tea!
Kevin Panko
@Kevin: Well, they can make you a cup of Java, I don't know about tea. :-)
+6  A: 
system PAUSE
Yes and no, can you predict in what direction it's going to change? I have experience of painfully complex systems which proved totally inadequate for the first reuse, which didn't fit into the predicted genericity...
^yep, I'd rather deal with YAGNI than that crap.
So you think you should have spent the 2 weeks upfront?
Catch phrase abuse is particularly easy to succumb to. Everyone at my work talks about "Best Practices" as if they're so easy to pin down. I have to remind them that "Best Practices" really means "Practices that work well with these assumptions" (much like everything else in life).
That example is not YAGNI at all. DRY is part of YAGNI, and without it you cannot stay responsive to change.
Stephan Eggermont
Stephan, the example shows a glib and inappropriate abuse of the catch-phrase, which was my point. DRY (with its variant OAOO) is also a good principle, but quite separate: http://c2.com/cgi/wiki?OaooBalancesYagni. However, I cannot find *anything anywhere* to support your claim that "DRY is a part of YAGNI." Mustard goes well with hotdogs, but that doesn't mean mustard is a part of hotdogs. If you could clarify, perhaps with references, perhaps I will understand.
system PAUSE
Great point. YAGNI is a good philosophy for stuff that there's at least a decent chance you'll **never** need, not for stuff you clearly **will** need, just not right this second.
+8  A: 

Wrong human resources

Trying to make something right and great with the wrong people!
Even if they're in the role of a PM.

Robert Koritnik
+6  A: 

Using ASP.Net Themes when just a regular ol' CSS folder would've done just fine.

Lol at that, yes!
This answer could be shortened to "Using ASP.NET"
Skins are useful for setting up the default CssClass.
+25  A: 

"Will do it later."
"Later" never comes.

Later never comes.
It never does.
Manos Dilaverakis
It has been said, If you don't have time to do it right now, what makes you think that you'll have time to fix it later?
+1. All those bits of code that got left behind after re-factoring, because "we haven't got time to clean it up now, we'll do it as we go along", and then next year you realise that half your code is using the new methods, and half is still clinging onto the old crap :/
Keith Williams
We call this "iteration never"
Chris Lively
Unless you work for the government or a defense contractor. They work so slowly that you *will* find gaps in the schedule to clean things up. The downside is that the next release will be due to the customer demanding a really stupid feature that belongs in a different app, so your cleaned-up code will only get used after the design has been broken beyond repair.
I've worked for the government. And yes, you have absurdly long schedules. But you can't use that time to improve the code. You have to spend it doing the paperwork and filling out the surveys.
+13  A: 

Reinventing the Wheel


What else is worse than this?

Commented out the problematic lines, and it turned out to be the login call. :(

Ramesh Vel
Do you remember that huge bug in Debian's SSL, one and a half years ago? ...
Arthur Reutenauer
+3  A: 

Trying to use all the new technologies (to learn new technology) even when the project doesn't require them.

+4  A: 

Rewriting working code. Hell, the result works great: it's got fewer bugs and it's more maintainable.

But the customer couldn't care less, and it cost me days of dev time...

Nope, don't write crap code in the first place (:
Any mention of the word "maintainable" is a process smell.
+1  A: 

Tight coupling of components that, with hindsight, had very little to do with each other. Said components have since been copy-pasta'd into oblivion by other devs wanting to use only a small part of the functionality. :-(

Christian Hayter
+2  A: 

Trying to utilize the Nth element of a circular buffer [N elements deep] shared between 2 processors. Now I never, ever use more than N-1 elements, to keep it simple and reliable.

The issue: a circular buffer containing no more than N-1 elements can be implemented completely thread-safe (pure producer/consumer). When I optimized it to use all N elements, the queue sometimes toggled from full to empty (data loss) or from empty to full (invalid data).

Trying to find this in a complex system (1 corruption in every 100 MB of data transferred) is harder than finding a needle in a haystack.
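The N-1 rule the answer describes can be sketched as follows. In a real two-processor system the head/tail updates would be atomic; this Python version only shows the index arithmetic that keeps "full" and "empty" distinguishable:

```python
# Single-producer/single-consumer ring buffer. Sacrificing one slot makes
# "empty" (head == tail) and "full" ((head + 1) % N == tail) distinct
# states, so neither side ever needs a shared element counter.
class RingBuffer:
    def __init__(self, n):
        self.buf = [None] * n
        self.n = n
        self.head = 0  # producer writes here
        self.tail = 0  # consumer reads here

    def put(self, item):
        nxt = (self.head + 1) % self.n
        if nxt == self.tail:      # using this slot would make full look empty
            return False          # full: at most N-1 elements stored
        self.buf[self.head] = item
        self.head = nxt
        return True

    def get(self):
        if self.tail == self.head:  # empty
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.n
        return item

rb = RingBuffer(4)                          # N = 4, usable capacity N-1 = 3
assert all(rb.put(i) for i in range(3))
assert not rb.put(99)                       # fourth put refused, no ambiguity
assert [rb.get(), rb.get(), rb.get()] == [0, 1, 2]
```

Using all N slots forces you to track fullness some other way (a count or a wrap flag), and that extra shared state is exactly where the answer's once-per-100MB corruption crept in.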

+1  A: 

Listening to Joel and trying to extend a piece of software instead of rewriting it.

+4  A: 

Designing without a specification.

Specs aren't always possible.
+7  A: 

Believing customers know what they want and then doing too much before checking with them.

Anders K.
+9  A: 

Doing too much design. Creating lots of UML diagrams, particularly sequence diagrams for every single operation, much of which turned out to be useless in the end. It turned out that a significant amount of time could have been saved by skipping the unnecessarily detailed design/diagrams and starting coding directly.

Jahanzeb Farooq
If the question was, "What is the most regrettable design or programming decision you have ever seen made?" as opposed to mistakes we'd made ourselves, I'd put "UML" near the top of my list. Right below "the Windows registry".
+2  A: 

Using Flash to build a site because the "designer" wanted a carousel of photos (that was years ago; jQuery didn't exist). Later it turned out that the designer wanted to change everything once a week because he kept changing his mind about the design... What a maintenance nightmare.

The Disintegrator

At my previous job, I was responsible for building an automated-testing framework. Background: We already had a "prototype" which was pretty good and had many tests written for it. It was written in TCL.

For the new project, I was supposed to adapt the prototype to fit a new project. I really disliked TCL, and had just learned Python, so I was clamoring to apply that new knowledge somewhere. So, of course, I decided to re-write the prototype in Python. Since we wanted to do lots of really cool new things (support hierarchical tests better, have all sorts of extra mechanisms to easily create new tests), I justified my decision by saying that writing all this new stuff in TCL would be a nightmare.

In the end, most of the new features (which were way too difficult anyway) were never used. We ended up reimplementing the entire prototype from scratch for almost no reward.

The best part? About a year and a half later, we had to drudge up the old prototype to run some tests on the old project (it was foreseeable that this would happen). I was worried we'd have to adapt it to some of our new tools (one of the reasons I opted to reimplement in Python). It turned out to take about 2 hours of work.

Edan Maor

Failure to fully determine specs before starting a project on a client's server. I said PHP (meaning >= 5.2); they gave me PHP 4. I said, "I need a database," and (when they finally replied) they said, "OK, you create the table, and we'll put it in our database..." (I also failed to mention the desire for Apache and not IIS.) It ballooned out of proportion, caused several sleepless nights, and is one of the worst pieces of dung I've ever built. The only benefit I received was a much better understanding of PHP 4, something I did not want to begin with.

If I could go back and do it again... I wouldn't.

Christopher W. Allen-Poole
+4  A: 

Using SQL Server Integration Services (SSIS).

I don't wish it on my worst enemy.

After building several SSIS packages over the past two months, I came to find out that the packages I developed are not distributable and not deployable, specifically in a non-web, non-SQL-Server-licensed environment.

It's a very bad situation to be in, when you have less than 48 hours to re-write your SSIS packages in pure .NET POCO code or miss your targeted deadline.

It amazes me that I was able to rewrite three SSIS packages (that took me two months to develop and test) within 12 hours in pure .NET code, with OLEDB adapters and SQL adapters.

SSIS is not distributable, and it will not execute packages from a client machine that does not have a SQL Server license installed (specifically, DTSPipeline.dll). This would have been great to know up front. I do see the disclaimer now (in fine print) on MSDN. That does no good when there is example code all over the internet that only runs on SQL-licensed machines. Basically, you have to create a web service that talks to your SQL Server in order to run your SSIS packages programmatically; you cannot execute them from pure .NET code unless you have a SQL license installed on the executing machine. How unrealistic is that? Does Microsoft really expect SSIS to be used only from machines with a SQL Server installation? What a complete waste of two months.

My company will never again use SSIS because of this small print "gotcha".

Perhaps you should avoid using "fine-print" software altogether! Talend, for example, is an open-source ETL IDE.
Joe Koberg
+2  A: 

Sticking to older technology because it seems too much hassle to get your clients to upgrade to a new .NET Framework version, when it actually takes more development time to create the software because you can't utilize some (time-saving) components of the newer framework version.

+1  A: 

Using OO and polymorphism when a procedural approach would have worked better.

Paul Nathan