As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).

An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.

Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable - they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from parts of the system that don't care - or need to know. They rely instead on a factory (or service locator) to acquire access to such objects.
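To make that concrete, here is a minimal sketch (hypothetical names - Clock, ClockFactory - not tied to any particular container): consumers ask the factory for an interface, and whether the returned object is a singleton stays an implementation detail that a test can swap out.

interface Clock {
    long now();
}

final class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis();
    }
}

final class ClockFactory {
    // The singleton lives here; callers never see it as one.
    private static Clock instance = new SystemClock();

    static Clock getClock() {
        return instance;
    }

    // Tests can substitute a mock without touching any consumer code.
    static void setClock(Clock clock) {
        instance = clock;
    }
}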

My questions to the community, in the spirit of self-improvement, are:

  • What programming patterns or practices have you reconsidered recently, and now try to avoid?
  • What did you decide to replace them with?
+5  A: 

Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.

In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!

arabian tiger
-1 I can't stand the proliferation of IoC and frameworks. Decoupling good, IoC and frameworks = needless complexity
Paul Hollingsworth
How can you praise decoupling and yet still hate on IoC and other frameworks? That's what many IoC frameworks and design patterns do to begin with.
arabian tiger
+49  A: 

The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverous fluidity. Now it does nothing, and if I don't have it I get a headache.

windfinder
Ugh. Same here.
unforgiven3
You need to drink even more coffee. If that doesn't work, take up smoking.
MusiGenesis
What's next ... some illegal drug.
Brad Gilbert
Brad: You don't need those when you have Python: http://www.xkcd.com/353/
Peter
Nice Christmas Story reference! :-)
Steve Echols
+1! I kicked the habit back in March, and that was probably the worst two weeks of my life (and I spent a fortune on ibuprofen), but now I'm OK. I even indulge myself in the occasional relapse... and the caffeine *really works* now.
gustafc
I broke the habit and then picked it up again, several times (This is now my third cycle). There's nothing quite like coding in the cold mornings with a warm mug of coffee!
Matthew Iselin
I'm with Mathew - I kicked the caffeine habit, but couldn't ultimately kick the psychological addiction to coding with a warm cup-o-something ready to hand... and warm water just doesn't do it.
Software Monkey
Try tea, it has done wonders for me. I can now compile code in my head.
Secko
"Looks like I picked the wrong week to quit amphetamines."
ShreevatsaR
I stopped drinking coffee (long-time decaf drinker) in November, and it was the most painful and sleepy month of my life. decaf is a myth, brothers and sisters. :-) Now I'm trying to kick the diet-coke habit and be completely caffeine-free. It's tough.
Chris Kaminski
I think smoking is a disgusting habit, but I will admit there is no better way to make yourself stop and *really* think about the problem at hand than going out for a smoke break. So, even though I think it's disgusting, I still do it.
tj111
Coffee is not the problem, it must be used correctly -> Caffeine: A User's Guide to Getting Optimally Wired @ http://scienceblogs.com/developingintelligence/2008/02/optimally_wired_a_caffeine_use.php
adamcodes
+71  A: 

Hungarian notation (both Forms and Systems). I used to prefix everything. strSomeString or txtFoo. Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent -- camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful -- especially with modern IDEs.

Nazadus
What about in dynamically-typed languages, such as Python or JavaScript? I still find it helpful to use Hungarian notation in these languages so that, when looking at variables, I know what type of variable to expect (if there is a type to expect - of course, it would be foolhardy to treat a dynamically typed language exactly like a statically typed language.)
Daniel Lew
I do similar, except fooTextBox; strings and the like are hopefully just apparent: numberOfEntries => int, isGreat => bool, etc.
rball
+1 for getting rid of Hungarian notation. I agree with rball; fooTextBox, fooService, fooString when it's really necessary.
blu
Do p_, i_, l_ prefixes for parameter, iterator and local variables in a dynamic language count as Hungarian notation? In a language where both classes and variables are named in lowercase, such notation can save you a lot of brain cycles. When you see a line stating `set.new(args, ...)` somewhere in a method, you assume that set is a class... unless someone decided to use set as a local/argument variable. With p_set.new() everything is clear at first glance.
wuub
@wuub: I would argue that with proper naming, you shouldn't need to prefix anything.
Nazadus
Back in the day of slow processors and 16K, 64K even 640K memory, I kept variable names short to decrease the parsing the compiler had to perform. These days, it's a moot point. So I've axed prefixes and suffixes, and go for most descriptive. Some habits die hard for me though, especially for (int i =....)
Steve Echols
By the way, what you mentioned is not actual Hungarian.
Antony Carthy
+22  A: 

This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.

Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!

(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)

Daniel Lew
Someone gave this a downvote, but I think it's a practical perspective. What is the best code styling? Not important. Look up and down in the same file and duplicate.
Frank Schwieterman
The best code styling is whatever the standard is for that shop.
David Thornley
That's why I love the auto-format options in Visual Studio. It does not matter how the other developers wrote the code; I just do a quick format and it's exactly how I like it... most of the time.
corymathews
@cory: doesn't that mess up the ability of your version control software to show you the difference between versions of the file you've just reformatted?
Steve Melnikoff
Which is why I'm kind of attracted to learning python... to think I just have to worry about what my tabstops are set to, and not bracing styles. It's kind of compelling.
Chris Kaminski
Coming from PHP (same-line brace layout) to C# (own-line brace layout) to Python (no braces), I have to say that reading PHP hurts my eyes now. I agree that it's better to go with the flow; I just have a difficult time reading PHP anymore.
Evan Plaice
+30  A: 

I thought it made sense to apply design patterns whenever I recognised them.

Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.

Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.

molf
+8  A: 

I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
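To illustrate the shift, a hedged sketch (hypothetical names, JUnit-4-style test): the defensive checks shrink to the essentials, and a unit test pins down what the function actually does.

class AverageTest {
    // Before: the contract enforced defensively at the top of the method.
    static double average(int[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be non-empty");
        }
        double sum = 0;
        for (int v : values) sum += v;
        return sum / values.length;
    }

    // After: verify observable behavior in a test instead of piling on checks.
    @org.junit.Test
    public void averageOfOneAndThreeIsTwo() {
        org.junit.Assert.assertEquals(2.0, average(new int[] {1, 3}), 1e-9);
    }
}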

Frank Schwieterman
I was taught to program like this. Of course, I was being taught by a HS math teacher, so I suppose it makes sense that he wanted his functions all self-verifying.
windfinder
+13  A: 

Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.

  • It doesn't make anybody's life easier
  • It's more code that can have bugs in it
  • A lot of people know how to use the EntLib data access components. No one but the local team knows how to use the in house data access solution
blu
+1  A: 

Hungarian notation - It just adds noise. With modern IDEs and well written, tight code it's not necessary, at least not in statically typed languages. Unfortunately, most of the teams I've worked with still insist on using it in some form.

Todd Ropog
You got beat out by this poster: http://stackoverflow.com/questions/1089327/what-programming-practice-that-you-once-liked-have-you-since-changed-your-mind-ab/1089363#1089363
Daniel Lew
+40  A: 

The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.

Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down. So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.

Scott Ferguson
I hate regions. People on my team use them frivolously. I call them "bad code hiders".
rball
They're definitely a code smell.
Frank Schwieterman
They are the most phenomenal waste of time.
MusiGenesis
I hate the term "code smell"... Regions can be useful for certain things, like sectioning away interface implementation or code that does not need to be modified often (if ever) and is just getting in the way, but yes, they are often abused
Ed Swangren
I HATE regions. I am currently maintaining code where a function is almost 500 lines long and, to manage it, the smart developer has put chunks of code in 10 to 15 regions.
SolutionYogi
@Solution Yogi: I don't think regions are the real problem in your case :-)
Ed Swangren
agreed, ctrl+m,l is your friend
nailitdown
I think regions can be fine if used sparingly.
Gregory Higley
I agree with the sentiment that regions are mostly counterproductive, and I completely disable them in VS2008: Tools -> Options -> Text Editor -> C# -> Advanced -> 'Enter outlining mode when files open'
ElectricDialect
+1 why use regions when the editor will collapse members as needed. I reckon they are the product of an OCD with code.
Preet Sangha
@Preet: I would completely agree with you but for one thing: comments are treated as a separate entity and must be expanded/collapsed on their own. If method outlining worked like a region around the whole thing I think the need for regions would be eliminated altogether in 99% of the cases.
Ed Swangren
I disagree. Regions can save time if they're used sparingly and consistently. You have to have your code laid out in a structured manner first for it to work though (which most programmers are too lazy to do in the first place). I usually have 5 regions, constructors, properties, events, methods, members. It saves me a ton of time not having to scroll through tons of crap that I don't need to see and helps me dissect a class mentally so I can better visualize the individual parts and their interactions.
Evan Plaice
+8  A: 

I would use statics in a lot of methods/classes as it was more concise. When I started writing tests, that practice changed very quickly.

rball
+23  A: 

Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.

Really, slavishly adhering to anything can be detrimental.

yalestar
It works out pretty well for barnacles.
MusiGenesis
Test coverage has to be proportional to the benefit. Anything you do really has to show a benefit. 100% coverage isn't going to give you all that much difference from 80 or 90 in a form that isn't in a life support / missile launch scenario.
Spence
@MusiGenesis - Brilliant!
Bryan Anderson
+1 reliance on unit testing as opposed to testing.
Preet Sangha
+94  A: 
  • Trying to code things perfectly on the first try.
  • Trying to create perfect OO model before coding.
  • Designing everything for flexibility and future improvements.

In one word: overengineering.

wuub
Wait, I always get it right on the first try. :)
jeffamaphone
The real money's in getting it subtly wrong the first time and letting it out into the wild. Then, when people are used to the gimped version, swoop in with arrogant showmanship and fix the bug/inefficiency to reap extra glory! ;)
Eric
@eric +1. Yep. works every time ;-p
wuub
@jeffamaphone - No, only Jon Skeet gets it right the first time.
j0rd4n
+11  A: 

In C#, using _notation for private members. I now think it's ugly.

I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.

JulianR
I'm still using _notation and think it's great.
Arnis L.
I hate _notation; I use ThisNotation for public members and thisNotation for private members.
Callum Rogers
I prefer `notation_`
rlbond
I hate it too. It confuses me :(
yelinna
I disagree. It makes it so much easier to manage names. Use PascalCase for properties or public/internal members, _UnderscorePascalCase for members that are exposed through a property, and camelCase for parameter names in methods/constructors and private members. The 'this' keyword is only necessary if you need to pass the reference of the current class outside of the class or you need to access an auto-generated member within the class (such as name, controls, etc...).
Evan Plaice
+89  A: 

Single return points.

I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.

Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced and in fact the need for cleanup is reduced - and find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
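A small illustrative sketch of the difference (hypothetical User type):

interface User {
    boolean isActive();
    String getName();
}

class Describer {
    // Single return point: a result variable and nesting accumulate.
    static String describeNested(User u) {
        String result = "unknown";
        if (u != null) {
            if (u.isActive()) {
                result = u.getName();
            }
        }
        return result;
    }

    // Early exits (guard clauses) keep the happy path flat and obvious.
    static String describeWithGuards(User u) {
        if (u == null) return "unknown";
        if (!u.isActive()) return "unknown";
        return u.getName();
    }
}

The guard-clause version reads straight top to bottom, with no bookkeeping variable to track.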

Carl Manaster
I agree, when combined with data types that automatically clean themselves up, such as autoptr, scoped_ptr, CComPtr, etc.
jeffamaphone
Code clean up is what try { } finally { } is for
banjollity
I agree very strongly with this.
Gregory Higley
@banjollity: except for languages that don't support finally { }. And note that even in languages that support it, finally { } is not ALWAYS guaranteed to execute.
Chris Kaminski
@banjollity, Chris: In C++, cleanup is what the destructor is for, and except in extreme circumstances (exit(), a destructor throwing an exception during stack unwind, squirrels cutting your power) it is guaranteed to run.
David Thornley
Agreed. *Replace Nested Conditional with Guard Clauses* ftw!
Jonik
happened to me too
Mauricio Scheffer
Couldn't agree more.
Christian Schwarz
+33  A: 

Waterfall development in general, and specifically the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical, and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.

McWafflestix
Tell that to my customer... I'm in the middle of writing some useless "I'm a programmer with a crystal ball so I know exactly what my low-level design will look like in 6 months" specification document :)
Igor Brejc
+8  A: 

I stopped going by the university-recommended method of design before implementation. Working in a chaotic and complex system has forced me to change my attitude.

Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on the smallest possible implementation to get something going first. This is the primary goal. Then I gradually refine the logic and let the design appear by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.

The code will not look at all like what you first thought it would look like. Happens every time :)

Magnus Skog
+15  A: 

Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.

In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.

Now, I just leave them in the project I created them in. In all probability I'm not going to need them, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.

JamesWampler
There's a good quote (I can't find the original at the moment) which was something along the lines of "don't even think about creating a generic routine until you've needed to solve the same problem 3 times".
Dave Rigby
"Three strikes and you refactor" - *Refactoring* by Martin Fowler. **The Rule of Three**, pg 58.
Nick D
+3  A: 

Prototyping in the IDE. Like all newbies I have learnt that jumping into the code is a bad idea. Now I tend to abandon silly ideas before even using a keyboard.

Nippysaurus
+16  A: 

Designing more than I coded. After a while, it turns into analysis paralysis.

Paul Nathan
I occasionally invoke the phrase "If you find that you are thinking too much, stop and do. If you find that you are doing too much, stop and think."
Neil N
That is nice, but how much is too much?
Hamish Grubijan
Too much dependence on UML (Useless Modeling Language). It **occasionally** has its uses. But once I see someone start to draw class diagrams and preach the benefits of "how awesome it would be to generate code from the diagrams" I lace up my running shoes. Plus, Visual Studio has a built-in interactive class diagram generator that does it all automatically and works like the object explorer on crack.
Evan Plaice
+126  A: 


//Coming out of university, we were taught to ensure we always had an abundance 
//of commenting around our code. But applying that to the real world, made it 
//clear that over-commenting not only has the potential to confuse/complicate 
//things but can make the code hard to follow. Now I spend more time on 
//improving the simplicity and readability of the code and inserting fewer yet 
//relevant comments, instead of spending that time writing overly-descriptive 
//commentaries all throughout the code.


Luke Baulch
+1. I was about to post this same answer. I found some of my old programming assignments on an archive disc a few weeks ago. It all looked the same. There was almost a 1:1 ratio of lines of comments to lines of code.
Mike
Totally agree. I used to comment *every single line*. Now I'm working on a project in which modules I write are regularly modified a week or two later. With comments over blocks rather than lines of code I get a better picture of what I was thinking when I come back. Plus, commenting "int a = 0;" is totally pointless ;)
Matthew Iselin
Sounds like you commented *incorrectly*, not too much. Code does not speak for itself. No. It really doesn't. Read the latest NT Insider for a good rant about this. If you think comments will be redundant then you are either wrong or you are doing it wrong. Universities don't teach correct commenting it seems (or bug tracking, or version control... *sigh*). There are way too few comments out there. (and fewer good ones)
Thomas
Code Complete has good tips on commenting, and the data to back it up.
Thomas
Comments should be used to describe *why* the code does what it does (if it's not obvious), not *what* the code does. A possible exception is a crazy bit twiddling / language hack, like Carmack's magic number 0x5f3759df.
Chris Simmons
@Thomas: I personally think the problem is that teaching good commenting is not something a university can show students. Almost all programs at schools are one-off things; students don't get to experience looking back at code they wrote a year ago and not understand it at all. Also, lower-level classes teach really simple coding concepts - commenting at this level is almost necessarily tedious, because of what is happening. In other words, it's like trying to teach someone to swim in a wading pool; it's just not the right context for them to understand the motions.
Daniel Lew
The code shows *how* the program works, the comment describes *why* the program is designed as it is.
Callum Rogers
My take on comments is that you should strive to only have comments on your APIs. By this I mean header files that get distributed, interfaces to your modules, etc. If you find yourself wanting to comment any other code then you should try to name the variables and functions better to make the intention more revealing. However, this isn't to say that this is always possible, but it's what you should aim for.
gommo
@chris: thanks for making me google that pointer. Those are *programmers*. I feel like a typist after reading the fast square root function's history.
voyager
For multiple line comments I recommend /* and */ :)
yelinna
@Thomas: NT Insider article link?
Dave Jarvis
@Dave Jarvis: I don't know if NT insider even puts their articles online. It's free to subscribe to the paper copy though. I highly recommend it even if you like me don't program in Windows at all.
Thomas
+58  A: 

The "perfect" architecture

I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.

And it was crap. The technical purity of the architecture just slowed my dev team down aiming for perfection over results and I almost achieved complete failure.

We now have much simpler less technically perfect architecture and our delivery rate has skyrocketed.

Bruce McLeod
+14  A: 

The use of a DataSet to perform business logic. This binds the code too tightly to the database; also, the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, it tends to trickle to everything the DataSet touches.

Performing any business logic inside an object constructor. With inheritance and the ability to create overloaded constructors, this tends to make maintenance difficult.
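For illustration, a hedged sketch (hypothetical order types, arbitrary tax factor) of why moving the logic out of the constructor helps:

class TaxedOrder {
    private final double total;

    // Risky: business logic runs as a side effect of "new", inside every
    // overloaded constructor chain and before any subclass constructor body.
    TaxedOrder(double subtotal) {
        this.total = subtotal * 1.21;
    }
}

class PlainOrder {
    private final double total;

    private PlainOrder(double total) {
        this.total = total; // the constructor only stores state
    }

    // The logic gets an explicit, nameable home instead.
    static PlainOrder withTax(double subtotal) {
        return new PlainOrder(subtotal * 1.21);
    }
}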

eschneider
+6  A: 

Initializing all class members.

I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:

  • normally means that every variable is initialized twice before ever being read
  • is silly because most languages automatically initialize members to NULL anyway (see the sketch after this list)
  • actually enforces a slight performance hit in most languages
  • can bloat code on larger projects
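For example, in Java (a minimal sketch; C# fields behave the same way, though local variables in both languages do not):

class Example {
    private String name;   // defaults to null
    private int count;     // defaults to 0
    private boolean ready; // defaults to false

    // An explicit "name = null; count = 0; ready = false;" constructor would
    // only repeat what the language already guarantees for fields.
}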
Nippysaurus
Sometimes the consequences of NOT initializing all class members can really bite you in the a$$ though.
muusbolla
Unless you're using a prototype-based language that creates new instances by cloning. Initializing all members can really save you a lot of trouble.
wuub
+21  A: 

Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).

Nippysaurus
+6  A: 

When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.

chuanose
Can't remember the number of times this has bit me...
Preet Sangha
+11  A: 

I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.

Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.

I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.

Greg Graham
+28  A: 

Never crashing.

It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.

Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.

My favorite "don't crash" pattern is this:

public User readUserFromDb(int id){
    User u = null;
    try {
        Statement st = connection.createStatement();
        ResultSet rs = st.executeQuery("SELECT * FROM user WHERE id = " + id);
        if (rs.next()){
            u = new User();
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e); // swallow everything
    }
    if (u == null){
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}

Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
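For contrast, a fail-fast sketch of the same routine (hedged: UserNotFoundException is a hypothetical exception type, and the parameterized query is incidental hygiene):

public User readUserFromDb(int id) throws SQLException, UserNotFoundException {
    PreparedStatement ps = connection.prepareStatement(
            "SELECT * FROM user WHERE id = ?");
    ps.setInt(1, id);
    ResultSet rs = ps.executeQuery();
    if (!rs.next()) {
        // Fail as soon as you can, with a good error message.
        throw new UserNotFoundException("no user with id " + id);
    }
    User u = new User();
    u.setId(id);
    u.setFirstName(rs.getString("fname"));
    u.setSurname(rs.getString("sname"));
    // etc
    return u;
}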

gustafc
What's the likelihood of the user giving you the actual error message, vs your logs producing the issue? (Very low in this particular case, but users almost never quote the error messages!) Do they even read them?
Arafangion
I admit the chance is low that a random user sends you the error message, but the chance is non-zero (trivial example: sometimes you use your own app), and some users actually learn with time what to copy/paste. I'm not saying you shouldn't log (you should), but when the app is broken, it **is** broken. Showing an error message is far better, far more honest to the user than pretending that the user's first name is "error communicating with database" (or even worse, `null` or the empty string).
gustafc
There's a NullReferenceException on line two
oɔɯǝɹ
Thanks, oɔɯǝɹ, I fixed it. (Although it was a bit lulzier with it there: All this trouble to avoid exceptions and other "crashes", and still it unconditionally crashed.)
gustafc
+2  A: 

A few:

  • Started using braces in the same line rather than on a new line (if (... ) {)
  • using camelCase instead of non_camel_case
  • stopped using printf() for debugging
  • started relying on third party libraries rather than writing every bit from scratch

jrh

Why not use printf() for debugging? I moved from over-using the debugger to using it very very rarely.
Thomas
The argument against debuggers is that they tend to get people to just fix the symptoms, not the cause.
Thomas
+10  A: 

Abbreviating variable/method/table/... names

I used to do this all of the time, even when working in languages with no enforced limits on lengths of names (well, they were probably 255 or something). One of the side-effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...

Now I much prefer to call things what they really are, with good descriptive names, including only standard abbreviations. There is no need for useless comments, and the code is far more readable and understandable.

Rhys Jones
abbr's r tribl. rly.
Paul Prewett
Yes, gotta love these types of declarations: void Foo(x1,y,x2,y2,p,r,j)...WTF?!
Ed Swangren
A: 

I used to write few routines, each doing a bunch of stuff.
Now I break tasks into many short routines, where each routine does one specific thing (whenever possible).


Also, my argument declaration style for routines with a long argument list has changed.
Before:

int foo (char arg1, int arg2, float arg3, double arg4)

Now:

int
foo (
  char arg1,
  int arg2,
  float arg3,
  double arg4  )

That's, of course, a matter of taste.

Nick D
That style ain't bad if your argument names are long, or require comments.
Arafangion
I use it when I have more than 3 arguments. Sometimes even with 2, if they are template types.
Nick D
+7  A: 

That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.

I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.

Patrick Gryciuk
+5  A: 

Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better via our own code and to rely upon the language to solve our problems. With the advances in all aspects of user interface and service/data consumption, this is no longer a realistic notion.

There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.

+33  A: 

Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions and it really confused me why they were there: were they recently commented out? is this a dev environment change? why does it do this unrelated block?

Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.

bbrown
I comment out the old code during refactoring, but only until I verify that the replacement code works. Once the new version is fully functional, I delete the old commented lines.
muusbolla
Indeed - I also comment out code, but only for a few days. If I come back and I've realised there is a bit I've missed, it'll get deleted before the new code is worked on.
Colin Mackay
I say check in the commented code once, THEN delete it. There are many times when you test various different bits of code, and you don't want to check in broken code...
DisgruntledGoat
That's a fair tradeoff.
bbrown
Not to mention that version control is your friend.
David Thornley
+1 I worked with a programmer that insisted on commenting **all** of the code that he had refactored or rewritten. It would drive me crazy because sometimes I would have to scroll through 1k+ lines of crap to find what I was working on.
Evan Plaice
A: 

Catching only exceptions you know of in high availability services.

This is one place where I disagree with my own company's advice. The theory is that you should catch only exceptions you know of since you have no guarantee over what the 'bad' thing that happened is. If memory got corrupted or if the CLR itself got wedged, you're not going to recover.

However, when I worked on high availability services, I found that there were often cases where I wanted to express "catch as many errors as you can and keep going". Yes, in theory we could have seen exceptions that we couldn't handle, but with well-tested code in an environment you control (and with not much native code in the mix apart from what the system provides), this turned out to be a better option than only catching exceptions you knew about.

The CLR team's stance on this is "Don't let your thread execute in an unknown state" while my stance is "If you know your scenario, this is probably ok". It may not be ok if you're running a bank website but in most cases, this will give you better availability and not force you to wonder why your app is restarting so frequently.

You can see both sides of the debate at http://blogs.msdn.com/clrteam/archive/2009/02/19/why-catch-exception-empty-catch-is-bad.aspx

Sriram Krishnan
+1  A: 

Header files shall not include other header files.

I used to be strongly opposed to the idea of headers including other headers - based on a bad experience early in my engineering career. Having the headers included explicitly in the order needed right there in the source file seemed to work better.

Now - in general - I'm of the mindset that each header file shall be self-sufficient, i.e., not require other .h files to be included before it in the source file. Especially when developing in C++...

Dan
+2  A: 

Requiring all code to be clean code, even if it is already working.

In academic environments there is such a focus on clean code that the temptation afterward is strong to always clean up ugly code when you come across it. However, cleaning up working code has a number of downsides:

  • The time spent cleaning it up doesn't add any value to the product at that time, while that same time spent debugging or doing feature development does add value.
  • There is a risk of breaking already working code. Nobody is so amazing that they never introduce bugs when refactoring. (I had to eat some humble pie when my bugs got shipped to the customer.)

Of course, once that piece of ugly code needs new features, it is often not a bad idea to refactor it. But this is the point: refactoring and cleanup should only happen in combination with feature development.

Joeri Sebrechts
Who the hell ups this? Crappy code is what leads to problems.
Stefan Valianu
@Stefan: I used to think exactly the way you do, but real-world experience on large systems proved otherwise. If it ain't broke, don't fix it. See http://www.infoq.com/news/2010/06/decision-to-refactor for an in-depth reasoning.
Joeri Sebrechts
+2  A: 

Creating stored procedures for accessing data. Hell to maintain (especially if you develop on a test server and have to maintain another server), and you end up with a gazillion stored procedures called NewInsertStoredProcedureLines, NewSelectStoredProcedureLines... Now that the SQL happily resides hard-coded in the app, I'm a happy camper.

Domagoj Peharda
A: 

Accessing the database directly.
In my older code, I used queries and datasets extensively. Now I use an ORM for most things. It gives me much cleaner code and better reusability. Typically I now only access the db directly in small programs, or when needed for performance.

SeanX
+1  A: 

The most significant change I've made is my approach to N-tier. I had been a believer in the separation of logic along physical tiers and building middle-tier "application servers", going back to Windows DNA using DCOM, MTS and COM+, and later on using .NET Remoting. At the time it seemed reasonable from a security and scalability perspective to build systems this way. But having done it enough times to find that the added complexity (which is significant), network communication overhead, deployment issues, developer training, and the reality that security was never increased (because we never actually locked down firewalls between servers) has led me to conclude that it's seldom justified or warranted.

I'm still much in favor of layering, and of doing so in such a way as to allow tiering if it becomes a requirement - which, I'm continuing to find, it seldom does.

JNappi
+2  A: 

Writing docblock method descriptions that simply reiterated what the method name already told you. The bad old days:

/**
 * Returns the logger obj
 * @return log_Core
 */
public function getLogger() 
{ ... }

Now:

/**
 * @return log_Core
 */
public function getLogger() 
{ ... }

Of course, well-named functions help.

DavidWinterbottom
+1  A: 

I had two changes of mind through my career as a software developer, compared with what I was taught in school and university.

Like many things in life these changes come from experience and observation and those two are contradictory (just like life!).

More or less, the first one describes why/when to use "big systems" over "small systems", and the second describes why sometimes "proprietary systems" have advantages over "standard systems".

I know it's a little long/philosophic answer, but you can skip to the "in conclusion"!


ONE: "Small/Indie" software is just as good as "Big name/Standard" software.

I always wondered why companies use big-name software like Microsoft, SAP, Oracle etc. that costs a lot of money to develop for and license.

I learned a valuable lesson from someone who paid A LOT OF MONEY to use an Oracle DBMS instead of MySQL, which would have been sufficient for the cause because it was a very small amount of data to be stored in the database for the software project.

Basically when you use "Big name/Standard" software like SAP, Oracle or Microsoft you want to buy "security" that is best summarized in "30 years from now I will still find developers for SAP".

Smaller companies can go bankrupt and you have a problem maintaining your software system for a longer period. Maybe the "small/indie" software will do the job but you can't be sure to have it supported the next year.

I've seen it numerous times that software companies (even bigger ones) go under and you suddenly have problems getting support and/or developers (for a reasonable price) on the market for your software system.

In conclusion: There are good reasons like security or support to use "Big name/Standard" software, even if they are expensive and have their own problems.


TWO: Software language/concept/system X is the only right way to do things.

In my younger days I was a purist. Everything had to be this or that, with no grey areas in between. E.g. I did everything in C++ (MS Windows), then Java (Windows/Web), then PHP (Linux/Web), etc... even ColdFusion (Windows/Web) or ABAP (SAP).

Now I don't think there is the only "right way" to do things. I'm now more a generalist than a purist. Also I'm very sceptical of large libraries which are provided by Java etc... or systems like software layers for PHP etc.

Also I'm very sceptical of the OO-mantra that has been accepted everywhere it seems. OO is great in its own ways, but it's not THE solution to every problem. I live by the KISS (keep it simple, stupid) principle and I often find it very hard to learn all the classes/functions of a certain language to just do simple things for a small website project. E.g. I'm always wondering why JSP is used for small simple projects that could be done with PHP in a fraction of the time.

So today I'm very sceptical of large/slow/overhead software systems... often it is better to do stuff yourself for small projects than overkill everything with a large functionality that yet again has to be tailored down to suit your needs.

Most of the time I'm faster in developing a website with database connectivity from scratch (e.g. PHP) than implement it with an (expensive?!) and complex and HUGE library (e.g. JSP) because most of the features aren't even useful.

For example: You want to use weblog software X on your website, which is pretty cool because of the built-in functions like RSS export, web services, etc. BUT there is a serious overhead in learning all the library functionality and conventions of the weblog software... yes, when you have finally understood it, you can use all the sweet functions and features... but in about half the time you could build the 10% of the features you really need from scratch.

In conclusion: Keep it simple, stupid works. Many times a simple (even if 'cruder') solution is better than a complex (but 'nicer') solution. Use the tools best suited for the situation not a fixed mantra.

capfu
+2  A: 

TDD and unit tests in general. At some point I was the advocate of TDD at my workplace, but over time I learned it really does not bring anything to the table, at least with a statically typed language.

Don't get me wrong, I still think automated functional tests are very important to have.

Nemanja Trifunovic
Have you tried retro-fitting unit tests to some old code which wasn't designed to be testable? What TDD brings to the table is the intentional effort to make your code testable, which (usually) means more manageable code.
Igor Brejc
+7  A: 

Checked Exceptions

An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.

Of course, it turned out to be such a mess in practice. To the point of having libraries today like Spring JDBC, which counts hiding legacy checked exceptions among its main features.
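A small sketch of why it sours in practice (hedged; the method names are hypothetical): callers that cannot meaningfully recover must either re-declare the exception or wrap it, and the wrapping is essentially what Spring JDBC automates with its unchecked DataAccessException hierarchy.

import java.sql.SQLException;

class CheckedExceptionDemo {
    // The contract looks airtight: the compiler forces callers to deal with it.
    static int countUsers() throws SQLException {
        throw new SQLException("db unavailable"); // stand-in body
    }

    // But callers that cannot recover must either re-declare the exception...
    static int widgetCount() throws SQLException {
        return countUsers();
    }

    // ...or wrap it in an unchecked one, pushing boilerplate everywhere.
    static int safeCount() {
        try {
            return countUsers();
        } catch (SQLException e) {
            throw new RuntimeException("query failed", e);
        }
    }
}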

Gregory Mostizky
+1  A: 

Compact code.

I used to love getting any given function down to the absolute essentials, and often had nested function calls to reduce the line count.

After having to maintain code a few years old, I realised that reducing the line count simply made the code less readable, and taking shortcuts only resulted in pain down the track!

Farid
+1  A: 

Documenting the code with extensive inline code comments. Now I follow Uncle Bob's view that the code should be self-documenting: if you feel the need to write a comment about certain piece of code, you should refactor the code instead to make it more legible.

Also, code comments tend to get out of sync with the actual code they are supposed to describe. To quote Uncle: "the truth is in the code, not the comments".

Highly recommended book: Clean Code: A Handbook of Agile Software Craftsmanship

Igor Brejc
True, but sometimes a few short comments can save someone who is skimming the implementation a lot of time
RobS
In math intensive code, a comment to an external reference can be a life saver.
ceretullis
+1  A: 

Never commenting code, hoping to always rely on the notion that code should describe itself.

When I first started programming I quickly adopted the idea that extensive comments are useless, and that instead code should be written in such a way as to describe itself. Then I took it to an extreme, where I would never comment code. This works well, at times, for code representing a business domain, because the detailed documentation needs to be somewhere else (like a DSL, or document) and the meanings of class members are obvious. However, when developing more 'frameworky' code it becomes more difficult to infer meaning. This is true of myself looking back at my own code, not to speak of others needing to use it. I certainly use the comments for .NET Framework classes, and other frameworks, so why shouldn't I write them for my own frameworks? Normally, I only comment classes, or methods if they have non-obvious characteristics, or have certain dependencies on parameters, and have special types of behavior.

Moreover, I realized that commenting certain types of classes facilitated my thinking process. When I am able to verbalize the purpose and characteristics of a class, I may also rethink its entire existence.

In effect, on the spectrum between no-comments to essays for each code block, I have inched away from no-comments, toward reasonably and effective use of them. In the future, when the language itself allows for the declaration of more rules, use cases, etc., such as DbC, more use of expressions over statements, the need to comment will diminish even further. In the meantime, comments remain useful.

eulerfx
A: 

No duplication/code reuse. I fell for this big time. Duplication is fine if it creates less work overall than the work needed to remove the duplication. In some ways this is a type of over-architecture.

Jonathan Parker
+1  A: 

Writing my code in Spanish.

SourceRebels