views: 90643

answers: 415

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

+85  A: 

The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.

Max
I agree. Not necessarily that we need more gotos, but sometimes programmers go to ridiculous lengths to avoid them, such as creating bizarre constructs like do { ... break; ... } while (false); to simulate a goto while pretending not to use one.
Ferruccio
Especially when you're taught what GOTOs are for an entire semester and how to use them, then the next semester a new lecturer comes along chanting the death of the GOTO statement in a folly of unexplained and illogical rage.
Kezzer
I agree as well; one of my old lecturers would go mental if you ever thought about using them. But coding to avoid them may end up being worse than using them.
Mark Davidson
I've used GOTOs in switch statements to have logic jump all over the place, and had no problem with it (apart from the fact that I got FxCop to actually complain about the complexity of the method in question).
Dmitri Nesteruk
I have seen only one example of good usage in the last 5 years, so make it 99.999 percent.
Paco
I've never had to use a goto for anything. Anytime when I actually thought goto might be a good idea, it was instead an indicator that things weren't flowing properly.
PhoenixRedeemer
No no no no no. So much production code is so wildly obfuscated and unclear already. You would be giving more tools to the monkeys.
Steve B.
I don't think I can come up with a single good use of GoTo in a .NET application... can you give an example of a good use of it?
BenAlabaster
Goto is very useful in native code. It lets you move all of your error handling to the end of your function and helps ensure that all necessary cleanup happens (freeing memory/resources, etc.). The pattern I like to see is to have exactly two labels in each function: Error and Cleanup.
Jesse Weigert
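A minimal sketch, in C#, of the cleanup-label pattern Jesse Weigert describes, collapsed here to a single label (the method name, file layout, and 16-byte header are invented for illustration; idiomatic C# would of course use using/try-finally, but the shape of the C idiom survives the translation):

using System.IO;

static class HeaderCopier
{
    // Every error path jumps forward to a single Cleanup label, C-style,
    // so all resources are released in exactly one place.
    static bool CopyHeader(string source, string destination)
    {
        FileStream input = null;
        FileStream output = null;
        bool ok = false;

        input = File.OpenRead(source);
        if (input.Length < 16) goto Cleanup;           // too short: nothing to copy

        output = File.Create(destination);
        var header = new byte[16];
        if (input.Read(header, 0, 16) != 16) goto Cleanup;

        output.Write(header, 0, 16);
        ok = true;                                     // success: fall through

    Cleanup:
        if (output != null) output.Dispose();
        if (input != null) input.Dispose();
        return ok;
    }
}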
The explanation I've heard is that GOTOs make the stack non-deterministic. If you arrive at a line via a GOTO, there's no way of telling how you got there. That makes debugging much harder.
dj_segfault
As the years have gone by the need for GOTOs goes down and down as languages add constructs that remove the need for some uses. I'm down to about 1 GOTO per year now but there are times it's the right answer.
Loren Pechtel
Nice to see that this did indeed generate a great bit of controversy!
Max
I find gotos are not very readable. I despise them in SQL, so why would I use them anywhere else?
Jeremy
@Jeremy, Can you do goto in SQL? SQL is a declarative language. Which db vendor has SQL that knows a goto?
tuinstoel
@tuinstoel, MSSQL has supported it since at least 6.5. I use it a lot to begin, commit/rollback transactions in stored procedures.
Jeremy
@Jeremy, Don't you mean T-SQL instead of SQL?
tuinstoel
To my knowledge, in assembly/machine language all branching is a form of goto. What does your high-level language get compiled into? Nothing wrong with the occasional "low level style" shortcut if it is done properly.
Andy Webb
Continue = goto for loops; break = goto for blocks; switch = goto madness. Goto is obviously not a problem if used with some sense, then. If you are using an OO language and you use goto for Error and Cleanup, then you scare me. RAII and its counterparts should be considered your friends.
Greg Domjan
+1 for controversy :). Oh, I know what GOTO's are, I started with BASIC like many of you. We need more GOTO's like we need DOS 8.3 filenames, plain ASCII encoding, FAT 16 filesystems, and 5 1/4 inch floppies.
postfuturist
Just found this: http://stackoverflow.com/questions/84556/whats-your-favorite-programmer-cartoon#301419
Cameron MacFarland
A good example of goto: http://stackoverflow.com/questions/416464/is-it-possible-to-exit-a-for-before-time-in-c-if-an-ending-condition-is-reache#416555
FryGuy
I used goto quite a bit in C programming - generally as a finally block. I have a file handle I need to close, memory I need to free etc, so at the point where I would return early, I just set a return code and goto the cleanup: label.
Hamish Downer
Gotos are also commonly used to code up state machines. You can use an enumeration, a switch statement, and a loop to achieve the same effect. However, all that really does is mask the true structure of your control flow (and slow things down a bit).
T.E.D.
Goto can be OK. My rule of thumb: if a good programmer, who doesn't often use goto, is prepared to defend it, then it's OK. And it probably is a once-a-year thing, if that. Dmitri, sounds like FxCop is right and you're wrong.
MarkJ
This thread considered harmful. Edsger Dijkstra is rolling in his grave. :)
Darcy Casselman
Agreed. I am struggling to translate numerical code from Fortran into F# because it lacks an efficient goto construct.
Jon Harrop
The problem with GOTOs is that they are like giving a little alcohol to a recovering alcoholic. Incredibly dangerous for programmers coming over from BASIC who are happy to write unstructured code.
Austin
People who think gotos are evil have never programmed in C, or if they have, they did it poorly. Gotos are the *best* way to do error handling in plain C, and repeating Dijkstra's quote dogmatically only demonstrates ignorance. Please read this before complaining about gotos: http://eli.thegreenplace.net/2009/04/27/using-goto-for-error-handling-in-c/
catphive
To add on to catphive's point about using goto's in C, here's a discussion about gotos by the Linux kernel developers when one man jumps the gun on a goto and proceeds to recommend avoiding it at all costs: http://kerneltrap.org/node/553/2131
Coding With Style
Actually, the discussion of the use of goto in Linux made me change my mind about whether goto is indeed harmful in development. I've learned not to just trust what you've been taught :).
OnesimusUnbound
I needed gotos in C because it has no equivalent for Java's "continue loopname;"
luiscubal
I once got sent home from college for telling someone to use a GOTO :P
ing0
Events are the modern GOTO statement. You arrive from anywhere, anytime, with extra baggage of data that GOTOs never had.
Tom A
I was always taught not to use GOTOs because they create spaghetti code and are for the lazy (and that if you do use them, something is wrong with your flow). However, JUMP statements, which are essentially GOTOs, are very useful in assembly.
Dennis
"They have a purpose and would greatly simplify production code in many places. That said, they aren't really necessary in 99% of the code you'll ever write." +2 if I could, sir, that could not have been written better.
Jake Petroules
Sorry, but I'm very, very glad not to have seen a GOTO statement since porting a QuickBasic program to C#. Give me a break statement any day.
wonea
+13  A: 

Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.


Otávio Décio
Yes! His ideas about hierarchical data structures are academically elegant and totally useless.
Charles Bretana
Well, I like Celko but I agree with you re: surrogate primary keys!
Mark Brittingham
Agree in part, surrogate keys are definitely more convenient when accessing data, but I try to identify a natural key as well and usually set it up as a constraint. So why not both?!
tekiegreg
I have no problem with natural keys being used for convenience, but primary keys should be immutable. I once had a system that used SSNs as PKs, and sometimes persons wouldn't have one (as children) and then they would. Try to change a PK - what a mess...
Otávio Décio
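For illustration, a minimal C# sketch of the distinction Otávio describes (the class and property names are hypothetical): the surrogate key stays immutable for the lifetime of the record, while the natural attribute is free to be absent at first and corrected later without rewriting any foreign keys:

using System;

class Person
{
    // Surrogate primary key: assigned once, never changes, carries no meaning.
    public Guid Id { get; } = Guid.NewGuid();

    // "Natural" candidate key: may be missing (children) or corrected later,
    // which is exactly why it makes a poor primary key.
    public string Ssn { get; set; }

    public string Name { get; set; }
}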
I can agree with the concept that once your autonumber keys get mismatched, there's no way to fix them. But the solution isn't "natural" keys; the solution is never to expose the keys to your users.
Kyralessa
I wish I could go back a few years on my current project and tell myself not to use a natural key. Now we're stuck with it and kludging around it. +1
Marcus Downing
@ocdecio: Fabian Pascal gives (in chapter 3 of his book, as cited in point 3 at the page that you link) as one of the criteria for choosing a key that of **stability** (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint. So you actually agree with him, but think otherwise. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, **think**, use your brain instead of a dogmatic/cookbook/words-of-guru approach".
MaD70
One of the classic mistakes is to assume that just because a candidate natural key, such as SSN, is by definition unique, you will receive unique values. People may lie or make mistakes, and you then have a chance of collision when the "real person" comes along.
Andy Dent
+62  A: 

Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience, when I mention to another developer that they shouldn't be doing everything in the page load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.

Toran Billups
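For concreteness, a minimal sketch of what Toran is asking for, in C# (the page and method names are hypothetical, and the bodies are stubs): the page-load handler only orchestrates, and each responsibility lives in its own small, separately testable method:

using System;

class CustomerPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var customer = LoadCustomer();     // data access only
        BindCustomerDetails(customer);     // presentation only
        LogPageVisit(customer);            // auditing only
    }

    string LoadCustomer() { return "Jane Doe"; }               // stub for illustration
    void BindCustomerDetails(string customer) { /* bind to controls */ }
    void LogPageVisit(string customer) { /* write an audit entry */ }
}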
How is that controversial?
Vinko Vrsalovic
Agree, but not very controversial?
Ed Guiness
it's controversial because the ugly mess that most people call MVC is mostly a 'do everything'
Javier
Really? I actually thought that MVC was the opposite to that.
Leonardo Herrera
Upvoted for lack of controversy!
spender
This answer seems to stir up a bit of controversy on its controversial-ness. ;P
strager
I Agree RE: MVC - really hard to limit method bloat on the controllers
Harry
Re MVC: if method bloat is the issue, then make more controllers. They shouldn't be bloated with methods; it doesn't feel right if that happens - it feels like the controllers are trying to do more than they should.
Pop Catalin
If you don't think this is controversial, you probably don't know how far you can go with this. :-)
hstoerr
+45  A: 

If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent...

Gareth
+1 up vote, although I think you've upset his fans ;)
Shane MacLaughlin
Yes, apparently this is a very controversial view
Gareth
BLASPHE---!! Um, I mean, yes, I quite concur.
Mike Hofer
It does appear that writing a book on C# doesn't also mean you know everything about VB ;)
ChrisA
I think you might want to bring yourself up to date on the Jon Skeet facts. Remember: "Can Jon Skeet ask a question he cannot answer? Yes. And he can answer it too." He is omnipotent!
Totophil
At first I thought you said John Skeet isn't impotent.
John D. Cook
@Totophil: Interesting comment when you consider: Jon Skeet asked this question (and he posted an answer...)
James Curran
@John D. Cook: Well, he isn't: http://moms4mom.com/users/111/jon-skeet
Brian Ortiz
+5  A: 

In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)

Giraffe
That's good. Along the same lines is casually insulting people in the name of "truth". That particular virus seems to have a reservoir in grad schools, like the one I attended.
Mike Dunlavey
+20  A: 

I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of fields and causes a large quantity of encoded data at the start of every web page. The bigger a page gets in terms of controls on a page, the larger the ViewState data will become. Most people don't bat an eye at it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on all ASP controls if they're not being used. It's either that or have custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there's probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)

Kezzer
Could you highlight your controversial opinion... is it "viewstate is bad" or something else?
Ed Guiness
Nope, it's "ViewState is enabled by default, when I really don't think it should be, but having it disabled by default required custom controls"
Kezzer
I expect anyone who has worked on ASP.NET would agree with this. We have a page to search a third party system that has some LARGE drop down lists on it. The ViewState doubled the already 200Kb page size.
pipTheGeek
I don't think that experienced webforms developers will find this particularly controversial...most of us will agree with you!
Mark Brittingham
Yup, we encounter the page size doubling from time to time, and sometimes even more. The page renders slower, more bandwidth is used, and it's a nightmare to track down problems when you're viewing the rendered page source.
Kezzer
The interesting thing about this is that in the majority of cases ViewState is not needed at all!
etsuba
Don't throw so much crap on a page if Viewstate is really a problem. You probably have a design problem if you really have that much viewstate stuff on a page.
Paul Mendoza
Have you tried programming without ViewState? I can promise you that 5 minutes with JSP will make you *run* back to ViewState. Seriously, the ViewState is *NEVER* the problem, the problem is the developer using the ViewState!
Thomas Hansen
@Paul, I insanely agree! Don't throw so much crap in your page if you're having ViewState problems - go back to design!
Thomas Hansen
Try ASP.NET MVC, it's a joy to program with.
Dave
You do not have to turn ViewState off for each and every control. You can do it in the @Page directive.
xanadont
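Putting Kezzer's and xanadont's points together, a hedged sketch of both ways to switch ViewState off (the page and control names are invented; EnableViewState is the real property on System.Web.UI.Control, and the @Page directive accepts the same switch page-wide):

using System;
using System.Web.UI.WebControls;

public partial class SearchPage : System.Web.UI.Page
{
    protected DropDownList largeDropDownList;   // hypothetical control declared in the markup

    protected void Page_Init(object sender, EventArgs e)
    {
        // Per control: this list's contents are no longer serialized into __VIEWSTATE.
        largeDropDownList.EnableViewState = false;
    }

    // Page-wide alternative, in the .aspx markup:
    //     <%@ Page Language="C#" EnableViewState="false" %>
}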
+749  A: 

The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?

Steven Robbins
This is exactly what I was going to write, so instead I'll just say amen!!
xando
+1, agreed completely... though I don't think this is a very controversial statement.
Kon
It doesn't sound controversial, but the number of times I get a "WTF?" face from people when I question the use of a particular tech/method/whatever in a meeting is quite alarming :)
Steven Robbins
Yeah I gotta second that - it's "think for yourself", basically.
Dmitri Nesteruk
This is not controversial, it is true ;-).
Gamecat
Not only is it not controversial, but it's not true. I'm happy to use my brain, but there's a lot to be gained from looking at people smarter than you and saying - This smart person does this thing this way and I'd be wise to listen.
seanyboy
For example - every time I use someone else's library or implement a solution using a pattern - then I'm "jumping on a bandwagon." The most amazing thing about modern development is the fact that we can re-use the things other, smarter people have created.
seanyboy
I think you are missing the point entirely, seanyboy... the point is not to ignore any other opinions or technology; it's to evaluate them yourself and apply them where you feel they will be of value, rather than blindly implementing something because AN Other said it was the way to do it!
Steven Robbins
You obviously have to make judgement calls about the techniques and technologies you use, but this should not mean that you *never* use other peoples techniques or technologies.
seanyboy
@beepcake and @seanyboy - I think you are heatedly agreeing with each other :)
Ed Guiness
Indeed we are.. I never use the word never ;)
Steven Robbins
we probably are.
seanyboy
BANDWAGONS BEGONE! Software isn't powered by popularity. It's engineering. Every technique has pros and cons for any given purpose.
Mike Dunlavey
I think what Beepcake meant was that many people apply "best practices" unthinkingly, either because they didn't understand WHY they're good, or because they once got enthusiastically convinced by the reasoning and never stopped to think about whether it really applies universally.
Michael Borgwardt
Absolutely spot-on. +1 and sorry it can't be +10
Brent.Longborough
To really spice up the controversy, if your brand of "best practice" includes a slavish, single-minded devotion to any single programming language, platform, editor, IDE or technological trend, you are part of the problem.
dreftymac
Excellent write up!!
featureBlend
If it weren't against the rules I would create 10 more accounts to vote you up on this one. I see this all of the time and it's depressing.
nlaq
@Nelson LOL, I just wish I got rep for all these up votes :-)
Steven Robbins
ahem - unit testing... ahem - pair programming... ahem - scrum...
mson
Hell yes. 'It's 'not right' to do something...' Why? 'Because it's bad practice.' But why? 'It's not the right way to do it', etc. etc. It's all right to characterise something as bad practice, but be able to back it up. Hey - good life strategy overall. :-)
kronoz
full agree. vote up! :)
ecleel
This is a good statement but it's not controversial...
TM
Some people use the newest version of a technology but only its oldest features.
In The Pink
This answer reminds me of the fable of the five monkeys: http://www.contactandcoil.com/Articles/StandardsfortheSakeofStan.html
Scott Whitlock
"...people shouldn't just blindly jump on something without thinking about WHY this 'thing' is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?" It is also important to apply this to your nightlife
JoeCool
"Best practices" is in fact a meaningless term most of the time, as it is primarily used as a debating cudgel to try to claim that certain practices are better, without any actual technical evidence that this is the case. 99% of the time if you ask someone where the "best practices" they're talking about are documented (or who exactly defined them to be "best practices"), they'll go red in the face and change the subject.
dirtside
The phrase I tend to use is, "Blindly following Best Practices is not a Best Practice."
Dave Markle
@Dave, great advice. I blindly follow it every day.
Daniel Daranas
Yee-haw! Cowboy coding for the budweiser-sipping win. When people disregard the opinions of others and "use their brain", that's when I hand in my resignation.
bzlm
This is so true now that Google Go has gone viral.
Barry Brown
Try to offer an improvement to best practice - but make sure you're not refuting it just because you're engaged in a battle with your own ego ('I know better than best practice because I'm a genius' / 'what do they know anyway' ... etc)
codeinthehole
seanyboy, by arguing you just prove that it's a controversial opinion
Vitalik
While I agree with the generalization of David's post, I also find that those who argue against using best practices or other people's code more often than not do so because it goes beyond their comprehension or skill level. I often hear "we use the KISS principle here", and I often hear that from those against using best practices, patterns, or frameworks.
OutOFTouch
Congratulations, +600 score now!
Daniel Daranas
@seanyboy I call that using your brain also ;-)
Hannes de Jager
@bzlm: Notice that "using your brain" requires having one. I agree that some people shouldn't try to rely on something they don't have (or care to *turn on* before using).
slacker
To me it's not controversial at all. First comes brains/common sense, and then design methods/patterns and everything else.
kudor gyozo
Couldn't agree with you more. To me, that is the difference between a good programmer and NOT a good one.
Adil Butt
+6  A: 

I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.

marcumka
I think you overestimate the amount of memory management that occurs in modern C++. C++ now uses the RAII idiom everywhere. Memory leaks aren't really much of a concern or an issue anymore.
Doug T.
if you can't track memory leaks, you're not fit to use high-powered tools.
Javier
agree with Doug; with some simple rules of thumb, memory leaks are mostly eliminated.
Javier
... and performance is a much-misunderstood subject.
Mike Dunlavey
Sometimes raw performance matters.
David Thornley
Not to mention that not all programs run exclusively on a recent version of Windows.
David Thornley
I completely agree. Using a non-memory-managed language is like taking a shortcut through a minefield rather than going a slightly longer route on a comfortable and well paved road.
glenatron
And sometimes you need to take the shortcut, no matter what. I need all the performance I can get, in what I'm paid to do. This is not true for most people.
David Thornley
Should I buy more machines for all users of the software I write? There are millions of them, and all of them want their programs to run fast.
Nemanja Trifunovic
but ... but ...... I don't think that's controversial, is it?
hasen j
Hey how about not worrying about performance until it actually becomes an issue, and then when it does, profile, Profile PROFILE. It is at that point when it's legitimate to decide whether to take that shortcut through the minefield. It's a cavalier waste of money and time to decide before necessary
Breton
I firmly believe that we don't need airplanes, we can always use cars, right...? And if we need to cross the open sea, we could just use a boat, right...?
Thomas Hansen
Hi. My name is Larry. It's nice to meet all of you. :) I thought I was alone in this world; then I found all of you who think just like me... As you'll see in MY answer to this question, I'm a HUGE fan of C/C++, and feel that if you can't do C/C++ right, then don't do it at all. C# is NOT required.
LarryF
Pipe-dream reasoning. Earth calling marcumka
Seventh Element
**Right tool, right job.** Go try and code that kernel or NIC driver in C# and get back to us. Yes, there are plenty of folks who stick with the language they know, but your unqualified answer is overly broad. (And that from a Java developer!)
Stu Thompson
As if C# doesn't have memory leaks...
Matthew Flaschen
If we had really well written frameworks to run managed code on, then I'd say you have a good point. Sadly, the .NET framework gets more bloat heaped onto it with every release, and the truth is that C++ remains about the only way for a developer to write at a reasonably high level and be assured of [the ability to attain] good performance.
Mark
Memory leaks are not possible in C++ if you use the right techniques: use RAII/smart pointers instead of raw pointers/handles. In the worst case, use Valgrind.
blwy10
+15  A: 

I really dislike it when people tell me to use getters and setters instead of making the variable public when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is actually both easier to read and prettier than set/get in the same example.

I don't see the reason by making:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, takes more time when you do it over and over again, and just makes the code harder to read.

martiert
Agreed. Getters and setters violate encapsulation just as much as exposing objects directly does. There is no real point to them (except maybe in an external interface).
Ferruccio
There's actually a good reason to use setters: You can do some checking on constraints before assigning the new value to your variable. Even if your current code doesn't require it, it will be much easier to add such checks when there's a setter.
Jorn
I was very glad there was a setter on a variable once when I had to make sure some processing was done when it changed.
David Thornley
Actually, I think Ruby has something that gets you both - it's called virtual attributes. It allows you to have checks on your assignments and still be able to access the data as if it were a public member.
Cristián Romo
Python lets you do that as well.
sli
Setters allow you to add contention in multithreading environments. Just lock when you set. Of course, it is not always the case that your code will end up being accessed by multiple threads, or is it?
David Rodríguez - dribeas
But this being 2009, who's still using an IDE that does not create the getters and setters on the press of a key...?
Arjan
It's not just that I have to write the code; the getters and setters obfuscate the code itself by, in 95% of my applications, taking up space and just being plain ugly.
martiert
I guess C# gives you an easy way to have both; is this Java?
rball
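Picking up rball's point: in C# a property keeps the a.x = something; syntax martiert prefers while still leaving room for the validation Jorn mentions. A minimal sketch with hypothetical names:

using System;

class Point
{
    // Auto-property: callers write p.X = 5, exactly like a public field.
    public int X { get; set; }
}

class CheckedPoint
{
    private int x;

    // Same call-site syntax, but a constraint check can be added later
    // without changing a single consumer.
    public int X
    {
        get { return x; }
        set
        {
            if (value < 0) throw new ArgumentOutOfRangeException("value");
            x = value;
        }
    }
}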
I had / have this opinion in some cases, but, one VERY important fact for me is that you can't 'override' a public variable. If the class in question is final, sealed, whatever - cool... AND if you're basically saying extenders should never be able to do anything on set / get ... ever ...
Gabriel
In many languages you can change a public field to a property without requiring any changes to code that consumes it. You would, however, force a recompile (in non-interpreted languages at least), which adds some constraints if you're shipping opaque libraries to external customers.
Richard Berg
And you set a breakpoint on a public field how, exactly? Setters are brilliant for exactly this reason - you can easily see what code is influencing a value.
Mark
You *must* use getters and setters when you code to an interface!
Thorbjørn Ravn Andersen
1. Use an editor that shortens the process. 2. Using setters and getters is much safer than directly accessing the variable: what if you write a class with a variable inside, counter, and incorporate it into code (maybe in 100 classes), and now suddenly decide that counter cannot be negative? Using a setter can help solve problems like these... 3. Sometimes exposing variables can be dangerous; e.g. exposing TOS in a stack class.
Salvin Francis
@Richard Berg In VB6 you could change a public field to a property and vice versa without requiring any changes to code that consumes it, not even a recompile. It's one of the few areas where VB6 was IMHO better than .Net
MarkJ
@Thorbjørn -- not necessarily. Just because the designers of C#/Java decided to disallow fields in interfaces doesn't make it an inherently bad idea. Direct access is the dominant idiom in languages as diverse as C and Ruby.
Richard Berg
@Mark -- set a data breakpoint. Your CPU has hardware interrupts for this exact purpose. Getting it to work in a managed language is a little challenging, but not any harder than the problems inherent to soft-mode debugging generally.
Richard Berg
@Richard Berg: I don't get you - direct access *is* a dominant idiom for C, but definitely not for Ruby - actually, without reflection, there is no way in Ruby to do direct access. What Ruby does is give you an extremely easy way (`attr_accessor :x`) to generate getters/setters for an attribute which are syntactically transparent; i.e. you'd still use `p.x` and `p.x = 3` instead of `p.getX()` and `p.setX(3)`, but they're still methods. "Direct" instance variable would be `@x`, and you can't use a dot notation with it (i.e. `p.@x` is ungrammatical).
Amadan
+584  A: 

I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.

Learning
my first language in school was called "visual logic" which did just that
I taught CS, and teaching/learning is a lot easier with what I call a "nanny language" - i.e. a language that assumes you're a klutz. Beyond that, I agree with you.
Mike Dunlavey
I feel the same. We were taught Java in Uni but it was taught in a very functional way. I think inheritance was one of the last things we were taught in the "Learn how to Program 101" class.
smack0007
Java was invented so that any minimum-salary halfwit could "do programming"; as a result, many do.
Brent.Longborough
The first year curriculum at my university has recently changed from Java to Scheme. The faculty finds that the students learning Scheme are better equipped in later years (and they supposedly pick up java quickly).
Albert
Interesting. I thought java was selected for teaching mainly in environments with few CS faculty - so those making the decisions merely chose what they knew was popular. I haven't encountered many people who actually felt strongly that Java was a good choice.
Joshua Swink
Counter proposal - C++ is the WORST first language to teach, IMO.
Huntrods
@Huntrods: Personally, I think as soon as someone understands the basic concept of programming, it should be on to C++ with them. Yeah, it's hard, but it makes ya tougher ;) I did most of my learning on C++ and nothing after was challenging.
PhoenixRedeemer
To elaborate: I think C is one of the best "first" programming languages - EVER. I also think C++ is the b*****d child of a very large committee, and is among the WORST languages ever. It's a horrible "version" of C, and a worse "OO" language, IMO.
Huntrods
What the heck - might as well put that last comment in a response...
Huntrods
lots of C and .NET fanboys here, you know it
01
My alma mater works closely with companies like Intel, so it teaches C first, then C++, etc. We even learn assembly on a PDP11. No sandboxing for us...
Uri
I wish more schools taught Scheme (or similar) as a first language. I've learned more from watching the first two of the Abelson and Sussman lectures from MIT than any of the "Intro to Programming" courses I've attended.
romandas
I was taught Java at uni, but I never took it seriously, being a WinAPI coder back then. The C++ course was pathetic though. Ugh, uni memories :(
Dmitri Nesteruk
I think C should be the first language taught, because it makes you need to understand more of "what's under the hood"... Once you can code C well, then have the second language be something very OO. Then something very functional. After that everything is easy.
Alex Baranosky
C I can understand as a first language, but despite its similarities I don't think C++ makes a good second language. I'm not sure there's an awful lot to be learned, in general, from learning C++, other than C++ itself. It takes a lot of effort to learn and provides little benefit.
Calum
I agree that every student should learn C, but in school Pascal is probably better, as it has much better (clearer?) structure.
AnSGri
let's not forget that memory-leaks *can* and *do* happen in Java (just ask me: I have several examples). But, I strongly agree that using third party libraries without considering custom implementations can lead to disastrous results. I've seen it happen on numerous occasions.
Ryan Delucchi
Java is so portable and has such a great graphics library that college kids can write games with it.
Tom
Stop bullshitting people! If they don't wanna learn programming then that's up to them. Throw them to the wolves and see how they fare. The first programming language I was taught at my college was Haskell, and it made me a better programmer. It was difficult at first, but you learned something new.
John Leidegren
I teach at university level and I think an object-oriented language is a good first language; Java was one of my favorites. But nowadays I actually prefer Python, because it is a real scripting language with fantastic syntax and multi-paradigm support, and Java has become harder to handle for beginners.
P-A
I think that one requirement for a first language is that it be hard to write obfuscated code. All too many programmers who have C or C++ as their first language write illegible code. That being said, I also believe that every programmer should learn C, just not as their first language.
DLJessup
I'd honestly say, I like JavaScript as a first language. You can start off learning functional constructs, without worrying about inheritance models. Splitting a first year with HTML + JavaScript, and the second half going through a low-level language, like C. Higher languages can be done later.
Tracker1
when I was a lad, back in the 80s, it was Pascal as a first language, then C, Modula II and Ada. OO was being built by Bjarne Stroustrup
Quog
I think Java is the best first language to learn simply because it has the best introductory book associated with it. Head First Java is so much better than other books that teach object-oriented programming to beginners that it pulls Java up above other languages.
Bill the Lizard
Everyone needs to know the mother language!
Daz
Everyone should learn [pet language] first, because of [pet feature(s)]. Personally I don't think what first language you choose is very important; it's far more important that it's not the only one you ever learn. Having a broader outlook leads to better developers.
Richard Nichols
I teach at university level, and to the best of my knowledge, we've never given an object-oriented language as a first language in our courses. We currently give a subset of C that excludes pointers and memory management, to exclude language-specific details from the education. It is reasonable in the sense that it doesn't attempt to teach OO design at the same time as basic imperative programming, but C is relatively low-level and is in some sense encumbered by its closeness to the hardware, so I don't really think it is an ideal first language. We're switching to Python now, though.
Lucas Lindström
I learned in this order: C, C++, Java, .NET... I like C++ before GC languages so you know enough to be grateful for the collector. C is a great first language. You're just learning about control flow and loops... it's not like you're talking to the hardware at that time.
dotjoe
@Lucas Lindström: I do think that C should be used as a first language, but do not castrate students by not teaching pointers. I've seen that many students not able to grasp pointers in the first month, were never able to understand them.
voyager
When I started my first programming job 20 years ago, I was the only one of my group who had never coded with punchcards. Everybody thought that C coders were coddled with CRTs and magnetic media. Just because something is old does not mean that it is the best choice for a first language. I think it's reasonable to pick an environment with fewer barriers to entry.
David Chappelle
I agree... this is a great answer!
Kwang Mark Eleven
I totally agree with Albert. My former university's first language is Scheme, and it's great. Before university my first language was C, and that had a detrimental effect on many of the newcomers.
Luis Filipe
Bah. I learned QBasic in elementary school, Visual Basic 6 in high school, and C++ in college, though we had to rough it for a year before we learned anything about the STL -- implemented our own linked list classes, etc. People will hate on me, but QBasic is great for noobs once you teach them that goto = bad, functions = good. Then it's on to C++, teach it as C with some nice added features.
rlbond
First teach MIPS assembly, then Scheme, then Java or whatever...
James M.
The control structures are the same in C, C++, C#, and Java, so why spend time learning printf and std::cout instead of Console.WriteLine and Label.Text? It's more motivating when you know you're learning something you can use, instead of something obsolete. C and assembly can be learned afterwards, just to learn how the computer works, not their API.
csuporj
@voyager - I agree. I taught myself C after my first language (Perl from a book and the Internets). I completely didn't grasp pointers for a while, and after a month of headache it suddenly all made sense, and it was totally worth it.
Chris Lutz
@csuporj: C and C++ may be old languages, but they are still widely used. I think that disqualifies them from being obsolete.
Cristián Romo
I think that a first language needs to be object-oriented and needs to truly show the pupil what the difference is between reference and value, i.e. pointers. C++ should be the first language for beginners.
kzh
The best first languages are those that don't bog down the beginning programmer with arcane syntax (missing semicolons, misplaced braces, missing parentheses). BASIC is a more forgiving language to edit than C++, for example. Most beginners are probably still learning to type.
Loadmaster
C++ is a great first language! Because it is a hybrid OOP language, you can learn more from it than what you could from Java. You can do some procedural, object oriented and generic programming with C++. Memory management is a plus too! Once you understand the mechanics of memory management you will understand why it is so costly to place some new in your Java code. The STL is pretty much straightforward too not to mention fast.
Partial
1. C, 2. C++, 3. Java
Salvin Francis
I think Lazy K should be the first language students learn. Programming is hard; in engineering courses we weed out undesirables maliciously, and while there may be shortages of engineers and programmers both, I've never met an engineer who couldn't "engineer", whereas I've met dozens of professional programmers who can't program.
marr75
I didn't understand how much value Java and other garbage-collected languages add until I had to work with manually managed applications. I agree on that. But both C/C++ and Java are important to learn.
kudor gyozo
I learnt VB6 first by reading a book, thinking that it was software like MS Word. I also didn't know what programming was at that time. And then slowly I got to know other things. But surely I was able to grasp things quickly in the initial stage, which I'm sure I couldn't have done that fast in any other language, at least without anyone else's help.
Ismail
I vote for C# as the first language to learn.
Dmitri Nesteruk
Somebody asked me (about 12 years ago) which would be the best 'first' language. I said Java. My coworker said Assembler. He argued, people who are not smart enough for assembler would get frustrated easily and do something else instead, which would mean less trouble for the remaining programmers...
Thomas Mueller
If your first exposure to programming languages isn't until CS 101 then you're doomed. The best first programming language is one you discover and learn for yourself out of passion. For me that was C.
burkestar
+46  A: 

Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.

Jon Skeet
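A minimal C# sketch of the virtual-call interaction the answer describes (class and method names invented for illustration): the base class's undocumented detail that WriteLine funnels through Write is exactly the assumption a deriver ends up depending on.

using System;

class MessageWriter
{
    public virtual void Write(string text) { Console.Write(text); }

    // Undocumented implementation detail: WriteLine delegates to Write.
    public virtual void WriteLine(string text) { Write(text + Environment.NewLine); }
}

class TimestampedWriter : MessageWriter
{
    // Overriding Write also changes WriteLine's output - and if the base class
    // is later changed so that WriteLine no longer calls Write, this quietly breaks.
    public override void Write(string text)
    {
        base.Write(DateTime.Now + " " + text);
    }
}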
I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction. I'd call that returning to their C++ roots.
duffymo
C# isn't really rooted in C++ though - it's rooted in Java, pretty strongly. IMO, of course :)
Jon Skeet
That's not controversial - that's common sense :)
Brian Rasmussen
I realize that the link between C# and Java is certainly stronger than C++, but if we were drawing an inheritance diagram they'd both claim C++ as parent (arguably grandparent for C++).
duffymo
+1 from me. I very rarely have to remove a sealed modifier (and I make everything sealed by default, unless it is immediately clear that it cannot be sealed).
Andreas Huber
My understanding is that you are saying we should be extra careful when designing object hierarchies, but I don't understand how sealing classes by default would help to achieve this.
Leonardo Herrera
"I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction" how is that logical?? i miss the connection. making methods nonvirtual by default in c++ is a Good Thing (imho) +1
Johannes Schaub - litb
I think the counter-argument could be generalized to: a class that derives from a base class without overriding anything further up in the hierarchy can be done relatively safely, etc.
Chris Smith
I think this is an anti-pattern. Classes without inheritance are just modules. Please don't pretend to know what all future programmers will need to do with your code.
Steven A. Lowe
Inheritance and immutability don't go well together. If I want to know for sure that an object is immutable, I must know that it is not derived from, since a derived type can break that contract.
Jay Bazuzi
Given your reasoning, it's difficult to disagree. However - if I wished to use your class for a purpose which you didn't intend, but through some clever overriding/application of your base methods/properties it will suit my purpose, isn't that *my* prerogative rather than yours?
BenAlabaster
+1 from me too. It's about avoiding implicit assumptions - which always come back to bite you. An explicit statement is always more accurate.
devstuff
@balabaster: If you do that and then I want to make a change, it's very likely to break your code. As a code supplier, I don't want to put customers in the position of having fragile code. (Not that I'm actually a code supplier etc. This is in theory.)
Jon Skeet
I agree that inheritance should be guided, but sealing all your classes by default doesn't guide you; it's a road block, removing inheritance entirely.
Jeremy
Even so, I should understand the risks in deriving from a non-frozen class. Any changes you make in an unsealed class carry the same penalty, so all you're doing by making everything default-sealed is making it harder to use your code in my own way.
Jeff Hubbard
Agreed in principle, although I hated the sealed-by-default behaviour of methods when I was using early C# (at Microsoft, actually) because sometimes I would want to intercept calls to some library class's method, but couldn't just subclass it because they didn't make the methods virtual.
Joe
If an inheriting class changes the behavior of a method, it is wrong. Period. It does not fulfill the substitutability principle. There is no need to make a class sealed; just shoot the offender.
David Rodríguez - dribeas
One problem with having everything sealed is that it kills proper unit testing. Because methods in the .NET framework are sealed, it's almost impossible to test classes that use .NET framework classes like DirectoryEntry (which uses external resources), without writing a wrapper first
Erlend
I agree, and I would expand the scope to say that all programming language constructs should default to the "safest" or "no additional work required" state (not the opposite). Also, there should always be an optional keyword for the default whenever there is a keyword to specify a non-default.
Rob Williams
You cannot mock sealed classes, except if they implement a certain interface which is used by all users of that class instead of the sealed class. (Bye bye folks, I will descend into hell soon, as I dared to downvote Jon Skeet...)
EricSchaefer
I vastly prefer mocking of interfaces instead of classes anyway, so it's never been an issue for me.
Jon Skeet
AOL. Interface based programming is underrated anyways...
EricSchaefer
Why not get rid of defaults altogether and force the developer to make a decision on whether it's sealed or not? The same should go for public vs private.
JoshBerke
@Josh: Yes, that's definitely an interesting idea. There are some options where I don't want to have to be explicit - e.g. "nonvolatile" would be silly. How about "writable" as the opposite of "readonly" for static and instance variables though? Hmm...
Jon Skeet
Strongest argument I've seen for classes NOT to be sealed by default is that it would adversely impact the ecology of software libraries (commercial and internal). Too few people take the time to consider how their classes can be inherited - it's hard to get this right. Most will stick with the language default. Software changes relatively slowly (even when you have the code) and there will be a lag in getting inheritability changed. Finally, will people really spend more time designing for inheritance? Or just blindly add "overrideable" when they find a case where they decide they need it?
LBushkin
@LBushkin: The fact that people don't take time to consider things properly (and that it's hard to get it right) is exactly why the default ought to be the *safe* option. Give people the shotgun *unloaded*, and make them load it themselves if they want to.
Jon Skeet
A: 

My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While..EndWhile" in a language confuses new programmers.

Update - Extra Notes

One common mistake new programmers make with While is that they assume the code will break as soon as the tested condition flags false. So - if the While test flags false halfway through the code, they assume a break out of the While loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loops types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "repeat" than the other way around.

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)

seanyboy
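To make the replication concrete, a small C# sketch (C#'s do/while standing in for Repeat): the flag version the answer describes, and the shorter guarded form Marco van de Voort points out later in the comments. The loop bounds are arbitrary.

using System;

static class LoopDemo
{
    static void Main()
    {
        // Plain while:
        int n = 0;
        while (n < 10) n++;

        // The same loop using only a repeat-style (do/while) loop plus a flag:
        int m = 0;
        bool run = m < 10;          // test once before entering
        do
        {
            if (run)
            {
                m++;
                run = m < 10;       // re-test at the bottom, repeat-style
            }
        } while (run);

        // Or, guarding the repeat with a single if - no flag needed:
        int k = 0;
        if (k < 10)
        {
            do { k++; } while (k < 10);
        }

        Console.WriteLine(n + " " + m + " " + k);   // prints "10 10 10"
    }
}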
What if you're unaware of when a condition is false? And where has Repeat come from? While works on the English basis of "while this condition is true, do this"
Kezzer
You could replace all constructs with goto.
Gamecat
Not only do I like WHILE but I would also borrow Nemerle's UNLESS and put it into C#.
Dmitri Nesteruk
a language designed for mediocre or inexperienced programmers gets only mediocre and inexperienced users.
Javier
I haven't seen Repeat...Until since BBC BASIC! VB now has Do...Loop; Repeat...Until and While...Wend should both be removed. It bugs me, though, when I see Do While Not ... instead of Do Until ...
pipTheGeek
The first question I usually ask when I see a While loop is "Will it break during the loop or after the check?" The reason for this is I've used a language or two before that immediately broke out of the loop when the condition returned false.
Dalin Seivewright
This is nonsense. Neither repeat nor while will break in the middle, so your argument is absurd. Basically, developers need to be instructed in the use of break/exit/goto to exit a loop early. As for testing the condition at the beginning/end, both have their uses.
Cervo
Also do { statements } while (!condition) is the same as do { statements } until (condition) so I don't know what the complaint is.
Cervo
It's not controversial, just wrong :-P
Dour High Arch
Actually, I'm not sure if it's the same or not, but I never use do ... while blocks, so I think perhaps I agree with you. :)
skiphoppy
"One common ... flags false" - How common is this? In what language? Perhaps the answer for those who have this idea when it's false is "RTFM!". This is just a bad solution looking for a problem it can't find.
duncan
A while with a repeat is an "if <not condition> then repeat ... until <condition>", not a while + bool
Marco van de Voort
+281  A: 

Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.

Michael Borgwardt
Nice one. The Java XML DOM library - factories upon factories - is a good example of massive overengineering at the cost of simplicity of use. (There are benefits, of course, but...)
Jon Skeet
Isn't the Java XML DOM library just a transliteration of the JavaScript library (I don't do JavaScript)?
Tom Hawtin - tackline
Even streams in Java are a bit more complicated than they really have to be due to the many decorator patterns.
Brian Rasmussen
Absolutely agreed.
Max
I actually like the Java IO streams, the decorator patterns does make sense there - my biggest problem with it is with a class outside the strict pattern application: FileReader, which is a convenience class lacking the basic feature of allowing you to specify the encoding.
Michael Borgwardt
I kinda agree - knowing anti-patterns is more helpful in its way than knowing DPs
annakata
I kinda agree too, except that I would say it's not design patterns themselves, but their misunderstanding and overuse. A design pattern, to me, is nothing more than an attempt to create a ubiquitous language, or common set of definitions, for things we all use anyway, to streamline communication.
Charles Bretana
I like learning about design patterns in the sense of "this is how someone solved this problem." Sometimes their solutions will inform my design decisions to a small extent. I don't think they should be used as a template from which to write code.
Doug T.
Charles, I think the "language" aspect is simply not working, because the patterns are too abstract to allow everyone learn more than a few, which makes the language pretty useless. I do like Doug's idea of viewing them more like case studies - but then the abstraction is actually harmful.
Michael Borgwardt
Amen. One more piece of damage done by the Smalltalkers, together with "extreme programming"
Nemanja Trifunovic
You should use them whenever you can. Yes, it hurts design, but when you are on your next job interview you will remember those stupid names better and you can say you used them. Win-lose-win. I, for example, can't remember what the Bridge pattern is, so I need to use it more. I bet it's useless, but...
01
"I actually like the Java IO streams" - that means you dont use it or you are masochist. I used to like it too, but then I start using it. It would be cool if they did some factories so you dont have to copy'n'paste.
01
Design Patterns fail not because they are meaningless or far too varied. Design patterns fail because people make the arrogant mistake of equating idioms in their insular little language fiefdoms with grand ontologies that describe and explain the Universe.
dreftymac
Who the hell likes Java IO streams? I never used them; the only thing I know is that every time I try to read a simple fricking text file, I have to browse through the stupid API to figure out which classes I need and which constructors I should use just to read the content into a string.
hasen j
I do use them, and I'm not a masochist. It's neither difficult nor inelegant if you understand the concept. Having the same API for network and file IO is great, and "simple text files" are anything but - too bad 90% of all programmers don't understand that not everything is ASCII/Latin1.
Michael Borgwardt
I disagree. My personal opinion is that no matter what you are developing, either look for existing patterns you can use, or develop some of your own, as it can keep consistency across your apps or even within one app, particularly if it's a large application.
Jeremy
Hm, I think I'll have to disagree with this one, but probably for a controversial reason. ;) I think design patterns help design *on average*, because they guide the people who'd otherwise be likely to end up with lousy design towards something less lousy.
jalf
but really good programmers should know not to rely too much on design patterns for the reasons you state. So I see design patterns more as a help to average programmers than as something that affects (positively or otherwise) a *good* programmer.
jalf
A design pattern is simply a commonly accepted solution to a given problem. Your prejudice is against their perception and use, not the patterns themselves. Would you suggest civil engineers throw out trusses?
Bryan Watts
Agreed. Design patterns are, IMHO, duct tape to fix language deficiencies.
Dan
Disagree (although I accept that there are drawbacks and they are commonly misused... Antipatterns might be even more useful). So +1...
AviD
Read "A Timeless Way of Building" by Christopher Alexander ;) Patterns are a good thing, but people use them to justify many bad things. The GoF book set the industry back 10 years imo.
Gwaredd
What really grates with me is when the fashionable dev uses the pattern then names the class after the pattern, with no clue as to its actual use. E.g. ControlVisitor - right it visits controls, and then what?
Gaz
+100. I've had many co-workers, when asked "how are you going to do X", reply with "Oh, I'm going to use the Visitor pattern" or whatever, as if that was an actual answer to my question.
MusiGenesis
Disagree; I think it's the misuse and misapplication of design patterns for the sake of using them, not the design patterns themselves. If you do something slightly outside of the original intent, name it something new and don't abuse and confuse the existing patterns.
Ryan Riley
Disagree: Patterns are all about communication of intent. The exact implementation detail isn't why you use a pattern; telling maintenance programmers the intent of your thinking in a concise manner is.
Scott Stanchfield
I disagree. Bad designs are bad not because of using patterns, but because of bad design...
Kwang Mark Eleven
I've read a couple of books dealing with design patterns and they all shout the same thing at the beginning of the book: these are guidelines, they are not dogmatic. Use them to your advantage, and when they apply; modify them to suit your needs if you have to. To me, this answer is akin to saying "The entirety of X language sucks because I've seen bad code written in X." It's not the patterns, it's the people who wield the patterns like a giant hammer and see every problem as a nail.
Hooray Im Helping
Design patterns are a "natural fit" sometimes, when the problem itself suggests a particular pattern, or when you know you would have used a similar technique to solve the problem even if you'd never heard of design patterns before. When they are not a natural fit, *don't use them*.
Todd Owen
If design patterns are hurting your design then you're not using them right.
Swanny
Sort of agree - sort of disagree. The problem I have with patterns is not the patterns themselves - it is that they are derived from a few fundamental principles that *very few people* seem to actually understand. Patterns aren't magic. Once you understand the underlying concepts of the OO style that generated the patterns, most of them are as common sense as a for loop. Unfortunately, the people that push them the most tend to not understand them.
kyoryu
Down with MVC! Long live Front-Ahead Design!
DR
+3  A: 

Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad; this option should be removed entirely. Then the programmer will hopefully be too lazy to retype all the code, so he makes a function and reuses it.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.
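
For illustration, a minimal sketch of the classic lazy singleton (all names invented); note how the last line reads exactly like mutation of a global variable:

```java
// A textbook lazy singleton: in practice, a global variable with ceremony.
public final class Config {
    private static Config instance;
    private String databaseUrl = "jdbc:h2:mem:dev";

    private Config() {}  // nobody else may construct it

    public static synchronized Config getInstance() {
        if (instance == null) {
            instance = new Config();
        }
        return instance;
    }

    public String getDatabaseUrl() { return databaseUrl; }
    public void setDatabaseUrl(String url) { databaseUrl = url; }
}

// Anywhere at all in the code base, just like writing to a global:
//   Config.getInstance().setDatabaseUrl("jdbc:postgresql://prod/main");
```
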

martinus
I have noticed a definite inverse relationship between design/coding skill and skill in using a debugger (which is not the same as having debugging skills).
Ferruccio
Rauhotz
Copy/paste is an instant Red Flag in my opinion. If code is duplicated, it should either a) be factored using OO methods; or b) model-driven/generated/dsl-defined.
Dmitri Nesteruk
I agree - all the code you see on Stack Overflow must be untested code, because if it were tested it would have been copied from an IDE, and copying from an IDE should be impossible. :) So please post only untested code on SO!
tuinstoel
@tuinstoel: So maybe it should be "copy but not paste"? :)
Jon Skeet
martinus
There is no way testing can replace the usefulness of debuggers and debugging.
Tim
Singletons look really mental when bound to WPF too (all that x:Static stuff).
Dmitri Nesteruk
Ok, so you remove all debuggers, and all alternate systems for debugging (if the easy way is bad, then the hard ways must be worse, no?). Then in testing you discover a bug. Now what do you do? Cancel the project?
Charles Bretana
@charles, when I discover a bug I try to reproduce the behavior in a unit test. Then I fix it. If you need a debugger, it is just a sign that you need better tests, or to refactor the code so that it is easier to understand.
martinus
Sometimes I have to maintain and extend programs that make extensive use of complex pointer arithmetic. You can pry my debugger from my cold, dead hands. And if any developer mentions "global" in the same room I am, he can consider himself slapped.
Leonardo Herrera
@Jon Skeet, if only copy is possible I can't paste from SO. :)
tuinstoel
Right... Get rid of debuggers - so that you can't see the results of your code until the end, rather than stepping your way through to see exactly WHERE the problem crops up. I'll take debuggers over dozens of "temporary, interim display statements" *ANY* day.
David
Debuggers can be excellent for understanding how current code is working (I generally don't need them much for my own independent code), and cut/paste is part of refactoring.
David Thornley
Without any way to debug it, how can you tell what to change to fix it? Are you prescient? If so, why did you put the bug in there in the first place? "Debugging" and "debuggers" are, by definition, the tools we use to figure out what is causing the bug. Without them, you can't fix any bug.
Charles Bretana
Except perhaps by a random shotgun approach and a LOT of luck (just change something, test again, and repeat until the bug goes away...).
Charles Bretana
And outputting variable values or "I am Here" statements to a text file IS a debugger too!
Charles Bretana
"Debuggers should be forbidden." -- and how do you find bugs that are not yours but come from the library/platform?
niXar
Wow. This is like saying "if a hammer can't do the job, it isn't worth doing." Seriously, how would you track a memory overwrite originating outside of your object with unit tests?
Mark Brittingham
Probably the term "debugger" is just wrong. I have yet to see a tool that removes bugs from (de-bugs) my program.
Simon Lehmann
@simon: `rm` or `del` will remove all bugs. Granted, it also removes the rest of the program, but such is the price for a bugless program :)
Will Mc
IMO, you can only discover bugs with unit testing, not locate them. After you've found a bug with unit testing, you use debugging/debuggers to find where the bug is actually located.
Ikke
Steve Maguire uses the entirety of chapter 4 of "Writing Solid Code" to promote the idea of stepping through new or changed code in a debugger. It's good advice. Debugger *abuse* is a different story. I've seen that too, but wouldn't propose doing away with the tool because some will abuse it.
JeffK
+1: definitely a controversial opinion based on the comments on this post :)
Juliet
Hmmm... When I started writing BASIC on a dumb terminal back in 1979, we didn't have a debugger, nor did I have copy/paste, but that doesn't mean I wrote better code back then.
Kluge
First two seem almost hypocritical...
Pablo Fernandez
Um, and I did use a singleton in code I was working on last night. And I'm not slapping myself. And I might use another singleton sometime in the future too. There are reasons for global state, although darn few are good ones.
David Thornley
Eek - no copy and paste? What happens when I decide I need to move a block of code from one class to another? You're gonna make me retype it all out by hand? No debugger...yeah, I could probably work around that, it would be a pain though. I could probably live without Singletons too.
BenAlabaster
@balabaster that's cut ;-)
martinus
Loren Pechtel
Those are the strangest, most archaic things I've read in a while. Chances of writing bugs are just as great when manually typing as when copy/pasting. There's nothing forcing someone to write good code if they have no debugger, and although a singleton may not be necessary, does that make it bad?
Jeremy
Sounds like someone who doesn't understand debugging. How can my copying and pasting my own code be bad? As far as copying and pasting others, I think you need to test it, understand it, and reduce it to what is necessary for your application before using it in your project.
bruceatk
martinus
Regarding singletons, that may be wrong in languages like C# or Java, but in less strictly OOP languages like JavaScript or Scala, using singletons is okay. In JS every object is a singleton! (and classes are done using prototypes, at least in JS 1.x) And Scala has a singleton type called object.
Alcides
There's nothing wrong with Singletons in themselves. I suspect you're upset with some particular abuse of them, not the concept.
chaos
@martinus, maybe it is for you, but I copy and paste my own code all the time. I've never had a problem having to go back and fix stuff. I've been doing it for almost 30 years. I see no practical reason to change now.
bruceatk
I want to up-vote your Singleton comment and down-vote your Debugger comment... You need the debugger to figure out why that core dump exists in the first place (only trivial code is 100% testable).
Tom
@martinus, it's not that copy paste is bad, but in your example the programmer should use a common function rather than duplicating a chunk of code. That way, if the function has a bug, you fix it in one place. But there are copy/paste scenarios where you wouldn't use a common function, like one-liners.
Jeremy
How can I fix a driver without a debugger... Write a unit test that reproduces... Wait.
Edouard A.
Fundamentalism isn't the way forward!
Seventh Element
I agree with getting rid of copy-paste as long as you can still cut-paste. Cutting and pasting code is essential to refactoring and keeping the code in a clean state.
Sergio Acosta
Are you nuts? <wink> I'll vote you up just because I disagree so strongly (and that makes it controversial - to me anyway). I need those tools. It would be similar to punishing everyone by taxing junk food because some people can't control themselves.
Doug L.
I agree that you shouldn't need a debugger for app code you wrote. But you need one to make sense of corefiles, you need one for driver work, and you damn well need one to make sense of weird, uncommented and undocumented code some other bloke (who has long since left the company) perpetrated. There's not only *creators* out there, but *maintainers* as well, and the debugger is our best friend.
DevSolar
+251  A: 

Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.
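
A hedged sketch of what "a method wherever you can name one" means in practice (Java; the domain names are invented):

```java
// Before: one method doing three nameable things.
static double invoiceTotal(double[] prices, double taxRate, double discount) {
    double subtotal = 0;
    for (double price : prices) {
        subtotal += price;
    }
    double discounted = subtotal - (subtotal * discount);
    return discounted + (discounted * taxRate);
}

// After: each nameable step gets a name.
static double invoiceTotalRefactored(double[] prices, double taxRate, double discount) {
    double discounted = applyDiscount(subtotalOf(prices), discount);
    return discounted + taxOn(discounted, taxRate);
}

static double subtotalOf(double[] prices) {
    double subtotal = 0;
    for (double price : prices) {
        subtotal += price;
    }
    return subtotal;
}

static double applyDiscount(double amount, double discount) {
    return amount - (amount * discount);
}

static double taxOn(double amount, double taxRate) {
    return amount * taxRate;
}
```
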

Matt Secoske
Agree, but not too controversial?
Ed Guiness
perhaps not... hopefully not... unfortunately I usually see long methods, so practice vs preaching?
Matt Secoske
I had an office-mate who practiced this, and his code used to drive me nuts. Nothing ever got done where I expected it: it was its own form of "spaghetti code." Also, research has shown that longer methods do not produce more bugs. With that said, each method should do one task: longer isn't better.
Mark Brittingham
completely agree... I break methods on logical boundaries, usually where I can say "this block of code does THIS" and name the method accordingly. Sometimes I have one line in method, sometimes 20... just depends on what it /does/.
Matt Secoske
I think long methods are a sign of a cluttered mind and a lazy programmer. Generally speaking I think large methods actually are comprised of smaller algorithms which should each actually be contained in their own methods, to enhance organization of code and readability.
Jeremy
Ever had to debug a 500-line function? :-)
billybob
@billybob: how did you know? :-)
Matt Secoske
AKA: your method should do only one thing, and one thing only.
thenonhacker
indeed... SRP to the rescue!
Matt Secoske
I'm new to programming and have leaned towards the single responsibility principle, but now I'm not so sure I agree; searching for each of the functions and scrolling up and down the page is the worst part of debugging!
xoxo
Sounds like Uncle Bob! My $0.02: not all methods can be small or contain many small methods to do complex tasks, but I agree, you should try.
asp316
I think this is one of the fundamental problems with most code I've maintained. Splitting up functions encourages code reuse tremendously.
brad
Partially agreed. I feel that if you write only small methods, you may end up with way too many methods that do -almost- the same thing but are different enough to be kept separate and not put into generic methods. There's a point where having one long function is better than having 100 small functions.
Dalin Seivewright
Not at all controversial, but I agree. Small methods == sanity.
Scott A. Lawrence
When you've got a sequential set of tasks in a function, break them up into paragraphs by wrapping them in scopes { }. This at least maintains the order of the function.
Scott Langham
Too many little functions get tricky to navigate, and if refactored out of the one place they're called, they lose their context and become harder for a human to parse. A one-line comment at the start of a scope can say what its purpose is.
Scott Langham
@Scott Langham - everything loses its context at some point. I.e.: a zip(string) function inside an Address class has an entirely different meaning than a zip(string) function inside a Compress class. That does NOT mean you shouldn't break out a contextually grouped piece of code from another method.
Matt Secoske
I prefer methods to be less than 10 lines long. The smaller the better. Each method does only one thing, on one level of abstraction, and its name describes what it does.
Esko Luontola
Not sure about this one. I've seen long, well-documented methods that really do a lot in a clean way. I'd rather just follow line-by-line than jump all over the place trying to understand why the developer made 30 methods to do a task with a single path. However, a method should never repeat itself, that's where a loop or a private method should make an appearance.
User1
I think I could suggest a more controversial version of this rule: people who write long methods probably started out by writing short methods, then added lines of code incrementally until the code was long. If people do it that way, (a) they leave a distinct code smell behind, and (b) the guy who picks up your code is going to hate you. Not that you care.
Warren P
Nothing controversial about that, it's a fact. Small methods have only advantages.
Exa
Using simple, internal functions to break up complex algorithms is a good thing. However, a class should expose a minimal set of methods so as not to overwhelm its consumers.
burkestar
+413  A: 

Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you're using threads (though that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but against making a getter/setter (or property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of a getter/setter pair for your fields effectively goes against the so-called encapsulation you are trying to achieve.

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?
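
To make the complaint concrete, here is a small sketch (names invented): the first class is "encapsulated" in name only, while the second actually hides its field behind behavior:

```java
// Accessors generated for every field: a public field with extra steps.
class Account {
    private double balance;
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}
// Callers end up doing the object's thinking for it:
//   account.setBalance(account.getBalance() - amount);

// Encapsulation proper: expose behavior, keep the representation private.
class GuardedAccount {
    private double balance;

    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        balance -= amount;
    }

    public double balance() { return balance; }  // read-only view
}
```
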

Pablo Fernandez
I wouldn't say it's encapsulation on its own - I'd say it's a first step on the road towards encapsulation.
Jon Skeet
You'll find many people (and even books) claiming that doing just that is encapsulation
Pablo Fernandez
Oh agreed. I'm just saying that it's not wrong to require properties instead of public fields - it's just wrong to leave it there :)
Jon Skeet
This could be restated as "mindless getter/setters." I've found that most of the time you only need a getter.
Leonardo Herrera
The advantage of a setter over a public variable is that you can put a hook into a setter. Other than that, mindless getter/setters are no better than public variables.
David Thornley
getters and setters define the interface of your class. It allows you to add logic to the get/set later on, if required. Therefore preferable to public fields.
Richard Ev
I'd vote this one up twice if I could. *IF* there is logic in a setter then it makes sense to use it. Otherwise, there is no point: a public property is equivalent to a public variable. Also, Leonardo's comment is good: many times you only need the getter.
Mark Brittingham
I've tried this, based on the same opinion. I think you're incorrect because among the 500 setters in your code you'll find 5 that need some kind of initialization code, and altering only those makes the code inconsistent, which is a different kind of complication. Agree that they're annoying, though.
Steve B.
In .NET, properties are handled differently than fields in certain situations, such as databinding, so you are kind of forced down the public property road, like it or not.
Daniel Auger
@Richard: Not every field in your class has to be a part of the public interface. I've never had logic in my getters or setters; I don't see that as a good practice at all, but that's just me.
Pablo Fernandez
@Steve I don't think they are needed at all. Also, I don't place logic in my getters/setters.
Pablo Fernandez
@Daniel: Agreed. Also, in Java you have to adhere to the JavaBean spec (something pretty similar) in order to let 3rd party code (like Hibernate) access your fields. In this case, it's a necessary evil, as you said.
Pablo Fernandez
I think the language should just automatically make fields private and create overridable getters and setters, optimized so that calling the accessors performs no worse than accessing the fields directly.
skiphoppy
I like properties with getters/setters. A public field can't restrict you from setting it, where a property can. You can also apply logic in your getter/setter. People say use either where it makes sense, but I say keep consistent and use one or the other, so I choose getters/setters everywhere. Flexibility!
Jeremy
I think getters/setters are completely anti-OO. Why should I be able to poke at an object's internals? (Sure, getters/setters provide some control, but you're still trying to access the internals directly or close to directly.) The OO way (IMHO) would be to ask the object itself to perform the action.
Dan
@Dan: That's pretty close to the truth. Some getters/setters are needed but we usually do "getA", "getB", "getC" from a domain object, and then perform the calculation elsewhere (generally in the service layer), that is not OO at all.
Pablo Fernandez
This is why our Russian architects usually declare public variables instead of using Getters and Setters!
thenonhacker
I prefer public variables, which can then be turned into properties with no changes to the caller, over getters/setters. Even still, they should be avoided if at all possible. Real OO would encapsulate away the need to get/set.
Dan
@Dan: one reason I've heard is that turning fields into properties forces the client code to be recompiled.
Jimmy
@Dan: in some languages you not only need to recompile the client to use a getter/setter but you also need to change the syntax. And if getters/setters are "not OO", how is directly modifying the object better than asking the object to modify itself?
Mr. Shiny and New
At least if you're adding "mindless" getters and setters, you have a structure that encourages you to do logical things - whether you've thought of them yet or not. Sometimes I'm too lazy to turn a field into a property, and it's enough to discourage me from bothering with setter logic.
JasonFruit
Getters and Setters, at least in .NET, are more usable from a databinding perspective. As far as I'm aware most .NET databinding frameworks don't work with public fields.
Erik Forbes
I hope you don't write library or API code :-) It's easy for you to turn public variables into methods later if you own the code. If other people use your stuff and rely on its signature, not so much.
LKM
I love the solution that Python takes. All fields are public, but you can add the getter and setter later if you want to...
Bartosz Radaczyński
While getters/setters can (and usually do) start out using the same data type as the underlying data, the fact that they are there allows the developer to change the underlying data type at some later time without affecting the API in any way. In short, they hide the actual implementation. ...
RobH
... And isn't hiding the implementation what encapsulation is all about?
RobH
I agree, but would say it stronger: you absolutely should use public members. Alas, as other folks point out, the JavaBean spec makes this difficult in Java -- yet another reason not to use Java. Also, as someone else mentioned, Python has the best solution for this.
Using public members exposes the implementation, which is the total opposite of encapsulation, since it makes future changes without affecting dependent code difficult or impossible. I agree with others that the object should do as much of its own member data access as possible, but ...
RobH
... where other objects need to access an object's properties (getting/setting), you cannot allow them to do it directly (i.e., by exposing public data members) and still call it OOP.
RobH
Pretty late, I know. There are some languages where you do not have a choice: either you provide accessors or the stuff is not accessible. That's the baseline in Smalltalk...
Friedrich
This is plain and simply wrong. The idea behind encapsulation is that it provides the ability for the implementation of a class to evolve without affecting client code. That is precisely why you would want to hide a field behind a property.
Seventh Element
I'd give this two "up" votes if I could. It gives me screaming fits when I see people doing this - or even, as I saw recently, have their IDE do this *AUTOMATICALLY* for every data member...
DevSolar
I'm currently trying to move away from getters/setters altogether for two reasons: 1) immutable types can use public readonly fields and 2) most UI frameworks that require bindable properties provide a type for that (e.g. DependencyProperty in WPF).
Ryan Riley
Encapsulation is more about data *protection* than data *hiding*. Using get/set with private fields protects them. If you're doing simulation work, data hiding is a great idea, but in non-simulation programming, data protection is key.
Scott Stanchfield
I completely disagree. There have been countless times when something like getNumber() started as {return number_;} but later turned into an immense calculation. Public members destroy encapsulation and make changing the implementation impossible.
rlbond
This is not about never using getters, and certainly NOT ABOUT USING PUBLIC FIELDS. The statement is clear: many people think it is right to make a getter and a setter for every field in every class, and this does break encapsulation.
Pablo Fernandez
@Pablo: I favor getters/setters for every publicly available field -- even if there's no logic in the getter or setter and it's just a straight copy -- and I'll tell you why:
Randolpho
It's not about encapsulation; it's about preparing for the future need for encapsulation. It's always 100% easier to add logic to a getter/setter that's already there than it is to promote a field to a property with a getter and setter. Sure, it's simple if you control all of the code.
Randolpho
But what if you don't? If your class is exposed externally and you promote a field to a property, you just broke the contract for that class, and everything that depends on it must be recompiled. But if you already had a property and you modify the logic of the setter, only your code must change.
Randolpho
That's why it's better to always use properties with getters/setters than to just use fields. There's little or no performance benefit to just using a field; simple getters and setters get inlined by the JIT anyway. So there's no harm in doing it now, and you get a huge potential benefit later on.
Randolpho
@Randolpho, read the whole answer. No one favours public fields. I might rewrite it soon since it's getting so many people confused. Exactly your last comment is the reason why we have getter/setter overuse
Pablo Fernandez
I also like hash oriented programming!
nothingmuch
@Mr. Shiny and New: Its not. I just mean that if I absolutely MUST poke around in the internals, then at least give me a language with properties. I'm still against using properties though, as, just like getters/setters, I do not think that a properly designed OO system needs them.
Dan
One benefit of getters/setters is the added abstraction allows you to do more than just get/set a value, but there are better ways to tackle this so that it doesn't get in the way 90% of the time when you just want encapsulation.
Wahnfrieden
Also, python does just fine leaving absolutely everything public.
Wahnfrieden
+1. I *hate* getters and setters when they are only `getX() { return x;}; setX(_x) { x = _x }`. I can't see why public variables aren't the same.
LiraNuna
@Leonardo, interesting, I very rarely write just a getter, but quite often write just a setter
finnw
This is Allen Holub's classic rant: http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
Grandpa
Richard E has it right - there are times you really want a class with all its members public, but the *ability* to override those members with a method call if needed. promiscuous getters and setters are great for that.
Martin DeMello
@Grandpa: <sarcasm>Thanks for bringing out that old link.</sarcasm> The conclusion that Allen Holub's rant always leads me to is that he thinks objects should know how to render themselves. Anyone want to write a webapp where the objects know how to render themselves for CRUD? I didn't think so.
byamabe
Getters and setters are the way to specify a field in an interface, and you should -- ta-dah -- code to interfaces.
Thorbjørn Ravn Andersen
@Ravn The idea of interfaces is not to expose the implementation. So why would you make a getter and setter for each of your fields in an interface?
Pablo Fernandez
I strongly feel the opposite. Public fields are flat out wrong. Public properties are fine, as long as they represent the public contract of the class as intended. Later on, when the class evolves, public properties give you a significantly better opportunity to refactor the internals while keeping the public interface intact. It doesn't matter if a property just sets/gets a field without any other logic. That's not the point. The point is that later on that field may need to change, no longer exist, or any other possibility. If it's merely a field, you're screwed.
Matt Greer
@Matt: it's OK to disagree (this is supposed to be controversial), but please read the whole answer and not only the title. I wrote "anyone who uses public fields deserves jail time" in bold. You are missing the point.
Pablo Fernandez
I want a language that lets me directly declare a member variable as *publicly read-only* but *privately mutable*.
Loadmaster
@Loadmaster Ruby has *attr_reader*; you should give it a try.
Pablo Fernandez
Man, this is controversial! +1
Vikrant Chaudhary
@Loadmaster: C# 2.0 onward -- `public int Value { get; private set; }` like that?
Rei Miyasaka
@Rei: Did not know that. It would make for an interesting trivia (or interview) question. But it's still not quite what I'm looking for, since the getter is still a method.
Loadmaster
@Loadmaster Why's it bad that it's a method? The stack operations are optimized out before runtime, and it's entirely declarative, so for all intents and purposes, it's a field and not a property/getter-setter. Never thought of this as an interview question; I put this to practical use every day!
Rei Miyasaka
http://www.idinews.com/quasiClass.pdf
sbi
+178  A: 

I also think there's nothing wrong with having binaries in source control... if there is a good reason for it. If I have an assembly I don't have the source for, and which might not necessarily be in the same place on each dev's machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.

Steven Robbins
Really? I've never encountered that. I mean, where are you supposed to keep all the third party binaries then? I know, we should develop everything in-house! This way, we will never have to store third party binaries!
DrJokepu
Normally the anti-binary brigade don't have an answer for this, or they just say something along the lines of "just reference it in <blah> directory and make sure the devs have it there" :)
Steven Robbins
I think it's more common to have "no generated binaries" - i.e. build X should build its in-house dependencies, rather than relying on the results of a previous build being checked in. There are pros and cons here.
Jon Skeet
Sure, I agree that generated binaries are pretty much a nono, and I think that's where people get the seed of the idea from that unfortunately mutates into "no binaries in source control".
Steven Robbins
Funny, I've always felt against source-controlling generated binaries, but I usually get overruled. It hasn't killed anybody yet, that I know of.
Mike Dunlavey
I check in generated binaries that are tricky to compile or require a compiler that might not be on everybody's machine (at $1000 per seat, we are not buying copies for devs who don't need it).
Joshua
I agree with @Jon and @Joshua. Especially important for big teams that share dependencies. And, for 3rd-party stuff: also check-in a document containing the URL, license key, contact info, etc. so that future team members can upgrade it.
devstuff
This idea probably started with people using Visual SourceSafe, which would probably barf fairly quickly if you added DLLs.
Martin Brown
my reason for forbidding generated binaries is that you're never sure exactly what they're built from. If someone does an svn update, then changes some source, rebuilds, then checks in the binaries without the changed source, you can't debug/reproduce it. Third party libs are fine to check in.
dj_segfault
Another good solid idea which got corrupted through thoughtless application. When people spout rules without being able to explain the justification you know it's a bad time for everyone.
duncan
Sure! There's something to be said for one place to find all the things you need. If a project needs these dependencies to build or run, why would you not put it all in one place for a developer to find?
Jeremy
Checking in binaries is bad because: it does not scale (spend an hour checking out the source tree, or fifteen minutes committing a single changed file), you can't diff, you don't know where it came from, and there are better places to put binaries (see Ant + Ivy).
Rob Williams
@Martin Brown, works fine for me. We have a very large VS solution in progress here, and with 15 or 16 projects under the one solution, sometimes it's just easier to check a benchmarked DLL into VSS for the others, rather than have everyone spend hours compiling all the projects in the solution.
Pat
To anyone saying checking in binaries is bad because people might check in the binary without checking in the code, it doesn't help me if I can get neither the current binary nor the current code.
Jimmy
echoing dj_segfault, I've been at companies where the binaries that were checked in were actually PATCHED and checked in, and there was pretty much no way to tell what the code was actually doing without decompiling it.
Will Sargent
Agreed with those that exclude generated binaries. I have bin and obj in my ignore list for Tortoise myself (along with .user and others)...
Tracker1
@Pat, I've done things like that, where I have a "Releases" folder, where I will put builds meant for distribution specifically.
Tracker1
I think the converse advice comes from merged binaries being corrupted, eg if two people edit the same image and commit and merge without thinking, you end up losing the image. (Of course you just get it back from an earlier revision but it can be a hassle.)
DisgruntledGoat
A nice thing about the .NET platform is that 3rd party vendors usually ship .pdb and .xml files along with their DLLs. I suppose the same practice/possibility exists in Java and other technologies?
m1k4
Checking in third-party libraries and images is fine, but I draw the line there. The rest of the versioned files should be source code (in text form).
Loadmaster
Binaries that are checked in are acceptable if: (a) they are resource or image files that are part of a build rather than a build product, and (b) they are not archives (zips); (c) anything where you need to read and review diffs, or do merges, should not be stored in a binary form where any alternative exists whatsoever. Delphi form files (.dfms), for example, can be stored as binary or text. Changing the IDE to output text DFMs results in a "diff" that is human readable, and that is another great reason to avoid binaries.
Warren P
+683  A: 

Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.
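
A hedged before-and-after fragment (Java; all names invented) of the duplication being described, and of the refactoring alternative:

```java
// The style being criticised: each comment restates the line below it.
// add one to invoiceTotal
invoiceTotal = invoiceTotal + 1;
// check if the customer qualifies for a discount
if (customer.orderCount() > 10 && customer.balanceOwed() == 0) {
    applyDiscount(customer);
}

// The alternative: let a well-named method carry the meaning, no comment needed.
if (qualifiesForDiscount(customer)) {
    applyDiscount(customer);
}

boolean qualifiesForDiscount(Customer customer) {
    return customer.orderCount() > 10 && customer.balanceOwed() == 0;
}
```
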

Ed Guiness
I'm all for comments that describe methods, parameters or particularly complex chunks of code, but comments like "loop through the list" are just pointless. I seem to remember back in the mists of time being taught that for every line of code I wrote I should write 2 lines of comments :S
Steven Robbins
Amen to that. I still shiver when I remember that Perl script I received 10 years ago, where *every* *single* *line* was preceded by a comment describing (often wrongly) what the next one did: `# Increase $a by 1` followed by `$a = $a + 1;`
niXar
The code should tell you how...the comments should tell you why...
Richard Ev
Sometimes people go truly overboard with comments - often to cover up the weaknesses of their (deficient) algorithms. What I personally hate is *lack* of good documentation, especially in complex code that warrants it.
Dmitri Nesteruk
I use comments in code quite sparingly, and always in spots where the intent of the code isn't as clear as it should be. I also use comments to document public library methods so that instead of looking at the code the user can read a quick synopsis of what it should do and return.
thaBadDawg
This seems to be people misunderstanding the differences between school and work. Teachers want pupils to explain what they were trying to do so they can correct the code to match the intent. Once one is writing code that will be read by peers the purpose and content of comments is different.
duncan
If you can't understand my code without comments, there's something wrong with my code. Adding comments may mitigate the problem, but doesn't fix it.
Jay Bazuzi
The book "Refactoring" (by Martin Fowler) identifies comments as one of the "code smells": if the code needs comments, it isn't clear enough and needs to be refactored.
ShreevatsaR
Amen, brother. I think that if your code needs comments you're doing it wrong.
Sara Chipps
The way I approach commenting is that you should comment what you want to achieve, before you write the code. Code does not always illustrate what the programmer intended to do, but the comment can, which makes it easier on a maintenance programmer, particularly if he's trying to fix a bug.
Jeremy
My university had a lecturer who handed out an assignment and let everyone know you'd lose marks for not commenting every function, except the obvious ones, for which you'd lose marks for commenting. Which actually made you think a little more... He was the only one who did that, btw.
billybob
I once worked on a 300 line function that was 50% comments, of which every comment described the line of code like "increments i, if x is true then". It's like the programmer was told he needed to comment every line, so he did.
Cameron MacFarland
@Jeremy: what if you replaced "comment" with "unit test"? It has the same function + you can easily verify that it holds true.
Jay Bazuzi
@Jay Bazuzi, I agree that the unit test verifies that it holds true, but the unit test doesn't show a maintenance programmer what's the code SHOULD do, and you can unit test until you're blue in the face but we all know that every app still has bugs, so unit tests aren't perfect either.
Jeremy
I have actually used an Emacs macro that made all the comments invisible.
Hemal Pandya
Simple rule I use when commenting: Don't comment *what* you did, comment *why* you did it. I can see what you did; the question is typically why in the world you would want do it that way (and there are often non-obvious reasons)
LKM
+1 for comments describing "why" instead of "what"; I can figure out what if I know the language and API.
cliff.meyers
If you're using the ubiquitous language of your business (http://domaindrivendesign.org/discussion/messageboardarchive/UbiquitousLanguage.html), you don't need to comment the "why," because it will be self-evident. If it isn't, then you need to reconsider your design.
Michael Meadows
using the word 'pernicious' is sufficient cause for an upvote. (being right is another sufficient cause)
Leon Bambrick
A comment is an apology. http://butunclebob.com/ArticleS.TimOttinger.ApologizeIncode
Esko Luontola
If i could, i'd +2 !
Benoît
A comment is an apology; sometimes an apology IS NEEDED, but try to avoid having to comment if possible.
Ian Ringrose
Not that it's all that applicable any more, but perhaps some of the Comments Are Good mantras stem from the days (for those who were there) when real coding was done in assembler ;-) Sometimes, it wasn't clear you were incrementing a counter :-D
DCookie
Well I'm going to be the dissenting voice. I like 'what' style comments *at a high level* because I can see what code is supposed to do without needing to flip back and forth between various functions and unit tests or mentally separate the descriptive code from the utility code.
Whatsit
@ShreevatsaR: you are badly misquoting (no offense). I have Martin Fowler's "Refactoring" open above my keyboard right now. Let me quote from the "bad smells" chapter. "...comments aren't a bad smell: indeed they are a sweet smell... [but] they are often used as a deodorant. It's surprising how often you look at thickly commented code and notice that the comments are there because the comments are bad."
MarkJ
I was taught to think of a comment as saying sorry to whomever is reading your code. You're apologising that the code could not be clear enough for easy reading, and thus placing a comment in to explain yourself. There is no other reason to comment in code.
Robert Massaioli
Yes! Generally, needing to comment means the code is badly written; rather than write comments, reduce the size of your functions.
Jacob
I think in a complicated algorithm, comments saying things like "If we get here then either X or Y" help a lot with understanding how the code works; this is true however good the code is. Yes, if the code is good then someone reading it can work things out for themselves, but not instantly if it's a complicated algorithm. I also think you shouldn't need to read the code to work out how to use a function. Ideally just the name of the function and its arguments should be enough, but realistically that will often not be true, and a comment can be very useful.
Mark Baker
// next line exits the function and returns True.
nilamo
return True; // and never False
nilamo
Literate Programming (Knuth) comes to mind: both the code and the comments are important, and literate programming gives you a way to treat them equally. (Some tools and assembly required.) If a man cannot treat two wives equally, he should have only one.
Don
@Shhnap -- let's say you're working with a large team on frontend website code though. Or working with third-party APIs. In both these cases you eventually run into browser bugs or quirks or issues in someone else's code. Comments are in this case needed to say "sorry about this other person's code". Or -- "sorry that I didn't take all day to figure out a beautiful elegant workaround for this CSS issue" when there is a well-known hack that works perfectly well to address the problem, assuming whoever leaves the hack adds a brief comment to explain it?
Ben
@Richard Ev, I wish I could vote your comment up 10 times.
Renesis
@Richard Ev: And the VCS should tell you when, by whom, and what the code is... all making up the perfect news story of who, what, when, where, why, and how when someone asks you about the code.
Zak
@MarkJ: Thanks! I was quoting from (faulty) memory; apologies for misquoting. Looks like I remembered only the vague idea… [I had missed noticing your comment for some reason.]
ShreevatsaR
Yay, I made the upvote to 666. :)
Albert
+322  A: 

The use of Hungarian notation should be punished with death.

That should be controversial enough ;)
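
For the record, a small sketch of the Systems Hungarian being objected to, next to plain intention-revealing names (Java; names invented):

```java
class NamingSketch {
    // Systems Hungarian: the prefix restates what the compiler already knows.
    int    iCount;
    String strFirstName;
    double dblTotal;

    // Plain names spend the same characters on meaning instead.
    int    retryCount;
    String firstName;
    double orderTotal;
}
```
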

Marc
Nope, not controversial enough. Let them rewrite the complete works of Shakespeare in Hungarian notation: a verb prefixed with v, a noun prefixed with n, etc.
Gamecat
Eh, I dunno, I like it for some objects, like textboxes: txtFirstName, etc.
Shawn Simon
http://www.joelonsoftware.com/articles/Wrong.html
Ikke
OK, that's controversial. Sure HN can be abused, but hating it is one of those judgemental attitudes that, IMHO, come from ignorant profs and bloggers who sound good.
Mike Dunlavey
My understanding is that the original intent was more along the lines of military naming (e.g. boot-parade, boot-combat), which has value in some cases, and it got corrupted through usage into something else of lesser value. So it depends what you mean when you use the term, as in so many things.
duncan
my boss would love you. I on the other hand ... :-D
J.J.
Very controversial! My dev team argues about that too, but I prefer Hungarian notation. I can tell a variable's data type and scope just by looking at the code. I actually think non-Hungarian coding is sloppy, and the only reason for not doing it is laziness, no matter what your actual argument is :)
Jeremy
I simply disagree. You clearly can go overboard with it, but in C, for example, the lack of a preceding "p" on a pointer should in and of itself be punishable by death.
Tall Jeff
+1, although with reservations: IMHO full blown Hungarian obscures code readability but use of some basic rules - such as p for pointers, does quite the reverse. It's a question of balance
Cruachan
Systems Hungarian notation is the devil; Application Hungarian notation is a really neat solution. In Apps HN the name denotes not the type but information about the contents: screen_x, paper_x, document_x denote coordinates referring to the screen, paper, and document, all of which are ints.
David Rodríguez - dribeas
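
A sketch of that distinction (Java; names and units invented): both fields are plain ints, so only the Apps-Hungarian-style names reveal the mix-up:

```java
class LayoutSketch {
    static final int MARGIN_MM = 10;

    // Apps Hungarian in spirit: the prefix encodes meaning, not type.
    int screenX;  // x in screen coordinates (pixels)
    int paperX;   // x in paper coordinates (millimetres)

    void suspicious() {
        // A unit mix-up now reads wrongly at a glance:
        screenX = paperX + MARGIN_MM;  // "paper into screen" - visibly off
    }
}
```
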
I like it when an interface starts with I. Like IComparable, IEnumerable, IEquality...
tuinstoel
@"every body in favour of HN" - wait till you guys discover implicitly typed variables!
SDX2000
+1 to HN prefixes identifying nontrivial types (e.g. a textbox widget) or special classes of variables (pointers, interfaces, etc.); -Infinity to all other HN forms.
SDX2000
I never understood the need for HN in strongly typed languages like C. One, you should be able to remember what you are using a particular variable for. Two, using HN you lose some abstraction. Do you really need to know the type of 'ItemCost' to know whether it can be used in the 'CalcItemTax' function?
Kronikarz
You can have my 'm_' prefixes on my member variables when you pry them from my cold, dead fingers!
Jim In Texas
Joel Spolsky wrote a nice post referring to HN: http://www.joelonsoftware.com/articles/Wrong.html
sharkin
That IS a good one! But, I think there are FAR worse offenses than Hungarian...
LarryF
Dumb da da dumb dumb dumb
Seventh Element
Yeah, but I don't write code for me; I write it for all the other people who *aren't* me. Personally, I prefer HN but don't currently use it, because it's not in the frameworks. It's a difficult one.
@Gamecat: I love you.
Matthew Scharley
What would you do when you have more than 25 controls in a GUI and you have to refer to each one of them often? Even if you are the one defining their names, for sure you will never remember each one of them (anyone claiming the opposite lies). The only answer is to name them txtControlName1, txtControlName2, txtControlName3, ... and autocompletion will save you.
YordanGeorgiev
Leves felét egy kutya. :)
Jan Remunda
My opinion is that Hungarian notation is easily misapplied, and it is easy to change the code and not update the names of the affected variables, resulting in misleading variable names.
Kwang Mark Eleven
HN describes how to name local variables and private fields, not public classes and methods. The I in IEnumerable isn't HN.
Justice
@Justice, Read "Agile Principles, Patterns and Practices in C# (Martin)". The authors state that the I of IEnumerable is HN and that you shouldn't use it.
tuinstoel
I was taught to use HN from my first day in the industry, when I was programming Access databases. It was a revelation for me, because I could easily tell by looking at a variable what was its scope and its type. Yes, it involves a bit more typing, but if it makes the code more readable and intuitive, that's a small price to pay. In these days of intellisense, the extra typing argument is superfluous anyway.
Billious
@Billious: In these days of Intellisense, pervasive HN is superfluous. You can just mouse over a variable to see its type and scope. I do use it for form controls though.
jnylen
Hungarian notation is horrible. How about just naming variables reasonably?
Joren
Actually, back in the older days of C, with the less capable IDEs, and especially for Windows, the "prefix notation" made perfect sense. Since an integer could have been a number, a pointer, a handle, a pointer to a handle, a pointer to a pointer, etc., this was actually helpful. In fact, the original "Hungarian" concept came from a completely different context. What is unforgivable is how it was forced on a language like Visual Basic in some "Emperor's Clothes" mentality, and thank heavens the community finally woke up.
Swanny
With C#, I prefix private fields with "_" and, yes, interface names with I. This is common practice and friendly on the eyes. Using something like `m_count` is acceptable to some, but to me, it's hideous. Hungarian Notation, however, is just a plague. I first came across HN during my VBA programming days. I bought into the "wisdom" somewhat, but I decided to move all the visual garbage to the *end* of the useful part of the variable name instead of the beginning, hence I would write `countInt` instead of `intCount`. Same thing with visual controls today. What's wrong with `NameTextBox`?
DanM
I hate hungarian notation with a passion. I make an exception for prefixing pointers with p (or p_, whatever). There is a very good case that a pointer is a variable of a different _kind_.
Kelly French
I wish I could up vote this a second time ;)
ceretullis
I think it varies: depends what language you are working in, whether the workplace has a consistent set of Hungarian notation conventions, etc. Works fine if it really begins to approach a situation where every developer on the team would give the same name to a variable, but it's a disaster if each developer has their own idiosyncratic variation of Hungarian notation.
Joe Mabel
Has anyone written a SWIG-like utility to automatically de-Hungarian code?
Mike DeSimone
Prefixes and rules can be handy, as can naming conventions. They can also suck. Delphi programmers and MFC C++ programmers often at least note data fields with a prefix, and classes with a prefix. Cocoa developers are used to NS prefixes on the provided AppKit/Cocoa classes, and write their own class names with their own prefixes (BNR for Big Nerd Ranch etc). I think that "classes and private/protected fields need a prefix" is enough prefixes for me. And I like english suffixes (Form or Control or Window, for properties which are meant to point to a Form, a Control, a Window, etc)
Warren P
There are three places I like a hungarian-style decoration: 1) to denote a member variable, 2) to denote a pointer, 3) to denote a GUI widget. The rest I could take or leave.
Mike Clark
+444  A: 

1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and write quickly. You have massive APIs where half of those are just to integrate the work of the other APIs, interfaces that are impossible to recycle, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad, it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even in most cases a few home-grown scripts and a simple web frontend is all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.
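
As a gloss on the loop-invariant remark in (3), here is what applying one to a trivial loop looks like (a minimal Java sketch):

```java
// Summing an array, with the invariant that justifies the loop written out.
static int sum(int[] a) {
    int total = 0;
    // Invariant: at the top of each iteration, total == a[0] + ... + a[i-1].
    for (int i = 0; i < a.length; i++) {
        total += a[i];  // re-establishes the invariant for i+1
    }
    // At exit i == a.length, so total == a[0] + ... + a[a.length-1].
    return total;
}
```
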

Daishiman
1) I used to work for a big multinational that rhymed with Dunisys. Anyway we used to use the word "Enterprisy" to mean any solution that wasn't complex enough. Like, "asking the user for a password isn't enterprisy enough".
Cameron MacFarland
2) Back in 2002 I once saw a job ad asking for 2-3 years of C# experience. This basically restricted the job to those who worked on the original C# design team.
Cameron MacFarland
Regarding (3), you sound like my echo.
Mike Dunlavey
Man, make these separate so I can provide yet another vote up for #2!
skiphoppy
I believe in abstraction layers. I think the application should be abstracted from the OS in most scenarios. Finally .NET/Java decouple the app from the OS! Besides, when a programmer can concentrate on the usability of an app over low-level code, you always get a nicer application.
Jeremy
+1 very nice opinion
0xA3
Jeremy, .NET only does that decoupling if you take care to use only the decoupled part.
Svante
Number 1 deserves more than one vote... well said!
Alex. S.
#3: The more CS degrees were like you wish they were, the more I'd regret not having one. As it is, I only regret it a little.
JasonFruit
-1 for #1, +1 for #2, +1 for #3. Only having 1 vote is not a problem here.
MusiGenesis
I HATE #2, more so when they say something like 20 years of Python, which is only 18 years old!!!
Unkwntech
Unnecessary complexity == complications
mike g
@Cameron: Heh, I remember seeing requirements for 8+ years of (insert web technology) a lot in the mid 90's.
Tracker1
+1 for #2 in particular (as one with >20 yrs experience): what is needed are problem solvers, which has a bearing on #3 as well.
DCookie
wow :) I quoted your first point in here: http://stackoverflow.com/questions/781191/making-life-better-by-not-using-java-web-frameworks/785118#785118 thanks!
dankoliver
You totally said it ! :D
majkinetor
As far as #1, I think there is a tradeoff. Enterprise frameworks add a lot of structure that helps mediocre developers think about the problem in a more organized way. As a PHP developer, I've seen a lot of code that was the result of a developer just thinking of a page as a long list of commands to be executed sequentially. Applications built in this manner are incredibly difficult to debug and keep working, let alone build on top of.
notJim
What each business wants is often completely different from what the next one wants; fitting frameworks just doesn't work, you always have to do workarounds. Keep it simple and custom to the users.
PeteT
Regarding (3) - I agree that CS courses should be as you say. But I don't think CS courses are useful training for a career writing software. Hmm - maybe I should add that as my own controversial opinion!
Tom Anderson
If I may quote Dijkstra here: "Computer science is no more about computers than astronomy is about telescopes." This also applies to coding. Having a CS degree can only be a benefit, but the common misconception is that computer scientist == programmer. If you have no idea about code complexity (O(n) notation) then your code may be bad (inefficient). So yep, I disagree: having a CS degree does make you a better coder, because you understand the theoretical background.
Tom
...and here, on Stack Overflow, you find your soulmates. Upvote!
Mark
Can't agree more with number 3.
Domenic
n-years-of-experience-required does matter but it shouldn't go beyond 2
Yassir
@Jeremy - .NET only decouples the app from the OS if by "OS" you mean "Windows". Since C#/.NET really only work on Windows, it is the ultimate in platform coupling. Java only decouples the app from the OS if you _have_ all the OSes you want it to run on to make sure there aren't any subtle bugs in the complex library support code that rear their heads differently on different OSes.
Chris Lutz
Totally agree with point 1 !
Preets
I don't find 3 controversial, I find it to be factual.
amischiefr
I wish more people (employers) felt this way. I could have a much better job right now.
Cogwheel - Matthew Orlando
#3 is controversial? Really? I thought it was clear as day, and those that didn't think so were hopelessly stupid. You really should have posted 3 separate answers.
MAK
@notJim "Enterprise frameworks add a lot of structure that helps mediocre developers think about the problem in a more organized way". WOAAHH! I couldn't **disagree** more!! IMO this 'structure' reduces the thinking (the reason they're only mediocre) of such developers. The problem is: they use frameworks to add huge volumes of 'functionality' without understanding the impact of their decisions. This typically leads to inefficient systems with far more complexity than is required. See: http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/406775#406775
Craig Young
Gaah! Stop posting several answers in a single answer! How hard can it be to separate them out?
Timwi
Yes, yes, a thousand times yes on all three points!
Max Shawabkeh
#3: thank you! I remember being a degree-less 7-year professional programming "veteran" (who admittedly started coding at age 10), and I was floored that people made it through the degree without knowing what I considered to be the basics. I still want to go back to school, but I need to find a *good* compsci program in the KC area.
Jaime Bellmyer
@Jaime School is often more about the proverbial piece of paper than learning concepts; I know many graduate software engineers who have no concept of object orientation, let alone big O. I'd only go back to school for research.
Kirk Broadhurst
It would be better if these answers were split into three. I agree with #1 quite strongly, but disagree with #2 almost equally as strongly.
Erick Robertson
Modern MVC frameworks create more problems than they solve? Hmm. Please show me some decent webapp you've written in plain CGI. (And forgive me for not taking such radical statements too seriously from someone who's still in college)
Nikita Rybak
@Nikita ASP.NET, Struts, Silverlight, J2EE, Zend: all of them pretty worthless. I talked about "enterprise" frameworks, and this opinion holds true after having used all the aforementioned ones. My favorite framework as of now is Django, but it's far from the other monsters I have mentioned, and I'm quite partial to the new micro-frameworks trend. I may be in college, but I make a good living out of this and I have been coding for more than 10 years, which is more than what many other so-called professionals can say for themselves.
Daishiman
+175  A: 

Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)
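As one concrete illustration (a minimal C# sketch; the array size is arbitrary and exact ratios vary by machine), the CPU cache shows through even on the CLR - the two loops below do identical work, but the second typically runs several times slower because it strides across memory instead of walking it sequentially:

const int N = 2000;
var grid = new int[N, N];
long sum = 0;

// Row-major traversal: touches memory sequentially, cache-friendly.
for (int row = 0; row < N; row++)
    for (int col = 0; col < N; col++)
        sum += grid[row, col];

// Column-major traversal: each step jumps N ints ahead in memory.
for (int col = 0; col < N; col++)
    for (int row = 0; row < N; row++)
        sum += grid[row, col];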

Brian Rasmussen
agree, and add a real, solid low-level language; just to get some 'feeling' about that architecture. C is good for this
Javier
As soon as you say "every" that should be a hint that something is wrong with a statement.
PhoenixRedeemer
Change "Every developer should..." to something like "You can't call yourself a real developer if..." (with obvious follow-on changes) and you point is both better and more controversial.
duncan
I'd change this to say every developer should understand, at a basic level, how any platform they utilize works, whether it's the hardware or the software. I've seen too many people using tools like Ajax, ADO.NET, and ASP.NET without really understanding what's happening under the hood.
Jeremy
@Jeremy: +1 for understanding ASP.NET under the hood. You have to understand JavaScript and HTML when ASP.NET's magic voodoo breaks down and it doesn't work the way you thought.
Jared Updike
@PhoenixRedeemer: So "every developer should be competent" is wrong too?
jalf
"Every" time someone focuses on some detail about the word choice of a sentence rather than on it's implied idea, they should be punished.
Nick
This has to do with the law of leaky abstractions. See http://www.joelonsoftware.com/articles/LeakyAbstractions.html As a corollary, programmers should be curious and able to learn about the abstractions they rely on. Nobody will understand or remember all the abstractions, but I'd say it's a good sign if you can identify a time in your career or education where you learned how things worked under the hood, and understood a leaky abstraction. Some examples: integer division and float vs real, memory hierarchy, TCP/IP, SQL optimization, race conditions, pointers, references, etc.
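One of those leaks in miniature (C#; the numbers are arbitrary):

double ratio = 7 / 2;     // 3.0 - integer division happens first, then conversion
double ratio2 = 7 / 2.0;  // 3.5 - one double operand makes it floating-point division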
Kimball Robinson
A: 

Once I saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I stated that he couldn't assume that in the general case, but he just laughed.

Rauhotz
I'd be interested in hearing your reasoning here - as well as which CompareTo method you're talking about.
Jon Skeet
I'm talking about the C# IComparable.CompareTo method. Don't expect that two IComparable-implementing objects are equal if the CompareTo method returns zero. They just have the same order.
Rauhotz
Then your implementation of IComparable is broken. The docs state that a return value of zero means "This instance is equal to obj." I'm not saying that there aren't broken implementations out there, but your colleague can reasonably point to the docs...
Jon Skeet
I'd argue that if things don't have a natural equality/ordering relationship, it's better to have a separate IComparer implementation, which can express this explicitly. There are certainly tricky edge cases - is 1.000m equal to 1.0m for example?
Jon Skeet
That's a good case of a narrow-minded view. Check out the many 'compare' predicates in Scheme.
Javier
Jon, could you be so kind as to point me to the lines in the docs where it says "a.CompareTo(b) == 0 implies a.Equals(b) == true"?
Rauhotz
Sure. The docs for IComparable.CompareTo state that "a.CompareTo(b) == 0" implies "a is equal to b". The docs for object.Equals state that "a.Equals(b)" should return true if a is equal to b. It can be argued that the documentation is too narrow or incomplete (Java's docs are more careful on this front)
Jon Skeet
but the documentation really does seem fairly clearly limiting. It does say that the meaning of "equals" depends on the implementation, but it's at the very least confusing for "equals" to mean something different *within the same type* between two methods.
Jon Skeet
That's why I think it's clearer to implement non-natural orderings (i.e. where equality within ordering doesn't mean equality between objects) via IComparer instead of IComparable.
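To make that concrete, here's a minimal sketch (the Employee type is hypothetical) of the kind of ordering under discussion, where CompareTo returning zero doesn't imply Equals:

class Employee : IComparable<Employee>
{
    public string Name;
    public decimal Salary;

    // Orders employees by salary alone - a "non-natural" ordering.
    public int CompareTo(Employee other)
    {
        return Salary.CompareTo(other.Salary);
    }
}

var ann = new Employee { Name = "Ann", Salary = 50000m };
var bob = new Employee { Name = "Bob", Salary = 50000m };

Console.WriteLine(ann.CompareTo(bob) == 0); // True  - they sort together
Console.WriteLine(ann.Equals(bob));         // False - two distinct objects

An ordering like this arguably belongs in a separate IComparer<Employee> rather than in CompareTo itself.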
Jon Skeet
The JavaDocs are nice and clear here (compared with the MSDN ones for IComparable): http://java.sun.com/javase/6/docs/api/java/lang/Comparable.html It even says how to document cases where you violate consistency with equals.
Jon Skeet
Of course, given the number of comments discussing this line, I think we're justified in considering it at least a bit unclear.
David Thornley
On the other hand, 7 of the comments before this one were mine :)
Jon Skeet
+656  A: 

Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)

Dmitri Nesteruk
Funny thing is... it's usually the worst devs that think they are 10x or 100x better than the others
John Kraft
Not in my experience :)
Dmitri Nesteruk
I wasn't convinced about this being "controversial" until your point about the politics of recognizing it. Good point, there.
Adam Bellaire
Yeah, good luck explaining to your boss that, In Your Humble Opinion, Joe is a 10 times better programmer than Jack when your boss pays them equal wages for identical positions. Dangerous!
Dmitri Nesteruk
Another point to make is that prolific != skillful.
Marcin
Excellent points, everybody...
Mike Dunlavey
"one developer can be 10x or even 100x that of another" - priceless
01
Why does it matter whether this opinion is C#-related? Is this part of the thing I see occasionally where people seem to speak as if stackoverflow were a dedicated C# and .NET discussion platform?
chaos
Heh. C# is in my ignored tags ;o)
Svante
Well, the question was asked by Jon Skeet, thus my disclaimer. "one developer can be 10x or even 100x that of another" - that's what we call "taken out of context" :)
Dmitri Nesteruk
Unfortunately, I've read some of the code my tech lead has written in the past, and I have concluded that I've seen things no other developer should have to see.
moffdub
Although there do exist programmers that are significantly better at their job than most if not all of their colleagues, they are such a rare breed that you may never work alongside one during your entire career, which may span several decades.
Seventh Element
+1 - software development is a rare field where one person can literally be worth 1000 others and even more... though it NEVER reflects proportionally in financial appreciation!
YordanGeorgiev
+1 because I recognize myself in the "lead developers were 'beyond hope' and junior devs did all the actual work" part. (me being the lead developer) :-)
jao
+1 because this is so true and makes estimation very hard
Jimmy
PHB says the one who commits 1000 lines of code per day is 10x as good as the one who only does 100 lines. :)
James M.
I heard this saying once, and it has kept me wanting to improve ever since: "Do you have 9 years' development experience, or 1 year's experience 9 times over?"
cometbill
lead vs junior vs senior is less interesting to me in this as most titles/positions are gained politically anyway. However, I love the 10x and 100x comparison because it is very true. Developers are not interchangeable. The greatest metric I've seen so far is using success as a metric itself. Pontification about academics or architectural correctness is a lot less valuable than someone who ships quality code, on time, and in budget. I often value developers whose code is heavily referenced, reviewed, or blatantly copied/stolen by peers or other teams. Productivity is like a magnet.
Stuart Thompson
This isn't controversial, but it is 100% true!
Chris Pietschmann
hmm, should it be DeveloperA != DeveloperB or !DeveloperA.Equals(DeveloperB)??
Sunny
What does that say about the manager?
JerryOL
+516  A: 

If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning, then that's all you need. I don't believe it - every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.

glenatron
Would you go as far as to say if you only know one *kind* of language you're not a great programmer? For instance, knowing Java and C# isn't that much better than knowing just one or the other - but knowing Java and Haskell will give much more of an open mind, IMO.
Jon Skeet
@Jon Agreed. I learned Haskell at uni and now it has helped a lot with learning LINQ and lazy evaluation, etc. The concepts are what make a language worth learning.
Cameron MacFarland
I'm a little shy of the adjective "great" applied to programmers, but I know what you mean.
Mike Dunlavey
completely reasonable and uncontroversial opinion - you fail, sir
annakata
You'd be surprised how many people have really laid in to me for saying it- perhaps not so controversial to a Stackoverflow audience, but it certainly is in other company...
glenatron
I'm not sure I agree. Even though I kind of happen to know a few languages, if I'm interviewing a person who only knows C# but to a very good degree, they'll get hired. Would you pass by an expert just because they don't diversify? I wouldn't.
Dmitri Nesteruk
@nesteruk It depends on what you want the person for. You may not need a 'great programmer' like glenatron said, but a man to do one specific job, so the expert in this case would be even more useful than the 'great programmer'
Pablo Fernandez
My image of a good programmer is one who knows multiple languages, and who is also, by the way, "white" and male. I don't need to be told I can be really, really wrong.
Mike Dunlavey
Off topic, but I believe these arguments apply analogously to learning more than one natural language as well. I wish I knew a few more so I could think in new and creative ways.
Adrian Pronk
@Adrian Pronk - absolutely. Every time I find myself in a foreign-speaking area I wish I knew the language better.
glenatron
IMHO a great programmer can use any language immediately with a good reference.
Gary Willoughby
If you author books in only one language, you're not a great storyteller?
Brian
I think it's just nuthugging. I know more than one language, but I won't learn Ruby and such languages any time soon, because I truly believe it's pointless. You just can't admit that every language is almost the same. Whatever, feel like you're better.
01
I would say it depends on the programming language: VB programmers can never be considered `great programmers` without knowing other languages.
Brad Gilbert
I would tend to agree. I think a well-rounded developer knows, or has some working knowledge of, many languages and keeps up to date on them. I've found that some useful design patterns manifest in one language but can be applied in others. Knowing one language limits exposure to things like that.
Jeremy
If you really are happy just knowing one language, that's fine, but *please* do the world a favor and keep your mouth shut about how great your language is compared to others. You've got no substance to back it up.
dreftymac
@annakata - witness Mark's comment there - I told you it was controversial.
glenatron
@Gary - I'll have to disagree simply because there are _some_ classes of languages which I wouldn't expect anyone to learn from just a reference. Imperative, OO and functional - sure, but what about other paradigms? What about Prolog, Forth, FP, Icon, Befunge (Ok, esoteric, but still)..
Dan
I like my one language.
xoxo
Corollary: If you only know one kind of language, you...oh, Jon said it right up at the top there...
Rob
Learning a new language is easy for any programmer; learning the framework in depth takes much longer and is more beneficial.
Cookey
Not focusing too narrowly on a single programming language or platform will definitely widen your horizon and offer opportunities to better understand what you are doing. On the other hand, I do think that focus is important and proficiency comes at a cost.
Seventh Element
@Brian - the analogy would be that an author who knows multiple languages is a more creative writer, though not necessarily a better one
dragonjujo
@Dan, agreed here... I find I'm able to pick up OO style in languages fairly easily; the more functional a language, the harder a time I have with it conceptually. I love loosely typed languages (JS is my fav - now that's controversial).
Tracker1
Great answer glenatron. I strongly concur!
Yoely
Data from the Cocomo II estimation model shows that programmers working in a language they have used for three years or more are about 30 percent more productive than programmers with equivalent experience who are new to a language (Boehm et al. 2000). An earlier study at IBM found that programmers who had extensive experience with a programming language were more than three times as productive as those with minimal experience (Watson and Felix 1977). Having said that, I believe that a dev should know several languages also, but being fluent in one counts for something.
Nemi
In fact, I think everyone should learn an imperative OO language (Java, C++, Python, whatever), a functional language (Haskell, Erlang, OCaml, whatever), a concatenative language (Factor, Forth, Cat, Joy, etc.), a logic programming language (Prolog, Mercury, etc.) and a dataflow language (LabVIEW, Esterel, Lustre, Verilog, Pure Data, Max/MSP, etc.). This combination will show you that a) there are many radically different paradigms out there, b) some languages really are different and c) you cannot learn them all from a reference.
Dan
I would argue that a person is not a master of their field until they are sufficiently well versed in their options such that they both have the knowledge to pick the right tool for the job, and understanding to be flexible enough to do so. For a programmer on a single platform, this might mean a deep knowledge of the language's tools and libraries, but for someone designing a solution, this also means a deep knowledge of programming language paradigms.
T.R.
I think that language is secondary - either you have the aptitude to be a developer or you do not. Those that do have the skill to transition between languages, and a lot of people who pigeonhole themselves (by choice) are on the shorter side of the aptitude equation. A good programmer will get the job done; a great programmer will adapt and apply language/framework-specific paradigms to reach a solution and look to how the equivalent can be executed on the next toolset.
joseph.ferris
If I'm an expert at language X, then I can be a great X programmer without knowing any other languages. And I'd argue a great X programmer is a subset of 'great programmers'.
Kirk Broadhurst
@Kirk - this is precisely the opinion with which I disagree absolutely. I see "great Java programmers", "great Perl programmers", "great C programmers" and so on as the superset: a great programmer will certainly belong to one of those groups, but they will also be able to use a few other languages too. I honestly don't see how you could achieve that level of greatness in one language without having the ability to pick up others easily.
glenatron
It's the generalist vs specialist argument. If you don't particularly enjoy programming but want to do it as a job, you could survive by just knowing one language very well. If you want any kind of growth in your career, become a generalist.
baultista
@Dan, that's why they invented Oz, a multi-paradigm language, for teaching at EU. It only lacks the concatenative facet, but it layers paradigms one over the other in a nice progressive way, so you learn each one firmly before upgrading to the next. Anyway, learning multiple languages still adds value even if you learned Oz at the start, as Oz isn't a 'production' language, at least not one that you find deployed at large.
Monoman
@Monoman, indeed, Oz is an interesting language. I learned a little of it from "Concepts, Techniques and Models of Computer Programming" and it certainly is a flexible language, paradigm-wise. Still, I think ultimately one would get more out of learning a handful more paradigm-specific languages, though if you learned Oz well you're certainly ahead of a lot of programmers.
Dan
+26  A: 

I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. E.g., consider the importance placed on code reviews when you haven't even reviewed your design! It's madness.

Daniel Paull
I don't know, I've seen an upfront design be a very good guide to development. I've never seen it work out such that the upfront design is followed exactly. It seems in my experience that when the rubber hits the road, designs have to be reworked.
Doug T.
Fine with that, so you iterate... amend the design now that you have discovered something new and get on with the job. Your code is, once again, an expression of your design. It's developers who think that design follows code that irk me.
Daniel Paull
I wish I was allowed to design before I code. In this job it's "I have an idea" from someone, followed by a directive to get something in a demo ASAP.
David
Much of my design is noted in header files and/or a few diagrams on a whiteboard. I'm not saying anything about how formal your design should be, or how to do it, but for the love of God, get your thoughts sorted before coding!
Daniel Paull
I've been irritated by the opposite, too much value placed in the design. The mantra "reuse the design not the code" forgets the time spent on implementing, testing, debugging and generally hardening the codebase. You cannot just throw that amount of work out.
JeffV
@Daniel: I think I agree with you. At the same time, it's important to be ready and able to revise the design and the code late and often. That takes skill that, I'm afraid, is not taught.
Mike Dunlavey
@Mike - I'm not saying that we all return to a waterfall model. Quite the opposite - as a developer you should expect things to change, so design your system to cater for change (eg, minimise coupling) and expect unexpected iterations that affect your design. You are right - this is not taught.
Daniel Paull
So if you have to iterate anyway, the choice to design first or code first is essentially the same thing.
Kendall Helmstetter Gelner
@Kendall: you are kidding, right? Perhaps you are thinking of a proof by induction for your statement, but I'd hope that the number of iterations to write a bit of code that is closed against change is small. In that case, I believe that design first is far more efficient.
Daniel Paull
I believe in iterative design. If you invest too much time upfront in design, you won't have the time to do the necessary rewrite (which always happens).
quant_dev
+512  A: 

Performance does matter.

Daniel Paull
Overall performance, or performance of every single block of code? More reasoning very welcome :)
Jon Skeet
I think this is a counter to the widely held opinion that performance doesn't matter since "you can always buy more CPU, hard disk, RAM etc".
Ed Guiness
And I thought the simplicity of the comment was its main appeal. OK, for example, many developers do not think about the time complexity of the algorithms that they use. Lesser developers reading this comment just ran off to Wikipedia to find out what time complexity is.
Daniel Paull
There are lots of silly ideas about performance inhabiting programmers' heads. The only solution I have for this problem is to recommend that people do performance tuning of existing code.
Mike Dunlavey
@edg - Exactly so, and IMHO it just aint the case, as per my post here http://stackoverflow.com/questions/377420/throwing-hardware-at-software-problems-which-way-do-you-lean#377429
Shane MacLaughlin
That's not controversial... Improving performance early is what's controversial (and stupid, by the way)
Pablo Fernandez
I have to groan when I think of all the code I've optimized that was piggy because it contained massive data structure and event-driven architecture put in for the purpose of "performance".
Mike Dunlavey
especially developer performance!
Steven A. Lowe
Here's my take: Performance sometimes matters. There are applications that are uncomfortably slow no matter how much hardware you throw at them, and applications that are very fast on 486s, and applications in between.
David Thornley
No, it doesn't. If you join strings with + or use StringBuilder, the performance will be the same. DB access and clustering are performance; creating fewer objects isn't (except in C++).
01
Wirth's law: http://en.wikipedia.org/wiki/Wirth%27s_law It's true.
Imbue
I completely disagree with that. More applications suffer from robustness issues than performance issues.
Uri
@Uri - but, improving quality is not controversial!
Daniel Paull
Importance *can* matter.
J.T. Hurley
"Premature optimization is evil" regards bit fiddling, not picking an efficient algorithm. It is not an excuse to produce bad code.
Svante
I work in embedded. So yes I agree, every little helps.
Quibblesome
If there weren't so many programmers who think that "you can always buy more RAM", we could nowadays run a complete office suite, a graphical web browser with flash, java etc. plugins, several messaging and online game clients concurrently on a 1 GHz, 256 MB RAM machine without any swap.
Svante
Clearly this is the first "controversial" opinion in this question that was really controversial
1800 INFORMATION
To state the obvious, performance doesn't always matter; it only matters when it matters. The trick is being able to predict when that is before you start coding... Easier is figuring it out after you're done coding...
Charles Bretana
After you've done coding can be way too late on a complex system.
Pete Kirkham
My own corollary is: Performance matters, but you suck at it. Use a profiler.
qualidafial
Performance DOES matter. Learn your containers and use the best one for the job. Too many programmers use the early optimization quote as an excuse to be lazy.
WolfmanDragon
That's what she said!
Kip
Amen to that. In a project made by two different parties (we were using a library they produced), their chief engineer said we had 4GB of memory, so why bother about memory leaks [in c++]? *sigh*
Diego Sevilla
@unanswered: it varies. If you are joining a few strings, just go with +=; however, if it's in a loop or a metric ton of joins, StringBuilder is faster because it preallocates the memory ahead of time. Once you max that out, however...
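A rough micro-benchmark sketch of the difference (iteration count arbitrary; absolute timings vary by machine):

var sw = System.Diagnostics.Stopwatch.StartNew();
string s = "";
for (int i = 0; i < 50000; i++)
    s += "x";                    // copies the whole string on every pass: O(n^2) overall
Console.WriteLine("+=:            " + sw.Elapsed);

sw = System.Diagnostics.Stopwatch.StartNew();
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 50000; i++)
    sb.Append("x");              // appends into a growing buffer: amortized O(1) per call
string t = sb.ToString();
Console.WriteLine("StringBuilder: " + sw.Elapsed);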
Nazadus
Scalability != Performance... It's a balancing act. Do you need your application to scale to N servers? Then the performance on one server may suffer, because of decisions made for scalability... Some problems you can throw hardware at.
Tracker1
Diego.. LOL, I worry about memory leaks with my ajaxy stuff (removing event handlers on DOM elements being removed from the tree).
Tracker1
Don't forget about Amdahl's Law!
rlbond
Okay, this should only be taken to a point... Computational complexity should always be considered. But it drives me nuts when my coworkers daisy-chain ternary operators, touting that it's "slightly more efficient" than writing a few if statements. Sometimes a .0001% efficiency cost is worth paying for code readability.
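For instance (a contrived sketch):

// The daisy chain: compact, but you have to mentally unwind it.
static string Grade(int score)
{
    return score >= 90 ? "A" : score >= 80 ? "B" : score >= 70 ? "C" : "F";
}

// The same logic as plain ifs: a few more lines, readable at a glance
// (the compiler emits essentially the same branches either way).
static string GradeReadable(int score)
{
    if (score >= 90) return "A";
    if (score >= 80) return "B";
    if (score >= 70) return "C";
    return "F";
}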
James Jones
It's too bad I can only upvote this once.
Crashworks
I think performance does matter, but code correctness and reliability matter more. That means you should develop your code to be correct, reliable and secure first and worry about performance later, after prototyping. The hotspots should then be rewritten as needed. Pure cycle-shaving optimisation is also never as effective as a good algorithm. Example: I once had a program that took 2.5 hours to run. I did some hardcore optimisation and it ran in 40 minutes!! I rewrote it to use a better algorithm; it ran in 20 seconds! I tried optimising that - but it did not go any faster. Go figure :-P
Dan
@Dan: "I rewrote it to use a better algorithm; it ran in 20 seconds!" I bet you wish you'd done it that way in the first place... It sounds like your initial slow but correct/robust solution was a waste of time in more ways than one!
Daniel Paull
@Daniel: Both solutions were correct and robust. The second solution used better data structures (one big win was replacing lists with tables, so I went from O(n) to O(1) in that part of the code). This was, unfortunately, only possible because I could profile the first version and see which parts were inefficient - so it wasn't a TOTAL waste of time. Would have been hard to do it that way from the start. But, yes, I do wish I wrote it that way in the first place. Would have saved me about a day...
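The shape of that kind of rewrite, sketched in C# (the names and the 'ids' input sequence are hypothetical):

// Before: Contains scans the whole list - O(n) per lookup, O(n^2) across the run.
var seen = new List<string>();
foreach (string id in ids)
    if (!seen.Contains(id))
        seen.Add(id);

// After: hash lookup - O(1) expected per item, O(n) across the run.
var seenSet = new HashSet<string>();
foreach (string id in ids)
    seenSet.Add(id);    // Add is simply a no-op for duplicates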
Dan
I'd say if something is half as fast, it doesn't matter (in most cases) because you can always buy faster hardware. But a bad algorithm or bad data structure can make things thousands of times slower. If you really think performance doesn't matter you've obviously only ever done noddy programming.
Mark Baker
Here's the thing: performance comes with a price (paid in programmer-hours), and consumers aren't willing to pay for it.
Frank Farmer
@Frank - are you sure about that? I have many anecdotes relating to a poor performing app slowing down developers and testers, leading to a lot of wasted time. A relatively small amount of time spent improving performance could save a team a heap of time and money while increasing overall quality and reducing frustration. That's a win, win, win situation.
Daniel Paull
CPU, memory, disk IO or space? What is the one resource that has not been doubling every 18 months? The programmer. That is why when I think of performance I first think: how can I make my developers (and myself) more efficient? With a billion CPU cycles going to waste every second, why waste time worrying about every CPU cycle? And if you want that level of control, then the only language that will give you that is assembler. Me, I'm glad that my assembler days are mostly behind me, and more often than not I'm programming in languages written for this century.
Swanny
@Swanny: Your comments about using assembler are not necessarily true. When I talk about performance and scalability I am referring to choosing appropriate algorithms and data structures such that you may turn a naive O(n^2) algorithm into something better, like O(n). The other area of increasing performance is changing the way your program works, focusing on interactivity. For example, making long-running operations asynchronous. I don't need to resort to using assembly language very often, nor do I feel it necessary. I also assume that you don't ever work with embedded systems...
Daniel Paull
In my primary field (server-side applications), performance is for padding profit margins, scalability is for making sure your business can actually keep running. When it comes right down to it, you can add server capacity in an emergency by going to Fry's and exchanging currency for whatever hardware they have lying around. You cannot reliably enhance performance in time to save you from the fact that your application is about to crash because you have too many users, driving a bunch of them off.
Nicholas Knight
@Nicholas: what you say is true only if the server load scales linearly with the number of users. If you had a nasty O(n^2) or worse algorithm that was causing your performance problem, then I doubt that just buying more hardware is an economical solution at all. So, I claim that scalability and performance are intimately entwined - so much so that I see them as pretty much the same thing. Wouldn't you agree?
Daniel Paull
@Daniel: No. Creating a scalable application _includes_ selection of _scalable algorithms_, which is still different from the selection of _fast algorithms_. O(anything) tells you how an algorithm scales, but by itself it tells you nothing about performance, and it's entirely possible to have an O(n^2) algorithm that will be faster for many practical datasets.
Nicholas Knight
@Nicholas: "and it's entirely possible to have an O(n^2) algorithm that will be faster for many practical datasets" - in which case performance doesn't matter? I find your distinction between scalability and performance confusing. To me, scalability is one aspect of performance. Hence, a "performance improvement" may be gained by improving scalability.
Daniel Paull
@Daniel: A simplistic example, but consider a single-threaded webserver. With select() or similar, its performance may vastly outstrip that of a naïve threading or forking server (like traditional Apache), up until you start overloading a single CPU. If you make it threaded, you can add capacity simply by adding CPUs (and other hardware as necessary). It is now scalable, but you have done nothing to speed up an individual request (actually, it's probably slightly slower with the additional overhead from threading). That is my distinction between scalability and performance.
Nicholas Knight
@Nicholas: Your example does not improve scalability at all. The single threaded server scales linearly with load, as too does the threaded version. However, you can get away with less hardware (assuming you are using multi-CPU machines), which will pad your profit margins... no wait, padding profit margins is what performance does. Oh dear, I am confused by what you mean.
Daniel Paull
I am so glad you said this. Try writing a genetic algorithm in Ruby. It's not fun. Well, it is fun, but trying to get it to finish in under six hours is not fun.
Michael Dickens
I love that people always say "Performance matters, but reliability matters more" as if you have to choose between the two. You might as well say "Performance matters, but comfortable office lighting matters more." You can have it all! Good code is reliable, performs well, and is written under good office lighting. Don't settle for less!
Dan Olson
@Dan: Sure, you *can* have both performance and reliability, but that doesn't happen too frequently in practise - and when it does, you've sacrificed something else (probably development time). Performance doesn't matter, unless it does. You need to make a conscious decision to care about a particular performance metric in a particular case, set a benchmark, and profile. The anti-performance mindset doesn't say "performance *never* matters", rather "stop caring about performance in those frequent cases where it doesn't matter".
Iain Galloway
Wait, this is a controversy? There are some projects where performance matters and some where it doesn't. That doesn't mean there's a dichotomy here.
Rei Miyasaka
@Rei: I can't think of a single software project where performance is not important. It just-so-happens that in many situations, a naive implementation exceeds your performance requirements.
Daniel Paull
@Daniel: Seriously? Well then I guess there's some controversy here after all. I can think of plenty of examples where performance is hardly a concern: any code that's O(1) and waits on other invariable bottlenecks like user response is and always will be fast enough, in my mind.
Rei Miyasaka
@Rei: Yes, Seriously! It just-so-happens that for many applications using modern hardware, a naive implementation may satisfy the performance requirements of most users - that is not to say that performance does not matter; it just means that it's already taken care of. BTW - to say, "and will always be fast enough" suggests that you have an idea of the performance requirements and know that you'll always exceed them - ergo, performance mattered. If you disagree, then I think you've just become complacent.
Daniel Paull
Wow, getting personal. There's no way that my button click handler will ever be too slow, because there's no way that it'll be used outside of WPF -- because it can't be used outside of WPF. It might be used on Silverlight on a phone, sure, but that phone will be fast enough to run Silverlight, and thus, fast enough to deal with my less-than-perfectly-efficient click handler. The view logic in my code is already hard-coupled to WPF/Silverlight. Just because I thought fleetingly about performance *doesn't* mean performance mattered for the project. Complacency or pragmatism?
Rei Miyasaka
@Rei: I hadn't meant for you to take my comments personally. I don't think you understand my argument. You keep saying "fast enough" - this means that you have an idea of acceptable performance. Ergo, performance matters. Just because you didn't have to do anything "special" doesn't imply that performance isn't important.
Daniel Paull
@Daniel So what you're saying is that even the very act of *considering* performance for a split second is to prove that performance matters. That seems like a rather contrived understanding of phrase "performance matters". Determining whether performance is important or not is obviously important -- but once it's been determined for a case that it's not, then it simply is not. It's like saying that the vase in the corner of the room requires constant attention because someone might someday throw it at you. Yeah, sure maybe, but it just isn't prudent.
Rei Miyasaka
@Rei - isn't the problem when developers don't consider performance - not even for a split second? To be able to state "that it will always be fast enough", which is what I was responding to, involves much consideration - far more than a split second. I find your vase analogy very weak.
Daniel Paull
@Daniel Death by vase is never a concern until it's very imminent; that's the analogy. I don't know what you experienced that disillusioned you on the ability for a developer to passively identify potential performance issues, but I know potentially slow code when I see it; I don't need a mental checklist so to speak. It comes as part of thinking about the imperative execution of your code. There are signs -- large IO, platform API in tight loops, >O(n) function calls, loops on indefinite collections, timers, threading etc. They're signs that you *see*, not signs that you stand there and read.
Rei Miyasaka
@Rei: Ok, let's stick with the vase - when choosing a third-party vase (as opposed to rolling your own), do you look for one that is unlikely to inflict serious damage when flung at you? No. Why not? Because it doesn't matter. The corollary for third party software products and performance is not true. "large IO, platform API in tight loops, >O(n) function calls, loops on indefinite collections, timers, threading etc" - that's a lot of things to keep in mind when writing and/or maintaining your simple button click handler.
Daniel Paull
@Daniel Again, I don't keep any of that "in mind" -- I notice it when I see it. I'm not thinking about performance or a flying vase until I see a certain code pattern or an API call or a psycho ex with the vase in hand. Check this out: http://stanford.library.usyd.edu.au/archives/fall2001/entries/mind-identity/ `Place has argued that the function of the ‘automatic pilot’, to which he refers as ‘the zombie within’, is to alert consciousness to inputs which it identifies as problematic, while it ignores non-problematic inputs or re-routes them to output without the need for conscious awareness.`
Rei Miyasaka
@Rei: You seem to take the phrase "does matter" to mean "I have to actively do things to take care of it." I read your comments and I have no idea if you are supporting my argument that "performance does matter" or not. For example, your auto pilot interrupts you when there is a potential performance problem - ergo, performance matters to you and your auto pilot. Under normal flying conditions, your auto pilot writes code in the usual manner that adheres to your regular flight pattern of readbility, reliability and ensuring appropriate performance levels. Again, performance matters.
Daniel Paull
@Rei: You said, "I'm not thinking about performance ... until I see a certain code pattern or an API call." In my opinion that's just too late. You should have considered the performance implications BEFORE you wrote the code. It must be funny to watch you cut code - zombie, zombie, zombie, CRAP! refactor, refactor, refactor. Zombie, zombie, CRAP! refactor, zombie, zombie ...
Daniel Paull
@Daniel Actually, "autopilot" in this context refers to driving. When you're on the highway, you sort of zone out and stop paying attention to anything until there's a deer in front of you -- and usually, unless you're otherwise distracted, your response will be no different. No, I don't write code and realize after the fact that it's too slow -- I think about the API that I need to call, the algorithms that I need to write, and it's then that it'll click -- "hey, could get pretty slow". That doesn't necessarily mean I'm constantly thinking about performance.
Rei Miyasaka
To restate my, `Determining whether performance is important or not is obviously important -- but once it's been determined for a case that it's not, then it simply is not.` And, I don't need to be thinking about it constantly to notice it. So my position is that it matters when it matters, but it shouldn't be a constant obsession.
Rei Miyasaka
@Rei: '"autopilot" in this context refers to driving.' Umm... ok. I had expected you to understand that "usual flying pattern" was merely an extension of your chosen "autopilot" metaphor. The autopilot metaphore is not just restricted to driving - it can be anything that you do by rote (washing the car, eating soup, walking to the corner shop, etc). If I ever feel like I am "programming by rote", then alarm bells go off - why can't this be automated or commonality factored out? To continue the analogy, my autopilot only makes short syntactic flights - long haul flights are always aborted.
Daniel Paull
@Daniel: I think we're talking about two different things here now. I'm talking about not actively paying attention to performance; you're talking about not paying attention to design. Obviously I pay close attention to what I'm designing, but there are patterns in design and implementation that instantly set off "performance alarm bells" in my mind the same way rote code sets off "redundancy alarm bells" in your mind. But you don't need to be constantly going "I will not repeat myself" when you're coding, do you? You just think of a design and go "huh, this could be refactored".
Rei Miyasaka
And yes, if you mean "performance does matter" in the sense that you need to always be able to subconsciously spot potentially slow code, then I totally agree with you. But like I said a while back, I'm not sure that that's a really useful interpretation of the word "matters"; I think the term is more pertinent to the discussion of whether or not code needs to *always* be as fast as it can be regardless of known invariable implementation constraints. I hope you're not annoyed by this discussion by the way, because I honestly think it's damn interesting.
Rei Miyasaka
@Rei: no, I'm certainly not annoyed - quite the opposite. I mean that "performance matters" in the way that "readability matters". Readability does not stop "mattering" once you consistently write readable code. What inspired me to post this one-line controversial opinion is that I have observed far too many situations where developers have not known that their designs have serious performance problems until they hit some wall at runtime and then plead ignorance. By "performance matters", I mean know your limitations, predict bottlenecks and build appropriate systems.
Daniel Paull
There have been many times when I have taken the "low road" in a design, knowing that my design and implementation is not as fast or scalable as it could be; however, the extra cost of the higher-performance approach could not be warranted or would not be useful. The important thing is that I made an informed design decision. Heck, I've even been known to write code that blocks on IO from time to time - but I know the dangers and accept the consequences! Blocking on IO without knowing the dangers or accepting the consequences is what gives me the absolute shits and is the crux of my argument.
Daniel Paull
@Daniel Totally agree there. I think I was just expecting you to have meant "performance matters" in a more controversial, possibly disagreeable way!
Rei Miyasaka
@Rei: Perhaps you need to spend some time around web developers. Now that's a controversial statement!
Daniel Paull
@Daniel, Rei: I didn't read the last few rounds of this discussion, so you may have got around to this point, but the way I see it is this: If it is feasible, for any given requirement, that code _could_ have been written (even if only by a deranged lunatic writing an O(2^(2^n)) "speed-up loop", etc.) that fails to meet performance requirements, then the mere possibility of existence of such code is proof that performance "matters", because there _is_ a requirement - even if it's insanely easy to meet. I think this was the point of Daniel's original statement.
mokus
+37  A: 

You don't have to program everything

I'm getting tired of how everything, but then everything, needs to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and less maintenance.

Mafti
+1 sorta. I use my tablet like pen and paper when I can, because sometimes it's just easier to write than to use a piece of software.
percent20
Do you mean "everything" and not "anything"?
Adam Bellaire
Well, he said "you don't have to program" and I completely agree - nobody has forced me to program, I just happen to like it. Sorry, but no controversy here.
Rene Saarsoo
No, no, I have to program lots of things.
postfuturist
You don't have to program everything
Anil
+1 for low-tech. Sometimes an Excel spreadsheet will do the trick just fine instead of coding an expensive CRUD.
Mauricio Scheffer
+193  A: 

Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.

Jon B
It's hard for programmers to fathom, but non-programmers have a very hard time reading ANY programming language, and something visual or free-form text is usually much easier for them to handle. And you WILL need to talk about the design with non-programmer domain experts or managers.
Michael Borgwardt
I generally agree that it is possible to write readable code for even the most complex algorithm, by using techniques such as splitting code into smaller, well-named functions, careful variable naming, and commenting, to name a few.
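For instance, a toy sketch (the Order type and Ship method are hypothetical):

// Before: the reader decodes the rule at every call site.
if (order.Items.Count > 0 && order.Customer != null && !order.Shipped)
    Ship(order);

// After: the rule has a name, and the call site reads like the design.
if (IsReadyToShip(order))
    Ship(order);

static bool IsReadyToShip(Order order)
{
    return order.Items.Count > 0 && order.Customer != null && !order.Shipped;
}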
Jeremy
How about using a program to read the source code and generate diagrams from it, to help non-programmers understand the program? However, why should non-programmers care about the source code in the first place? That should be the domain of the programmer, not the manager.
Jeff Hubbard
I mostly agree. I often write prototype code as a means of designing. I then throw it out and write it again (with flaws fixed) as a means of coding.
Dan
Developing software is like writing laws: you start at some level and keep filling in the detail or providing generalisations until it is sufficient that it can be followed. Mostly the aim is for it to be followed automatically, but don't forget being able to explain it to the fleshies.
Greg Domjan
I disagree: it is much easier to change a line in a Word doc or a Visio diagram describing how a function is to act than it is to change the code.
David Basarab
There's a difference between "hey, check out this UML, see any issues with the architecture?" and "hey, check out my code, see any issues with the architecture?" Humans aren't particularly good at parsing code; they're good at parsing images, though.
LKM
UML is always insufficient to describe in detail the problem at hand - if it wasn't, we wouldn't have programming languages. It is at best a formalized sort of handwaving.
Kris Nuttycombe
UML and code are meant for different purposes, and it is not really reasonable to use one in lieu of the other.
Kwang Mark Eleven
UML is overrated, but it's not the only aspect of design. Code != Design.
I. J. Kennedy
+1 for Controversy, but you are 110% wrong. Cowboy coding ftl.
Kyle Rozendo
However, when discussing your code in a meeting with functional testers and business people, having one diagram that shows the problem makes things a lot clearer a lot faster than several pages of code. That said, I am no fan of UML.
Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious. ― Frederick Brooks, in The Mythical Man‐Month, p. 102.
Teddy
If code were equal to design, then there would be no coders; all you would need is a designer. Man, this is sooo untrue...
Random
+57  A: 

Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. Writing a comprehensive unit test requires a lot of time - often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.
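On that brittleness point, a minimal NUnit-style sketch (the OrderCalculator type is hypothetical):

[Test]
public void TotalIncludesTax()
{
    OrderCalculator calc = new OrderCalculator(0.10m); // 10% tax rate
    // Give the constructor an extra dependency later, and this line -
    // and every test like it - stops compiling until each one is fixed.
    Assert.AreEqual(110m, calc.Total(100m));
}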

Cameron MacFarland
Yes. And code can only be tested if it has room to fail. Simple structures without inconsistent states have nothing to unit test.
Mike Dunlavey
Yeah, unit tests up front don't really make sense. If I wrote it down, I thought about the possibility. If I thought about the possibility, unless I'm a complete moron it'll at least work the first time around where the test would apply. Testing needs to catch what I DIDN'T think about!
PhoenixRedeemer
Phoenix - you have a point about only catching what you didn't think about but I disagree with your overall point. The value of the tests is that they form a spec. Later, when I make a "small change" - the tests tell me I'm still Ok.
Mark Brittingham
I worked at a company that wanted 95% test coverage, even for classes which contained nothing but fields to assign and no business logic whatsoever. The code produced at that company was horrible. My current company does not write any unit tests, relying instead on intense QA, and the code is top-notch.
Juliet
I write unit tests when I think I need them, but more importantly I write random test drivers, because my code might work fine in 100% of predictable cases. It's the unpredictable cases I'm worried about.
Mike Dunlavey
In my current project, I've introduced up-front unit tests, and code quality has improved drastically. People had to be convinced at first, but soon noticed the positive effects themselves. So my experience says you're wrong. And PhoenixRedeemer, you ARE a complete moron... just like everyone else.
Michael Borgwardt
@Brazzy: Why weren't your devs writing better code to start with? Notice my opinion says you don't "need" to write tests up front. I'm not saying you shouldn't, just that you should think about why you're writing that way.
Cameron MacFarland
@brazzy: Hey, complete morons rule! :) I've seen code that is improved by unit tests, because it needed them. I've seen code that didn't need many unit tests, because it had few invalid states. My code tends to need randomly generated tests, due to the problem space.
Mike Dunlavey
Unit tests are also about managing change. It's not the code that you are writing right now that needs the tests, but the code after the next iteration of change that will need it. How can you refactor code if you have no way to prove that what it did before the change is still what it does after?
Greg Domjan
@Greg: It's true that you can't refactor if you can't prove you didn't break stuff, and I do write tests designed to show changes after a refactor. My opinion of tests is mainly confined to their use up front. Tests are very useful when refactoring.
Cameron MacFarland
Everyone writes the unit test that checks open() fails if the file doesn't exist. No one writes the unit test for what happens if the username is 100 characters on a tablet PC with a right-to-left language and a Turkish keyboard.
Martin Beckett
I think this misses the point of test driven development, which hurts the argument. It isn't about testing edge cases, it is about driving design.
Yishai
You don't need to catch every edge case. If you are testing the best case and a few common errors, then when an edge case pops up you can write a test for it, fix it, AND ensure that you don't introduce new bugs. Apart from that, writing tests first forces you to think about what you are trying to achieve, and how. It helps you write small maintainable methods. I don't see how any programmer with a desire to write good software could be against this.
railsninja
Although I agree that "unit tests only catch the issues I've thought about", there are many times where I'm *positive* the code I just wrote satisfies a particular condition, yet the test reveals something I totally overlooked. Furthermore, the act of writing tests first forces you to think about all the edge cases in a manner that you might not have to as great a degree.
Ether
For me, an eye-opener about testing was this: you need to try out your code anyway - so why not do it in form of a test? Extensive testing is controversial, of course, but a little can get you a long way.
hstoerr
+229  A: 

It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a console or web app, write some inline SQL (feels good), and blast out the requirement.

jfar
Some languages *cough*Perl*cough* are better for garbage code than real development. A good developer will know more than one language for either role.
David Thornley
Absolutely. Don't make a mountain out of a molehill. Little throw-away apps don't have to be pretty - they get a job done.
Mike Dunlavey
I had to recently recover and reprocess payroll data with a deadline of a few hours to identify how to get the data and reprocess it and then be very sure of its correctness by eyeballing it. When the pressure is on like that you have to just hack it up until it works.
Tony Peterson
I would add that if you're adding "garbage code" to a non-garbage app, do it in a way that won't pollute the rest of the app. Encapsulation is especially important for hackish code.
JW
In my environment I've found that almost every app grows into a large project, so what seems like a good idea (writing garbage code) ends up being a very bad idea later. Doing a job well or not at all inevitably pays off in the end for me.
Jeremy
2 problems with this: first, you never know how permanent your little garbage app may become; I found one not long ago I wrote eight years ago, still in use, still crap. Second, writing crap is habit-forming; like you say, it feels good, which makes you want to do more of it. Just say no.
JasonFruit
It's not ok to write garbage code if *MY* name is on it, that's for sure...
LarryF
It's fine sometimes - if it's only going to be run once for a particular task, then thrown away
MarkJ
Sometimes you really do know when something's garbage code. Those one-time exports or imports are perfect examples.
Brian Bolton
It's fine until it moves into production, which is about 90% of the time...
Daz
I usually work from home and when my kids were a little younger they would look at the screen and say "Dad's doing his garbage writing best keep quiet" The name stuck so when coding I'm still referred to as doing "Garbage writing". Always made me laugh!
Despatcher
I agree with this and we use a concept of TECHDEBT to track it. If you write garbage code to do something now, you have to mark it as a Debt to your overall system with a promise to go back and fix it right later. Overall you allocate X amount of debt your system can carry and you can never go over that limit
Gord
I don't think this is ever true. If you write junk code, there's no reason for writing it in the first place, other than maybe a client with a deadline that needs to be met.
leeand00
It is ok, but only for so long as it does not go in the source code repository... THEN you can be publicly flogged AND told to fix it..
Thorbjørn Ravn Andersen
NO. Throwaway code, yes. Garbage code, no. You don't need to generalize your code to cover every possible situation if it does what it needs to do. But you do need to make sure your code can be read and followed, because it's funny how often throwaway code becomes useful later.
Kyralessa
extreme YAGNI! yummy :)
Mauricio Scheffer
if (idiocyOfRequirements > willToWork) BringOutTheDuctTape();
Repo Man
Just don't do it out of laziness. Refactoring and evolving your first fumbling attempts at solving a problem are what iterative software development is all about.
burkestar
+5  A: 

Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.

Mike Dunlavey
By manually halting you're acting as a simple sampling profiler, so there's certainly some logic behind it, but I tend to find that instrumenting profilers give better results on the whole (albeit with more performance impact on the running application).
Greg Beech
Yes it is a sampling method. The difference is that you're trading precision of timing for precision of insight. Concern about slowing down the app is confusing means with ends. You're trying to find cycles spent for poor reasons. This does not require running fast.
Mike Dunlavey
I would humbly assert, from logic as well as experience, low-frequency sampling of the program state beats any profiler for the purpose of finding things that can be optimized. However, for asynchronous message-driven software, other methods are needed.
Mike Dunlavey
What I do think profilers are very good for is monitoring program health, to see if performance problems are creeping in as development proceeds.
Mike Dunlavey
The "best" way to analyze requirements varies both on who is giving them, and who is receiving them. Therefore discussion around the "best" way to do that is not very quantifiable.
Kendall Helmstetter Gelner
@Kendall: I've never seen "any" work in how to analyze requirements, and propose and evaluate alternative solutions, let alone "best". If we were doctors, we would know all about treatments but diagnosis would be "obvious".
Mike Dunlavey
+150  A: 

There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Greg Beech
Hurray for common sense! Having started life as an engineer, I'm often baffled by the "religious" tone of this field.
Mike Dunlavey
There is no silver bullet! (quoting F. Brooks)
epatel
@epatel: I used to think there was no silver bullet, until I stumbled on a couple of them. The problem is, as Greg says, tools should be chosen to match the project, not treated as cure-alls. I'm tired of all the "religion" in this field.
Mike Dunlavey
The only silver bullet in software development is "being smart about it". (imo)
Pop Catalin
Surely silver bullets would in fact not work very well?
Liam
Five worlds man. 1. Shrinkwrap 2. Internal 3. Embedded 4. Games 5. Throwaway www.joelonsoftware.com/articles/FiveWorlds.html
MarkJ
When was Brooks' essay last considered controversial?
Hanno Fietz
Programmer interview question: What are the pros and cons of methodology XYZ? What considerations should you account for when deciding whether to use it? QA people and developers should both be able to judge when to use what methodology, and recognize when methodologies are less (or not) useful.
Kimball Robinson
+37  A: 

Before January 1st 1970, true and false were the other way around...

annakata
http://en.wikipedia.org/wiki/Humour#Understanding_humour
annakata
Oh man, this is the funniest thing I've seen on SO in a long time.
MusiGenesis
LOL.. am tweeting this.
Amarghosh
I understand how *nix systems record time, and how true and false are represented. But, could someone explain this joke to me, I don't get it? Thanks.
Matt Blaine
I don't get it ._.
M28
it's like particles and anti-particles: for an arbitrary system (like a computer) it doesn't actually matter what label you ascribe to each value, the two things are defined by each other. Kaons spoil the metaphor a bit, but it's just a joke so you'll have to learn to let it go.
annakata
+28  A: 

Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.

Shane MacLaughlin
There are two types of optimisation: by architecture and by code. Architectural optimisation is clearly needed before you write code. However, the term 'premature optimization' really applies to efforts to write code optimally instead of simply. This is evil.
AnthonyWJones
I am often called in to straighten out big messes that were architected ostensibly with the objective of "performance".
Mike Dunlavey
@Mike: There has to be some understanding of volumes and response requirements before the app is developed. Such things have to be considered in the initial architecture. Of course specific performance choices need to be justified.
AnthonyWJones
@Mike, as I mentioned, it's all to do with context. I work in the geospatial domain, where the default complexity of many problems is O(n^3). In this arena, optimization is a must, and it has to happen at design time. Analysing underperforming code with a profiler is rarely helpful.
Shane MacLaughlin
+9  A: 

That most language proponents make a lot of noise.

Varun Mahajan
Controversial, and simultaneously axiomatic. Nice.
ChrisA
+17  A: 

Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.
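
A minimal C# sketch of the same idea (the original argument is about C and C++, where NDEBUG strips assert(), so treat this as an analogue rather than a translation): Debug.Assert is compiled out of builds without the DEBUG symbol, much like NDEBUG builds strip C asserts, whereas a hand-rolled, always-on check keeps the guarantee in release.

using System;

static class Check
{
    // Unlike Debug.Assert, this check is never compiled out, so a
    // release build fails fast instead of muddling on in a state the
    // developer documented as inconceivable.
    public static void Assert(bool condition, string message)
    {
        if (!condition)
            throw new InvalidOperationException("Assertion failed: " + message);
    }
}

Whether a failed check throws, logs and aborts, or pages someone is a policy decision; the point is that it still runs in front of your customers' data.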

fizzer
"has never been tested" You do pre-release testing with assertions on and accept the assertion being triggered as passing the test? Weird idea. If you do that than I agree with you but I don't understand why you are doing this.
duncan
No, I'm just assuming that a failed assertion during testing causes a build to be rejected. Therefore, if one happens in the wild, the program has necessarily entered a state outside of test coverage.
fizzer
If your assertions never failed during testing but one fails in production, there is a problem with your testing; nevertheless, the error should be logged and the application should end. Assertions, or code that serves the same purpose, should be in production. I agree.
David Rodríguez - dribeas
The problem is when evaluating the assertion costs enough to slow down your code. If it is not in a hot path, I totally agree: the asserts should always be on.
nosatalian
++ I've followed this path, in the spirit of "hope for the best - plan for the worst". We test to the very best of our ability, but never assume we have found *every* possible problem. Assert (or throwing an exception) is a way of guarding against doing further damage if a problem occurs (heaven forbid).
Mike Dunlavey
It depends. Software that controls pacemakers or nuclear power stations should not be written like that.
MarkJ
+22  A: 

Opinion: Frameworks and third-party components should only be used as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes to work. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as the choice is carefully thought out.

Kevin
+1 for the most stunning spelling of 'inevitably' ever.
ChrisA
I disagree. How many StringUtils classes do you have in your project? I once found a project that had 5 of them. Most of that stuff could be replaced by a third-party lib.
01
Frameworks, yes. Useless overhead, many times. Third party components, no! Portions of the task already completed, tested and debugged by thousands of other people!
skiphoppy
@skiphoppy -- I can't help it. I really am a roll-your-own type of guy at heart. I will fully admit that I might be jaded, as places I've worked at in the past tried to buy the absolute cheapest things possible. It bit us in the end.
Kevin
Joel in defence of not-invented here syndrome: http://www.joelonsoftware.com/articles/fog0000000007.html
MarkJ
+1 disagree completely :)
ykaganovich
+68  A: 

Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.
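
One classic C# illustration of how the two builds drift apart (a sketch; the queue example is hypothetical): Debug.Assert calls are compiled out of builds without the DEBUG symbol, so any side effect buried inside one silently disappears in release.

using System.Collections.Generic;
using System.Diagnostics;

class Example
{
    static string TakeNext(Queue<string> queue)
    {
        string item = null;
        // BUG: the entire Debug.Assert call is stripped from release
        // builds, so the Dequeue never runs there and this method
        // returns null in release while working perfectly in debug.
        Debug.Assert((item = queue.Dequeue()) != null, "queue was empty");
        return item;
    }
}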

Cameron MacFarland
I released something the week before last that I'd only tested in debug mode. Unfortunately, while it worked just fine in debug, with no complaints, it failed in release mode.
David Thornley
The only thing I vary between Debug/Release builds is the default logging level. Anything else always comes back to bite you.
devstuff
ummm - what about asserts? Do you either not use them, or do you leave them in the release build?
Daniel Paull
Again, I don't tend to use them. If you're asserting something in debug, shouldn't you have it fail in release too? Use an exception if it's critical, or don't use an assert (or don't care if the assert doesn't make it to release).
Cameron MacFarland
@Cameron MacFarland - a good point; code with assertions in Debug mode either ends up not handling the failure condition in Release mode, or with a second failure-handling path which only works in Release mode.
Graham Lee
It would be like writing two different applications: your debug version would be nicely debugged, and your release version wouldn't. Tragic!
Jeremy
@Daniel Paull, if there is something fishy it is often better to stop processing than to carry on with corrupt data.
tuinstoel
Agreed: Exceptions > Asserts.
postfuturist
Agree: there are some very nasty bugs in there that could be really detrimental to your rep!
Seventh Element
Hmmm. So, release code almost never gets tested, right? No offence Cameron, but remind me never to use any of your software
MarkJ
@MarkJ: That's what I'm saying, you should be testing the code that goes out the door, and not have a difference between "Release" that is not tested, and "Debug" that is tested, but never released.
Cameron MacFarland
James Curran
@James: Exceptions also bring the app crashing down. Also what happens when a user sees an assert error? Are they supposed to fix it?
Cameron MacFarland
All development and testing should be done on the release build, but a debug build should exist to assist in debugging. (Hello #ifdef!)
rpetrich
You just need to switch. Our QA uses debug builds during development but switches to release towards the end. There are certain levels of sanity checking that you would like performed as much as possible before shipping, but cannot afford to ship, for performance reasons.
nosatalian
+7  A: 

There is no such thing as Object-Oriented programming.

Apocalisp
The problem I have with that article is that it argues that OOP doesn't model the real world properly and so it doesn't exist. I agree that OOP is a poor real-world model but that doesn't mean it doesn't exist.
Cameron MacFarland
@Cameron MacFarland: That's not what the article argues at all. It argues that there's no distinction between "OOP" and other kinds of programming, other than a rhetorical one.
Apocalisp
Why is there no reference to ADT which I believe OOP was sprung from?
epatel
@Apocalisp: You're right, I only skimmed the article. Now that I've read it properly, he compared making distinctions between code styles with making distinctions about race by using the argument made by capitalist libertarians, who believe in things that lead to slavery and killing poor people.
Cameron MacFarland
See I told you it was controversial. Enough to draw an ad hominem with a non-sequitur and a straw man in a single sentence. I'm impressed.
Apocalisp
@Cameron, actually liberals are the one killing poor people by telling them that they don't need to be responsible for their life, they just need to do what 'superior' liberals tell them to do. Liberalism is all about emotional and intellectual ego.
Lance Roberts
@Apocalisp: You impress easy. "Valid concepts are arrived at by induction" completely ignores Kant's idea of a priori concept, which is what OOP and Smurfs would be considered. Restricting concepts to facts of reality is itself a straw man argument.
Cameron MacFarland
"It is a useless distinction, in exactly the same way that “race” is a useless distinction." - And nationality, religion, sex, occupation. They are all useless distinctions if you follow the logic of the Ayn Rand article.
Cameron MacFarland
@Cameron: You've hit the nail on the head. I'm deliberately and completely in defiance of Kant because his ideas are drivel. To think is to think about something.
Apocalisp
"Java is object-disoriented" -- me
Svante
Nice answer... "No such thing as OOP"... And it's easy to prove. Just look at the assembly generated from any C++ compiler. I don't see any OOP in there... :)
LarryF
There needs to be an Object-Action Oriented Language. Actions are not Objects. It makes me angry when I write a void to modify an Object. ARRRGH............................
WolfmanDragon
@epatel: perhaps because the idea that OOP was sprung from ADT is wrong. See "OOP vs ADTs" (http://www.cs.utexas.edu/~wcook/papers/OOPvsADT/CookOOPvsADT90.pdf) and "On Understanding Data Abstraction, Revisited" (http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf) by William R. Cook.
MaD70
+331  A: 

If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach for a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.
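
For reference, one possible shape of an answer in C# (a sketch, and certainly not the only acceptable one). Because the series alternates, the truncation error is bounded by the first omitted term, so you can stop once that term can no longer disturb the fifth decimal place:

using System;

static class InterviewAnswers
{
    static double EstimatePi()
    {
        double sum = 0.0;
        double sign = 1.0;
        int denominator = 1;
        // Stop when the next term (times 4) falls below half a unit
        // in the fifth decimal place, so it cannot change the result.
        while (4.0 / denominator > 0.000005)
        {
            sum += sign / denominator;
            sign = -sign;
            denominator += 2;
        }
        return 4.0 * sum;
    }

    // The second question really is a one-liner by comparison.
    static double CircleArea(double radius)
    {
        return Math.PI * radius * radius;
    }
}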

Greg Beech
calculating how many terms you need to guarantee that the 5th decimal place does not change any more is actually not that straightforward.
martinus
@martinus - I agree with you. The only part where I'd have to think answering to such question is the "accuracy of 5 decimal places" thing. I would probably hack it (perform a lot more calculations than needed and cut the result after 5th place :-) )
Abgan
@martinus - yeah I realise 'simple' isn't quite fair, I've updated the text to be more accurate.
Greg Beech
@PhoenixRedeemer - this isn't to test your maths, it's to see how you factor the solution (do you use recursion or looping? why? how do you work out what the exit condition is? how do you test for it?)
Greg Beech
2 words: Fizz Buzz.
Kibbee
Here's a solution to the problem, if you're interested: http://stackoverflow.com/questions/407518/code-golf-leibniz-formula-for-pi#407530
Greg Beech
You're wasting your time interviewing people who can't answer basic questions. Give them a written test; the test only needs to ask really, really easy questions. Don't even bother taking the time to read their resume until you've done the test. It's amazing how much time you'll save.
AnthonyWJones
Give them a real question, not a mathematical question. Job interviews make people nervous and stressed. These kinds of questions are a waste of time for everyone. Real questions for a real job. I like these questions, but if someone hiring me asked them, I wouldn't want the job. The hirer is not professional.
FerranB
I like that kind of question, because it shows you who you're dealing with. I'd not like to work with this guy. Good question.
01
For my very first job, I knew it was the place I wanted to work when I was asked "Can you tell me what a linked list is?" No one else asked me that question. Apparently nobody else answered that question for them, either! I stayed there for nine years. Never used linked lists there. :)
skiphoppy
@ PhoenixRedeemer: I don't think this is a math question at all. A programmer should be able to implement a simple formula like that. That doesn't test your math skills. Besides, a programmer is supposed to have some math background, so it shouldn't be so confusing even if you are nervous.
Marc
A recent interview process I was involved in asked candidates to invert the order of a string of words. Less than 10% of candidates could even start a pseudocode response. Frightening, and a response to the complaint about maths in the comment above. Very basic programming is rarer than it should be.
duncan
@FerranB - This isn't a mathematical question. Sure, it contains maths, but really it's a question of how you factor a solution (loops? recursion? functional?) and whether you can identify the patterns and implement it. Any senior developer should be able to do this. Also, working under stress is...
Greg Beech
(continued) sometimes part of the job. Sometimes you have deadlines and you have to implement complex code in a short space of time. That's just the way life is, especially working at a start-up. Not everyone enjoys that, but that's why you have interviews - to find out!
Greg Beech
Yes, I agree, in our dev team, they've hired people with the title "programmer analyst" and "software developer" who can't write code in any language, and have absolutely no development education. How the heck does THAT work?!
Jeremy
I'm unsure about asking people programming/math questions in an interview :-) I don't have an issue with a written test. That said, sometimes it is really cool to get someone who is excited about discussing such problems.
billybob
Even if this is controversial, it shouldn't be. I think it should be obvious.
luiscubal
@luiscubal - my thought is that you can advertise for people and not get enough to fill positions who have any demonstrable coding skill - so do you hire one of the 'non-coders' as a developer or leave the position unfilled? That question is where the controversy lies, IMO.
duncan
I don't buy this. Need a builder, do you ask them to build a little wall first? Looking for a school for your kid? do you ask them to 'teach' you something? No, You look at their achievements and qualifications. This guy hit the nail on the head. http://burningbird.net/technology/perfect-example/
Skittles
The "write function to calculate area of a circle" is a favourite question of mine when running interviews. I usually bring it out at the start, and if they flunk that, game over. I am constantly surprised how many "experienced programmers" CANNOT even do something this basic.
madlep
@Skittles - basic logic dictates that you need to prove your analogies are actually applicable.
SDX2000
+1 because I agree, but mostly because I cannot believe how many people disagree with this. You're developers for Jeebus sake, you're supposed to be able to write code, anywhere, at anytime, under any conditions.
Binary Worrier
math == coding (sort of)
pi
I was once asked to draw a house... everyone else in the room drew a cartoon, I drew a blueprint.
Chris
@Skittles: just looking at achievements isn't enough because it is _very_ easy to take credit for other people's work. Many people look good on paper and then can't even answer very simple questions. Plus, prior performance doesn't always predict future performance.
Greg Rogers
@Skittles, when getting somebody to renovate a house or build a wall, I would go and examine a wall they had already built, or check an accreditation with a known body. As most developers are not free to show off their previous work, and there are few places for accreditation, other alternatives are needed.
Greg Domjan
BS problem. To determine the accuracy of the algorithm you have to have an accurate value of Pi as a point of reference. My answer would be: use a constant, and don't be a pedant.
_ande_turner_
@Ande: you fail the interview. Not that I'd ever ask question 1, because I can accept some people suck at math, but a sequence that alternately adds then subtracts a decreasing number has a well-defined upper and lower bound.
Jimmy
The problem is that the question doesn't test coding ability, it tests ability to pseudocode on a whiteboard in a high-stress situation. Many competent people, especially introverted developers, will merely freeze in the spotlight at this point. Give them a computer and a quiet room instead.
nezroy
@nezroy - I've never seen any competent developer freeze. These aren't the only type of questions asked (most are just discussing technical issues) and the people who can't start these questions don't tend to do very well in those either.
Greg Beech
Also we always tell candidates, up front, that the interviews will involve coding on a whiteboard and/or a laptop. So nobody is going in there to be surprised. Experience shows that those who can't do it on a whiteboard can't do it on a laptop in a quiet room either.
Greg Beech
The point of the pi calculation is not that you expect a perfect high-performance solution, but that you get to see the programmer at work. You tolerate whiteboard mistakes, and you tolerate fiddling around. Somebody who comes up with a slightly flawed answer is a good bet.
David Thornley
If that were an interview question, I'd give the job to the person who responded with: return System.Math.PI;
Craz
Many of you aren't really understanding the point. It shouldn't be that hard to come up with *something* for the PI question. Even though it's right to use the constant, the intent of the question is to see that you can think, not to get the best answer. This should be made clear up front.
FryGuy
"I've never seen any competent developer freeze" - well abviously. You define "competent programmer" as not equal to "froze in interview"...
WOPR
@Kieran - The sentence after the one you quoted says "These aren't the only type of questions asked (most are just discussing technical issues) and the people who can't start these questions don't tend to do very well in those either." I can judge whether they are competent from these too.
Greg Beech
Jeff (CodingHorror) likes Fizz Buzz as a simple programming test http://www.codinghorror.com/blog/archives/000781.html
MarkJ
I've only ever worked as a web developer for a year, and I failed my university studies, but even I can answer that first question in a handful of lines of code. This is NOT a math question at all, the math is handed to you on a platter. This is out and out coding, implementing the math in your code
Matthew Scharley
I don't see why there are people who complain about having to solve a "math problem". You're given a formula, program it. It has zero math in it. Only thing that requires some thinking is the accuracy to five numbers. I wouldn't want someone who can't solve this little problem.
Carra
Math.round(Math.PI, 5); ;)
Fraser
javascript... function GetArea(circle) { return Math.PI * circle.radius * circle.radius; }
Tracker1
I guess as a math person, I recognize that as an alternating series. Thus when the absolute value of the next term is less than 1/100000 it will be within 5 decimal places.
rlbond
I don't understand how anyone interviewing for a developer job could fail at the "area of the circle" test. Can you please give an example of what you hear from inadequate candidates?
David
This is a great test! I was able to write it in about 10 minutes in C#, and I know that a lot of the people I've hired in the past wouldn't have been able to do it at all.
Kevin Laity
Edit: I'm referring to the PI question, not the area of a circle question of course. That one would take more like 6 seconds.
Kevin Laity
This is a great question because it tests more than basic coding ability. It tests the maturity of the job candidate. A good candidate will treat the whole accuracy thing lightly and ask why it's necessary. A bad candidate will get really worked up about it.
MrDatabase
How is that 'controversial'?
Chad Okere
I don't understand the purpose... Why do you need to calculate pi? Pi is (essentially) constant. WTF. If anyone writes a function longer than function() { return 3.14159; } they're wasting their time.
jason
@jason - It's to test your ability to think about a problem, break it down into its component parts, and write code that implements it. The subject matter is not important -- it's an interview question, not a real world requirement.
Greg Beech
"They're just questions, Leon."
quant_dev
I could understand not being able to write a fully working solution on a whiteboard off the top of their heads... but if they can't reason out the LOGIC to do it, that just makes me SAAAAAAAD!
Gabriel Hurley
@jason - you completely missed the point. You want to see if they can actually take a problem and develop a programmatic solution - if they can't do that, why would you let them take a list of design requirements and tell them to build a fully fledged business system?
Callum Rogers
return Math.PI; :P
Nick Bedford
That's exactly what I was thinking, Nick.
Kyle Rozendo
Sorry, but did you just say that they couldn't even work out the second question? Damn... I actually find that hard to believe. Perhaps I have too much faith in humanity :(
dstibbe
@dstibbe - Yep, I'm not kidding, unfortunately. I found it hard to believe too. It was actually quite awkward being in an interview with supposedly professional programmers who couldn't begin to answer even the most basic programming questions.
Greg Beech
Mine would just say print 3.14159. That's the algorithm.
corymathews
If you choose to just hardcode PI, you failed the test. One of the more important tasks of a developer is to correctly interpret the customer's demands. The customer in this case wasn't interested in a function returning PI. The interest was in seeing you produce a simple function following text instructions. Trying to be smart and bypass the actual construction of a function by hardcoding the answer fails to recognize that.
Marcus Andrén
Even if you don't know the Leibniz formula (like me), you could use a method like double doMagic(int n) and work on a solution for the rest.
Zappi
Ignoring the fact that the known value of the series is PI, suppose you were just given a series and asked to write a program to calculate its value. Now, the real question is whether the series is convergent or divergent, and how fast it converges. This can't be reliably determined by writing a program. Your winning candidate would tell you to hire a mathematician. Your super-winning candidate would then offer to put on his other hat and study the problem.
ddyer
`4 * (1 - 1/3 + 1/5 - 1/7) = not even close to PI` so if this was your question, they probably couldn't answer it because it doesn't add up
Russ Bradberry
What is so complex about q1? Assuming you only need to run the loop 5 times: double pi() { double ret = 0; int denom = 1; for (int i = 0; i < 5; i++) { double a = 1.0/denom; denom += 2; double b = 1.0/denom; denom += 2; ret = ret + (a - b); } return (4 * ret); }
tgandrews
Both of these questions are better than any "real world" problems that one could use as both require minimal background knowledge and have very precise requirements. Anyone who struggles with translating these into even pseudo-code can't be expected to do any better when dealing with real work.
lins314159
+1. I think the code (and the thinking behind it) is perfectly achievable in a job interview. OK, everyone can have a bad day, get nervous and miss the point, and that's why I don't believe in asking for code and checking it with a compiler; I think it's important to see the thinking process. Anyway, I've done a few "code tests" in several job interviews which went well and then didn't get the job, so I think that must not be as uncommon as it seems.
Khelben
Anybody who asked me to write a function to calculate pi to 5 decimal places, would see me say "Thanks for your time" and leave. But if they asked me to write code that will calculate pi to 200 decimal places, then they've asked me something interesting. It is worth calculating pi from scratch, but not when you're interested in less precision than a pocket calculator, or what we have stored in our heads already.
Warren P
I coded a solution in C to the above PI question easily, but got what I thought was a strange answer. It results in a poor approximation of PI; there are better approximation algorithms than that.
Gary Willoughby
My favorite language is C#, so my answer would be: Double GetStupidQuestionAnswer() { return Math.Round(Math.PI, 5); } If this thread wasn't so long I would post this most controversial programming opinion: a good programmer doesn't need the math background to re-implement well-known algorithms.
@jason: Oh dear. It might shock you, but when you got tested on the multiplication table in school, *it wasn't because the teachers wanted to know what 5 * 7 was.*
j_random_hacker
Man... those 2 questions would drive me out of the interview room! When a potential employer begins by asking me simple academic questions that have no bearing on real-life programming problems, my instincts tell me that the team being assembled will have problems with project delivery.
code4life
@code4life - Your instincts would be wrong then. In the last three years we've had 16 major releases and about 25 interim releases, and have hit the go-live date for all but one, which we slipped by a week due to unforeseen circumstances. I'd say that's a pretty good track record for project delivery.
Greg Beech
@Greg, I guess birds of a feather flock together... In any case it's just not my cup of tea. And certainly not the sort of questions I'd ask at an interview.
code4life
@j_random_hacker: Then you missed the point of my comment. A decent developer will realize that calculating a known constant is completely ridiculous, and should demonstrate that knowledge even in an abstracted interview question such as this. Part of being a good programmer is challenging even the fundamentals of the problem you're presented, to see if you can accomplish your goal in a more graceful and efficient manner.
jason
@jason: Sure, as an interviewer I would give a couple of points for mentioning that this is something that you would never do in practice -- I would acknowledge the truth of that, and then ask them to solve it anyway, *as an exercise to show that they have some basic skills*. If at that point they still don't want to try their hand, then they have either attitudinal or cognitive problems, and I'd show them the door.
j_random_hacker
I would vote you up, but you have 314 points...
Douglas
@jason, Programmers need to be able to deal with abstract problems, impractical demands, and underdeveloped technologies. Good programmers can handle theoretical problems, be diplomatic about suggesting alternate solutions, and be able to recognize when a situation is symbolic of something else (solving this problem symbolizes ability, not the need to solve the problem).
Kimball Robinson
whats a radius?
Uncle
You will not be able to explain why a given (or your own) solution to this problem is correct without mathematically analyzing it, because otherwise you will not be able to explain why your solution is accurate to 5 decimal places. It is **not** enough to just assume that if your 5th digit does not change anymore after *n* steps, you have reached it. (For whatever *n*; probably people would have stopped with *n=1*.)
Albert
+68  A: 

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?
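
A small C# sketch of the point (illustrative only; the function is hypothetical): with explicit declaration the typo described in the comment is caught at compile time, whereas an implicit-declaration language would silently create a second variable and leave you a runtime bug to hunt down.

static class DeclarationDemo
{
    static int CountItems(int[] items)
    {
        int total = 0;
        foreach (int i in items)
        {
            total = total + 1;
            // A typo such as "totl = total + 1" would not compile here;
            // in an implicit-declaration language it would silently
            // create a brand-new variable, the real count would never
            // update, and the bug would only surface at runtime.
        }
        return total;
    }
}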

John Booty
What's your view on whether the *type* of the variable should be explicit or not? (Thinking of "var" in C#.)
Jon Skeet
Good one. If you have to work with legacy Fortran code, you wouldn't believe the headaches caused by this issue.
Mike Dunlavey
I actually wanted to write this same opinion, as well. IMHO, this is the major drawback of both Python and Ruby, for no good reason at all. Perl at least offers `use strict`.
Konrad Rudolph
Explicit declaration is good, to avoid typos. Assigning types to variables is frequently premature optimization.
David Thornley
Yup. *ONE* bug hunt involving an l (between k and m) becoming a 1 (between 0 and 2) cost more time than a lifetime of declaring variables.
Loren Pechtel
Anything else is not a real language. Now THAT'S controversial.
Andrei Taranchenko
Controversial... but true!!
MarkJ
I remember learning Visual Basic 6 in high school. If OPTION EXPLICIT was not the first line in each source file, we would fail.
rlbond
+1  A: 

That (at least during initial design) every database table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as the primary key and as foreign keys in other dependent tables, some column (attribute) or subset of the table's attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?

Charles Bretana
+4  A: 

Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.
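
A minimal C# sketch of what this might look like (the Account class and event are illustrative, not from any particular library):

using System;

class Account
{
    private decimal balance;

    // Raised after every mutation, so outside code can observe changes.
    public event EventHandler BalanceChanged;

    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (balance == value) return;
            balance = value;
            BalanceChanged?.Invoke(this, EventArgs.Empty);
        }
    }
}

Writing this plumbing for every mutable property is exactly the extra effort that may tip you towards immutability instead.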

Alexey Romanov
+367  A: 

Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate when I see sloppy, free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN conditions?

And check it into source control
Cameron MacFarland
sqlinform.com is your friend.
Christopher Mahan
It depends. There are an awful lot of idiots who set hard-to-maintain styles in SQL. It is one thing to set up indentation standards; it is another to make all the name = value or column AS name pairs line up in order (because if you add something long you often have to re-indent).
Cervo
Amen! I have found that if I ever have to update someone else's code and their formatting sucks, I have to reformat the code to make it readable before I can make my changes.
Jeremy
A lot of XML should be treated as code too. Whether this is XSLT or plugin.xml/web.xml configuration, it is not just data, it's wiring.
jamesh
I do scream! And, @Cameron: bravo, my friend.
JasonFruit
@Christopher Mahan hey, thanks, didn't know about that. You'd think after all these years I'd know to google for a solution to a recurring problem like that.
Manos Dilaverakis
++ to this and especially the source control bit. I am constantly flabbergasted that people don't 'get it'.
kpollock
I've also heard "There's no code change required. We just need to tweak the SQL"!
LaJmOn
People still write SQL? j/k :P
Lusid
One of my ex-bosses didn't treat PHP like code, and built up a string with a single assignment that was around 100 lines long. Same for indentation. But Delphi he could at least format in a readable manner :S
phresnel
"Code or Code"? Did you mean "Schema or Code" perhaps?
Adam Nofsinger
This is a great answer. SQL and DDL should both be treated as code.
Kwang Mark Eleven
Very good point
dimus
Right - it's code. It's just that it's Bad code.
Ladlestein
+1 and I'll add the corollary that it should be tested (in some manner, via integration or Fit tests).
Michael Easter
Yeah! And don't type it all in CAPS either!
Chris Needham
Readable to you is not readable to me. I think your formatting is not readable. Get over it.
Joe Philllips
Agreed 100x over. Don't be afraid to use stored procedures where possible, either.
baultista
And release it like code. Check it into source control, and manage its release just like you would any code. Do not change it directly in production!
Mike Miller
+3  A: 

MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an O'Reilly book about the most prominent Java web framework ever; furthermore I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  // The controller mediates: applying the request input to the model
  // yields a model-to-view transfer object (the "mto")...
  $controller->setMto($controller->applyInputToModel());
  // ...which then applies the model's state to the view for rendering.
  $controller->mto->applyModelToView();
}
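
For readers who don't speak PHP, a rough C# analogue of the same mediator idea (the interface and member names are guesses at the framework's intent, not its actual API):

// Hypothetical types sketching the page-controller mediation above.
interface IModelToView
{
    void ApplyModelToView();
}

interface IBareBonesController
{
    IModelToView ApplyInputToModel();  // map request input onto the model
    IModelToView Mto { get; set; }     // the resulting model-to-view object
}

static class Dispatcher
{
    public static void SendResponse(IBareBonesController controller)
    {
        controller.Mto = controller.ApplyInputToModel();
        controller.Mto.ApplyModelToView();
    }
}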
George Jempty
Your bio is scary - all washed up at 20! Here is my own anti-MVC screed. http://stackoverflow.com/questions/371898/how-does-differential-execution-work
Mike Dunlavey
+59  A: 

Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks guaranteed that people were not overwriting someone else's changes when they checked in. The problem was that, in order to get any work done, if a file was locked devs would just change their local copy to writable, then merge (or overwrite) the source-controlled version with their own when they got the chance.

Cameron MacFarland
I've always local-mirrored the code. Then I would do the merging with Windiff and an emacs-macro, then lock it only long enough to check in the changes. I hated it when people would lock a file, then go on vacation.
Mike Dunlavey
I used to think that it was impossible to work in a team without file locks in your SCM. But after working with Subversion in four companies (and rolling it out myself in two of them), I find merging (auto when possible, manual when not) much better 99% of the time.
dj_segfault
Not controversial. Nobody used SourceSafe by choice.
MusiGenesis
@MusiGenesis: Yes they do. They exist.
Cameron MacFarland
My company is still using SourceSafe. The main reasons are a) General inertia and b) The devs are scared of the idea of working without exclusive locks.
T.E.D.
My personal feeling is that the ability to merge code files should be a skill all programmers need, like all programmers need to know how to compile their code. It's part of what we do as a byproduct of using source control.
Cameron MacFarland
@MusiGenesis: I've headed a move away from SourceSafe in two different companies over the last 5 years, and in both cases the reason for using SourceSafe was ignorance of the alternatives.
scraimer
SourceSafe doesn't even work on anything based on IIS7. So soon enough it's going to be pretty much redundant.
Ed Woodcock
Just to be pedantic...while exclusive locks were the default until recently, SourceSafe has actually supported edit-merge-commit mode since 1998.
Richard Berg
@Richard: Yes but nobody who uses Source Unsafe uses it in Merge mode because they're afraid to, etc.
Cameron MacFarland
worked very well for many years for us.
peterchen
MKS baby! Finally just killing it off now.
TJ
I would never want to put my precious source in something notorious for corrupting files. Had to use it once due to a lack of alternatives, got burnt.
Oorang
@MusiGenesis we do at my work place, but I don't particularly enjoy it. I'm much happier with SVN.
baultista
+5  A: 

Arrays should by default be 1-based rather than 0-based. This is not necessarily the case with system implementation languages, but languages like Java swallowed more C oddities than they should have. "Element 1" should be the first element, not the second, to avoid confusion.

Computer science is not software development. You wouldn't hire an engineer who studied only physics, after all.

Learn as much mathematics as is feasible. You won't use most of it, but you need to be able to think that way to be good at software.

The single best programming language yet standardized is Common Lisp, even if it is verbose and has zero-based arrays. That comes largely from being designed as a way to write computations, rather than as an abstraction of a von Neumann machine.

At least 90% of all comparative criticism of programming languages can be reduced to "Language A has feature C, and I don't know how to do C or something equivalent in Language B, so Language A is better."

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.

David Thornley
Your last sentence is +1. The rest is IMHO wrong, because zero-based indices are very useful: they make the indices of a container of size N the set of integers in the half-open interval [0, N[. This has some nice mathematical/algorithmic/practical consequences.
Konrad Rudolph
Personally, I haven't seen as much use for the half-open intervals as you have. If you could leave a pointer in a comment, I'd be interested.
David Thornley
+1 because A) I disagree with paragraph 1, so I guess it answers the question, and, 2) I like the other paragraphs :)
Mike Dunlavey
Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.- Stan Kelly-Bootle
Gavin Miller
Yup, +1 for your final sentence.
Graham Lee
+1 for your comment about Common Lisp
Technical Bard
+1 for learning math, -1 for saying Lisp is best (it takes more than parentheses to make a good language)
Lance Roberts
in smalltalk arrays start with 1
nes1983
It's just a convention and it doesn't matter.
Seventh Element
Can't agree with the 1-based arrays, either. Would make add/remove elements much more complex (because you'd have to rebase your indexes during the operation). I'd opt for -1 being the last element in an array, though :)
Aaron Digulla
What's the difference between 0-based and 1-based arrays for add/remove? Python's notation using negative numbers for measuring from the end is kinda neat.
David Thornley
+56  A: 

All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehensions, and .NET has LINQ (see the sketch below). In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow assignment to a string variable at all? Much better to use a builder class (e.g. a StringBuilder) all along.

I realize that most languages today just aren't built to acquiesce in my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they would be changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
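
To make the accumulator point concrete, a small C# sketch (illustrative only): the loop version reassigns a mutable accumulator on every pass, while the LINQ version needs no mutable state at all.

using System.Linq;

static class AccumulatorDemo
{
    static int SumWithAccumulator(int[] values)
    {
        int total = 0;             // mutable accumulator
        foreach (int v in values)
            total += v;            // reassigned on every iteration
        return total;
    }

    static int SumWithLinq(int[] values)
    {
        return values.Sum();       // no mutable state in sight
    }
}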

Konrad Rudolph
Most functional languages are just like this; for example F# explicitly requires you to declare something as "mutable" if you want to be able to change it.
Greg Beech
Functional languages are just superior that way. Of the non-functional languages, Nemerle seems to be the only one offering this feature.
Konrad Rudolph
I like the bit in SICP where the authors dismiss 'looping constructs such as do, repeat, until, for, and while' as a language defect.
fizzer
Disagree but made me think. Interesting.
Steve B.
I personally like this. "Everything is immutable" makes multithreaded code a lot easier to write: locks are no longer needed since you never have to worry about another thread changing your object under your feet, so a whole class of errors related to race-conditions and deadlocking cease to exist.
Juliet
There's no such thing as a free lunch. Immutability, despite its many benefits, will have a cost. Generally I like the idea, in the same way I like the idea of functional programming. Can I get my head round that? No. Am I particularly thick? Maybe, but I don't think so.
AnthonyWJones
@AnthonyWJones: what costs does immutable-by-default have?
Juliet
This makes me wonder what my code would be like and how I would need to change my understanding of programming paradigms. Could I deal with immutable variables? I can't begin to grasp the extent of the repercussions of doing this in C#, but I can't imagine anything good coming of it.
BenAlabaster
The thing I don't like about immutability is the amount of copying required.
TraumaPony
I thought this was too much when I read it in Effective Java: favor immutability. Then, when applied, it makes total sense. Apps are MUCH easier to create and maintain using immutability. The only extra thing needed is a macro template to "code" the copy methods, just as TraumaPony pointed out.
OscarRyz
Language constructs can't take care of all accumulator cases. Sometimes what you are adding up isn't a simple list. It also could make hairy logic in some cases as you can't have a default value.
Loren Pechtel
@TraumaPony: The nice thing about immutability is that in (almost?) all cases copying can be replaced by simple aliasing. This *does* require some changes in data structures, though.
Konrad Rudolph
Another case that can't be immutable: Any sort of iterative calculation or calculation within a loop. More generally, the data you are working on. How well would Microsoft Immutable Word sell??
Loren Pechtel
@Princess: immutable-by-default has a comprehension cost. It's much more difficult to think about (not reason about, think about) immutable-by-default objects/variables/what-have-you.
Jeff Hubbard
I agree that variables should be readonly whenever possible. It lets the compiler optimize and it lets the developer know the value never changes after a certain point.
Jeremy
@Loren: about your “other case”: how is that different from a special accumulator? It is actually just that, and well covered by many frameworks, such as LINQ. Notice that any kind of user interaction rarely benefits from immutability so Immutable Word is probably not a good idea.
Konrad Rudolph
@Jeff: I think this is *at least* debatable. Programming in general has a comprehension cost, any style of programming does. But I doubt that immutable-by-default incurs *any* additional comprehension cost at all, especially since it's much closer to the mathematical use of variables in equations.
Konrad Rudolph
@Loren Pectel, I think that databases should be immutable too.
tuinstoel
There's an obvious cost in complexifying and slowing down the code, to a huge degree. This idea must have been thought of by those who don't have to do too much math programming.
Lance Roberts
@Lance, The opposite is true. Immutability actually helps the compiler a great deal in producing *more efficient* code because it can apply many more automated optimizations. This style of coding works perfectly with “math programming” (I guess you mean arithmetically dense code).
Konrad Rudolph
I want an immutable apple. When I take a bite of the apple I get your apple with the bite taken out of it, and can give my apple to the next person who wants a whole apple. It's all so simple!
Greg Domjan
@Greg, Things always change, we developers are the orchestrators and conductors of this change, because we change and shape the future with our ideas and our code. That's the reason we want immutability!
tuinstoel
Yes, and we'll only access read-only databases, stored on read-only media. Maybe once our programs have no mutable state, and therefore accomplish nothing we can move on to truly pure functional programming where nothing happens and the compiler with the best optimization outputs nothing.
postfuturist
Might be a little hard to animate anything if the variables describing the object to animate were immutable.
Kamil Szot
@Kamil: no, not at all. In fact, `Point` objects in .NET *are* immutable, and animate just fine. You just need to create a new object for each animation position – which *sounds* inefficient but really isn’t necessarily.
Konrad Rudolph
Interestingly, in Java even loop variables can be final: for (final item : list) { ... } Took me a while to discover that.
hstoerr
He's not saying that all variables should be final, he's saying all variables should be final *by default*. That's reasonable.
Craig P. Motlin
+253  A: 

Unit Testing won't help you write good code

The only reason to have unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests, is ridiculous. If you write the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep coupling low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.
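
For what it's worth, the one use this answer does endorse looks something like the following NUnit-style regression test (a sketch; PriceCalculator and its discount rule are hypothetical, invented for illustration): it pins down behavior that already works so later changes can't silently break it.

using NUnit.Framework;

static class PriceCalculator
{
    // Hypothetical business rule: 10% off for quantities of 10 or more.
    public static decimal Total(int quantity, decimal unitPrice)
    {
        decimal total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.9m : total;
    }
}

[TestFixture]
class PriceCalculatorRegressionTests
{
    // Guards existing, known-good behavior against future changes.
    [Test]
    public void BulkDiscountStillApplies()
    {
        Assert.AreEqual(90m, PriceCalculator.Total(quantity: 10, unitPrice: 10m));
    }
}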

Chad Okere
+1 - I think your further generalization sums up my opinion very well
Greg Beech
Although I agree with your second statement (the first I'm not sure about), who judges who the good developers are? Many of the smartest programmers I know will often make dumb mistakes out of pure arrogance, because they believe themselves to be such good developers.
glenatron
If you don't know what the edge cases are, perhaps you don't understand the problem you're trying to solve.
Barry Brown
I'd go so far as to say all programmers are bad developers on some days, so they're there to minimize the damage you can do to yourself. I agree with you about Unit Tests, in a lot of situations the cost of maintaining Unit Tests gets really high compared to the benefits.
mweiss
I have to point out that NOT Unit Testing won't help you write good code, either. Writing the tests first does force you to think differently about your API, which can arguably make your code better. If you don't know what tests to write, then you don't know what code to write either.
Bill the Lizard
It can help you write good code... if you write excellent automated unit tests, it helps you to decouple your code so that it can be tested, and thus it becomes more reusable.
Jesse Pepper
@Barry: Some edge cases are inherent in the problem, others are artifacts of the implementation. A test written before the code will only be able to handle the first type.
Dave Sherohman
IMO people who believe that best practices don't apply to them because they're better than everyone else are actually likely to be the worst programmers around.
Michael Borgwardt
No it won't, but it will help you maintain the quality of already good code when you need to modify it later.
Dan
unit tests are invaluable for regression testing - e.g. to make sure your refactoring change didn't break anything *else*
kpollock
I think Kent Beck's "TDD by Example" would convince you that writing tests first is not ridiculous at all (albeit, as for any practice, there is no need to always follow it slavishly).
Fabian Steeg
Agreed, but I think most "best practices" in software engineering are actually there to ensure job security for the engineers.
MusiGenesis
-1 Anyone who thinks they're too good a developer to need to test hasn't progressed very far. Need to be humble to be good.
MarkJ
I think you missed the point. Unit tests for libraries serve as the most concise and correct documentation for the library in existence. Treat them as documentation - because that's what they are.
Thorbjørn Ravn Andersen
the easiest way to keep bugs from reappearing is through tests. If something breaks, write a unit test that fails, THEN fix the code so that the unit test no longer fails. If the test ever fails in the future, then someone reintroduced the bug.
Laplie
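A minimal sketch of the workflow Laplie describes, in JUnit (the class, method, and bug here are all hypothetical):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {
    // Step 1: pin the reported bug down with a failing test (here, a
    // hypothetical bug where zero-quantity orders were charged the unit price).
    @Test
    public void zeroQuantityCostsNothing() {
        assertEquals(0, PriceCalculator.totalCents(0, 499));
    }
    // Step 2: fix PriceCalculator until the test passes.
    // Step 3: if the bug is ever reintroduced, this test fails again.
}

class PriceCalculator {
    static int totalCents(int quantity, int unitPriceCents) {
        return quantity * unitPriceCents; // the fixed implementation
    }
}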
Oh yeah sure the BAD developers need to have their hands held, but us GOOD developers know better, right? We don't need no stinkin' tests, our code is totally SOLID. Like a rock. SRP, OCP, DIP, you name it, us GOOD developers nail it first time, every time... Gimme a break, this makes me sick. -1
Paul Batum
I've tinkered with TDD, and found it to be a good way of developing. I usually write a test for the expected route ("the happy path") first, followed by a test per corner case. At the end, I have a piece of code in which I have a greater confidence, and it usually leaves me with a smile.
Kaz Dragon
+1 for the boldface statement.
flodin
Wow, couldn't disagree more! But not going to downvote you, as it's a valid opinion. But I think writing tests first makes you consider how your API will be consumed, so it's more 'client-first' development.
Travis
If your unit testing doesn't cover edge cases, you haven't written your tests thoroughly enough. If you introduce further edge cases into the software because of your coding technique, it's a good sign you haven't thought about the problem enough yet - something unit tests help you do.
notJim
Unit test cases are one aid in verifying that the code implements the functionality. Of course you can't do white box testing before writing the code, but it is perfectly useful and desirable to determine positive and negative test cases for spec functionality.
Kwang Mark Eleven
@Thorbjørn Ravn Andersen - I agree, but most people are app programmers rather than library programmers.
user9876
Really disagree. Unit tests act as documentation. Writing code with mocking/unit testing in mind forces you to decouple and inject in your dependencies; it makes your code implicitly more reusable as you are writing the code with two uses in mind (its primary function and the test). Unit tests *should* cover the edge cases, because you should be adding more unit tests as you are writing the code and spotting the edge cases (yes - right there and then as you key the edge case in). Unit tests help prevent regressions, etc, etc. I can't help but feel that you've missed the point somewhat.
Rob Levine
-1 for attempting to turn developers off a valuable technique with which you clearly have little experience.
TrueWill
I have ended up with better APIs in production code as a result of writing unit tests, since the code needed to be restructured to allow better testability.
Matthew Wilson
I agree that this is controversial - but can't agree with the statement (which is why I agree it is controversial). TDD can really help if you do it right. The problem with most methodologies, though, is that people DON'T ever do them right and then dismiss them. Hands up if you've been on a Scrum project where the business and testers weren't actually involved!!!
Sohnee
If you don't know the edge cases before you write the code (to put them into the tests) how the *** are you going to write the code????
amischiefr
Unit Testing is about quality assurance. It's there to make sure your code works and fails as you would expect it to. And then when you modify it later. And then when someone else modifies it later. Used correctly it does improve the quality of the code in the development phase which is the cheapest place to catch and fix bugs.
Swanny
+1. "Unit tests as documentation" is poor man's DbC. Other uses are very much overrated. And the unfortunately common mentality of "design to limitations of my favorite testing framework" (like interfaces and factories for everything, or all members virtual, just so they can be mocked) results in ugly APIs and overcomplicated code.
Pavel Minaev
"good developers will keep cohesion low" don't you mean good developers will keep cohesion high? High cohesion means you likely have small single purpose classes.
ceretullis
I think good developers are the ones who consider writing unit tests, and try it out, and then find out that (a) it helps, or (b) it doesn't, and are capable of finding some other way to help their process stay under control if unit tests are not working for them. "They are always worth it" is a lie. "They are never worth it" is also a lie. They are almost always worth it, in my opinion. But that's not always true. I would say "good developers" should show me an equivalent way of finding regressions that is automatic and can be used in a smoke-testing environment. And then I'll listen to them.
Warren P
Disagree, by writing unit tests you are forced into evaluating the way that a given method will actually work and therefore have to consider ways in which it could be broken by a programmer. This in turn may make you re-evaluate the method signature or return type etc, leading to better code, more sustainable and thought out code. It is then, of course, available for making sure that you don't introduce bugs later etc!
Gary Paluk
unit tests makes you see your interfaces from the consumer's perspective, which will help you make it simpler and more coherent for others to use. Unit tests are also awesome ways to experiment with new features and ease them into your codebase. Not just for regression testing and debugging.
burkestar
I wish I could upvote this a million times.
benjy
I would generalise your rule even further: **Most rules in life are designed to stop stupid people from doing stupid things** -> The "very good" people follow the rules to the tiniest detail, the brilliant people throw away the rule book and do things their own way. Note: I am not saying that rules are bad, but they are by their nature way too rigid
Nico Burns
+9  A: 

My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid long switch statements, and to discourage others from using them, because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements are compiled automagically to jump tables (hash-based lookups, in the case of strings), so using them is actually the Best Thing To Do™ performance-wise when you need simple branching to multiple targets. Also, if the case statements are organized and grouped intelligently (for example, in alphabetical order), they are not unmanageable at all.
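The same holds in Java, where dense int cases compile to a single tableswitch instruction - a jump table - and sparse ones to a lookupswitch. A minimal sketch (the opcode numbering is hypothetical):

public class Dispatcher {
    // Six dense, alphabetically grouped cases: javac emits one tableswitch,
    // so dispatch cost stays constant no matter how long the list grows.
    static String mnemonic(int opcode) {
        switch (opcode) {
            case 0:  return "ADD";
            case 1:  return "DIV";
            case 2:  return "LOAD";
            case 3:  return "MUL";
            case 4:  return "STORE";
            case 5:  return "SUB";
            default: return "UNKNOWN";
        }
    }
}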

DrJokepu
Define long. I've seen a 13,000 line switch statement (admittedly it was C++ but still...)
Cameron MacFarland
Well, (in c#) if the switch statement is generated (as opposed to manually edited), I see nothing wrong with a 13K line switch statement to be honest. It's going to end up as a hashtable anyway.
DrJokepu
Of course, if it has 13K lines because there is loads of code in each "case" clause, that's totally different. It should be refactored then.
DrJokepu
Ever wondered why there is no "switch" statement in python?
Christopher Mahan
Actually, I do. Was it either that or if - and replacing all ifs with switches would have been a bit too verbose, even for Python?
JB
What I want a compiler to do is generate good assembly code for me, and switch is how I tell it I want a jump table. That said, it's easy to think you're doing things for "performance" reasons when in fact you'll never notice the difference.
Mike Dunlavey
@Mike: if you have a switch statement with thousands of cases, you _will_ notice the performance difference between a jump table and a series of if-else statements.
DrJokepu
How can you have thousands of cases? I can't imagine it, do you have an example?
tuinstoel
@tuinstoel: It's not that hard to imagine it if you try. Before the rise of floating point units, it was a common practice to keep trigonometric functions in lookup tables. I think that keeping the results of complex math functions in premade lookup tables still makes sense today.
DrJokepu
Great answer. Agree completely.
Jonathan C Dickinson
+180  A: 

Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

Greg Beech
I would say that about money: money is just a means to enhance your life; don't let it get in the way of enjoying your life. This is getting way off topic ...
hasen j
It can be a passion for some people too!
SDX2000
I wish I could vote this up a million times. Moderators, is there a way I could transfer all my reputation points to this chap? Is that allowed, Jeff Atwood?
Vulcan Eager
Tell musicians their music is just a job.
icelava
You are great. Best answer! I've lost my girlfriend to "deep programming" and I will never forgive myself.
ugasoft
I agree, but I believe it can be more than just a job and still rank below family, friends, and happiness. I think that programming is just a job for you, but not for all.
brad
Well, "just a job" is for clock-watchers. Since we spend so much time on it, might as well like it, eh?
Andrei Taranchenko
@icelava: I know some musicians (classical music, bass and violin) to whom it is exactly that: Just a job.
Treb
Programmers who overvalue programming overvalue themselves.
Seventh Element
I disagree... I would still be doing software development, if I didn't have to work for a living... Though, it would be on projects *I* want to work on.
Tracker1
@Seventh Element, I totally agree!!
xoxo
This is something that applies to you. You assume that it automatically applies to everyone. I consider friends and family very important indeed, but I consider doing what I was born to do just as important. I cannot under any circumstance neglect either of them.
Lucas Lindström
I didn't assume anything. It's my opinion. You're free to disagree.
Greg Beech
While a nice blanket statement that has some controversial "pop" to it, your assertion leaves no room for passion. How can *any* work done for money be more than just a job according to your statement? Your opinion devalues the passion of millions and diminishes the efforts of hobbyists who don't even get paid. You are certainly entitled to your opinion, but come on, the world is more complex than that.
Bryan Watts
@Bryan: If I disliked cheese and you liked cheese, saying that because I don't really love cheese my opinion devalues your love of that particular bovine product would seem absurd. Telling people that I don't want to eat cheese for every meal does not diminish your cheese blog, nor your love for cheese, nor the millions of other cheese aficionados. If I want to have crackers with only ham on them, that doesn't stop you at all from making nachos, quesadillas, or cheesecake, and then blogging about your cheese experience, whether it be cheddar, cream, provolone, American, or pepper jack. (Mmmm...)
Robert P
@Robert P what a silly thing to say! If you dislike cheese then why do you eat it on a daily basis? (This is what you are effectively saying...think about it.)
SDX2000
@icelava It's even more true for musicians, since many (if not most) musicians play other people's music rather than writing their own.
Brian Ortiz
This is a great controversial topic. Probably should be #1. Personally, I come here to make money, so that I can pay for my kids, and vacation, and food, and beer. I chose a profession that I am passionate about, but at the end of the day, I would rather be teaching HS Math and coaching HS Football. Now, if I could only get the same kind of salary doing those...
amischiefr
icelava, I'm both a performing musician and a software developer. While I enjoy software development and do take an interest in it outside work, it's very obvious to me that it's much closer to "just a job" than music, which is a passion, and which I do gladly for free. I think my comment, your comment, and the parent answer are all a matter of personal values—with consequences for our respective jobs, but otherwise without much room for any kind of controversy.
eyelidlessness
No, you're just a job.
freedrull
Software development is an *art*. At least to some. Which kind of programmer would you rather have on your team?
Loadmaster
It depends how much time you want to put into it outside of work. I like coming to StackOverflow to solve problems while learning something new, and I like reading up on the latest and greatest tools/technologies/techniques. I occasionally write code outside of work, but it's usually on one-off projects that I rarely finish.
baultista
Software development resulted in the internet and many other things that changed the world forever. Is that not important in the grand scheme of things?
Bart van Heukelom
It's not. I'd rather see software development as an art. Make it a passion and you will become the best.
Exa
+35  A: 

Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. the monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing my message pump object everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and then wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.
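A minimal sketch of the idea in Java (MessagePump and its send method are hypothetical stand-ins for the message pump described above):

public final class MessagePump {
    private static final MessagePump INSTANCE = new MessagePump();

    private MessagePump() { }              // nobody can "new one up" by accident

    public static MessagePump instance() { // uniqueness first, availability second
        return INSTANCE;
    }

    public void send(String destination, String message) {
        // route the message to the named node...
    }
}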

Steve
+1 because I disagree so strongly. Singletons (the design pattern) make testing such a nightmare they should never be used. Note that singletons (an object only instantiated once) are fine, but they should be passed in through dependency injection.
Craig P. Motlin
A logger is certainly not a perfect candidate for a singleton. You may want to have two loggers. I've been in that exact situation before. It may be a good candidate for being *global*, but certainly not for being forced into "one instance only". Very few things require that constraint.
jalf
The way I figure it, I've used some singletons in one project, and I might well do so again before I retire. Not the most widely usable of patterns, but valuable for some things.
David Thornley
I really recommend reading http://misko.hevery.com/2008/08/25/root-cause-of-singletons/ to you.
codethief
I would like to add that in C++, the singleton pattern is extremely important due to the static initialization fiasco.
rlbond
Logging is the only common use of the singleton pattern, all others uses are mostly bad.
Emmanuel Caradec
I have never found a case of a singleton that could not be replaced with a static, except in languages that do not have a proper static initialization order, which brings on the static initialization fiasco.
kurast
+9  A: 

Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite or Amazon SimpleDB or Google App Engine data storage.)

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.
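A trivial Java illustration of the point: once the data sits in the right structure, the "algorithm" all but disappears (PriceList and the SKU scheme are hypothetical):

import java.util.HashMap;
import java.util.Map;

public class PriceList {
    // Keyed the way it is queried, so lookup needs no search code at all.
    private final Map<String, Integer> centsBySku = new HashMap<String, Integer>();

    void add(String sku, int cents) { centsBySku.put(sku, cents); }

    Integer priceOf(String sku) { return centsBySku.get(sku); } // the whole "algorithm"
}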

Christopher Mahan
It depends on the rawness of your original data. If the data is accumulated by data entry in a UI, that is true. But if you do something like Text Mining, you need to process your data first, and algos become more important.
tuinstoel
tuinstoel: ok, but text mining is eminently parallelisable, so the algo should be ultra simple and then be run by a few hundred or a thousand processes. Image processing needs solid algos though.
Christopher Mahan
I would agree if you also mean that data should be kept as minimal and normalized as reasonable. I see far too much data structure whose ostensible purpose is "better performance" that causes the opposite.
Mike Dunlavey
+1 If I was speaking to an assembly of CS Freshmen my first advice would be to "Know Thou Data_Structures" Amen Brother.
WolfmanDragon
Brooks, in "The Mythical Man-Month", had a comment that he'd be confused if you hid your tables and showed him your flow charts, but if you showed him your tables he wouldn't need to see your flow charts. This should give you an idea of how old this idea is.
David Thornley
+11  A: 

Junior programmers should be assigned to doing object/module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.

kloucks
I will tell you from having dealt with entry-level and junior developers that they learn precisely nothing by performing "maintenance and bug fixes"; they never develop any skills. Letting juniors build an app from scratch teaches them an incredible amount in a short period of time.
Juliet
Quite so. Aptitude has very little to do with experience, which often just entrenches bad habits.
ChrisA
I would say the exact opposite. Let them write implementations of existing interfaces, that must pass existing unit tests. They will pick up some design skills just by working with the senior developer's designs for a few months.
finnw
Have to agree with finnw.
Software Monkey
@Juliet, absolute rubbish. When I was an entry-level developer I did maintenance and bug-fix work and learnt directly why consistency and separation of concerns are so essential in software. Maintaining code with "issues" is THE best way to improve your own designs.
Ash
i agree this is very controversial lol
Egg
Nothing teaches you the value of doing things the right way like the pain of doing things the wrong way and then having to live with the results.
Jeremy Friesner
+1  A: 

(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.
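A minimal Java sketch of the contrast (the version-string parsing is a hypothetical example):

public class Versions {
    // Tuple style: callers must remember that index 0 is the major version
    // and index 1 the minor - the meaning lives only in their heads.
    static Object[] asTuple(String s) {
        String[] parts = s.split("\\.");
        return new Object[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    // Named style: the meaning lives in the code itself.
    static final class Version {
        final int major, minor;
        Version(int major, int minor) { this.major = major; this.minor = minor; }
    }

    static Version asVersion(String s) {
        String[] parts = s.split("\\.");
        return new Version(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
    }
}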

Roy Peled
I'm glad you qualified this. Thank goodness for Python 2.6 adding [named tuples](http://docs.python.org/library/collections.html#collections.namedtuple).
bignose
Hey that's cool. I didn't know there was a such thing as a named tuple. I think for a tuple-perfect-storm you should design a GUI library in python that expects 2-tuples in x,y and y,x order in various places. :-)
Warren P
+5  A: 

Goto is OK! (is that controversial enough)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this but still.
Also, goto is the building block of Assembly language. No if statements for you! :)

Lucas Jones
bash has break n; and continue n; instead. imho the only reason to use goto is when you don't have those (or don't have labelled break/continue)
Johannes Schaub - litb
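For what it's worth, a minimal Java sketch of the labelled break litb mentions - the structured stand-in for the classic "goto out of nested loops" (the grid search is hypothetical):

public class GridSearch {
    static boolean contains(int[][] grid, int target) {
        boolean found = false;
        search:
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell == target) {
                    found = true;
                    break search;   // exits both loops in one jump
                }
            }
        }
        return found;
    }
}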
In assembly everything is implemented as goto (jump/branch). Most languages have if and some form of loop, but many are lacking try/catch or break/continue all of which can be implemented by the goto. Admittedly it can be used really badly so be careful :)
Cervo
I see headaches in making gotos in a language that is parsed while running.
Joshua
@Joshua, you mean interpreted languages? A language like Basic used to be a interpreted language and it certainly had the goto statement. How old are you?
tuinstoel
@Joshua, I'd say it was simpler. I wrote a simple interpreted language (by "simple", I mean "didn't really do anything at all" :D) which had goto. No conditions though.
Lucas Jones
and there are `cmp` statements (`if` statements) in Assembly - otherwise you'd never know when to `jmp`
warren
I suppose.... :)
Lucas Jones
+342  A: 

Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

Craig P. Motlin
I would temper this statement by replacing "readability" with "modifiability". I've seen entirely too much code that was made "readable" just by puffing it up with whitespace so you could see less of it, and being wordy instead of precise.
Mike Dunlavey
They certainly go hand in hand. And readability is subjective. Sounds like you think that whitespace made the code less readable. That's why a group standard is so important.
Craig P. Motlin
Agreed, this goes along with well-factored code and the earlier answer re: comments not being all that useful. In C# 3 I suspect that lots of clever one-line LINQ/lambda expressions are being written which are almost inscrutable and would be more readable in C# 2.
AnthonyWJones
Agreed. One line statements that do 16 things are a horror in the 80% of the life of the code spent in maintenance (especially when the maintenance is the duty of 'lesser' programmers). Write code that can be read by humans not just compilers.
duncan
I wouldn't say that this is overly controversial. Although "readability is more important than correctness" is extremely controversial :-) Your customers may have a different view than you on this :-)
billybob
I would vote this up if I didn't suspect that you are thinking of some One True Brace Style.
Svante
Why do people associate readability so strongly with whitespace? It's a part of it, but a small part.
Craig P. Motlin
If it doesn't run, it doesn't matter.
Lance Roberts
http://www.expatsoftware.com/articles/2007/06/getting-your-priorities-straight.html
Jason Kester
Maintainability > Readability. I can auto-reformat code to make it readable anytime.
thenonhacker
again, readability is not white-space. readability includes level-of-nesting, function length, cyclomatic complexity, variable names, and a bunch of other things.
Jimmy
I agree 100%. Unreadable code puts unnecessary strain on my gray cells.
moffdub
I would say code that works is more valuable than code that looks pretty.
Steve918
If the code is not correct, it is invalid. Code that is unreadable but works is always better than code that is readable but fails to do what it is supposed to do. That said, readable working code is much better than unreadable, non-working code.
Callum Rogers
Assuming you're working on a reasonably big team with typical code-flexibility needs, I agree. Code that's broken but *easy to change* is better than code that works but nobody understands. I'd say "maintainability" rather than "readability" though.
Iain Galloway
A: 

System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework, and it's all adding complexity. Use a nice code generator from somewhere to generate the data layer, and the Unit of Work sits on the object collections (DataTable and DataSet)--no mystery.

Mark A Johnson
You've obviously never used a DataSet then :P
Cameron MacFarland
I have to disagree. IMO the DataSet is overkill for the vast majority of operations. And before it's asked, yes, I have used it.
Mike Hofer
By the same reasoning, LINQ to SQL, Entity Framework, NHibernate, etc. are also overkill for the "vast majority" of operations. BTW, did you mean the "vast majority" of all operations or the "vast majority" of places where I'd use DDD?
Mark A Johnson
+10  A: 

Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.

Shawn Simon
I concur: you can't version stored procedures, and having 200+ stored procedures in a large project becomes a maintenance nightmare. Embedded SQL is ok for small projects, but I'd rather use an ORM to write my queries for me.
Juliet
Princess: I must disagree with your statement that you can't version stored procedures. I version them myself by keeping the SQL for them in source code control. If you make a change to the database, re-export the script for it and check it into the repository.
Mike Hofer
I agree about versioning stored procedures. If you are writing SP, you need to take it upon yourself to version them in source control.
casperOne
Out of *your* database? There speaks a 1970s DBA
ChrisA
We can version SPs. The build process moves them from source control into the database.
Joshua
In DB2/400 stored procedures are an interface to native code on the system... In other words, hard to move over to the calling system.
Thorbjørn Ravn Andersen
+76  A: 

I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.

Juliet
100% right. If only the Python developers would finally acknowledge this and change their otherwise exceptional language accordingly. Thanks for posting this.
Konrad Rudolph
But there is already one statically-typed Python-like language. It's called C# ;-)
zuber
C# is python-like? Maybe you meant Boo ;)
Juliet
If anyone says dynamic typing is more terse, just point them to Haskell =). I agree with all but your 3rd bullet point. Dynamic code often accepts parameters that can be one of two types. For example, Prototype functions accept either HTMLElements, or strings which you can use $() to look up to get HTMLElements. A good static typing system will allow you to do this =).
Claudiu
#2 is only true if you follow #1, which in my opinion is unnecessary. If it's clear what the code does, then it is correct. I have code I use a lot that reads in data from a tab-delimited file and parses that into an array of floats. Why do I need a different variable for each step of the process? The data (as the variable is called) is still the data in each step.
notJim
+70  A: 

Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but that doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely: "be consistent", sound as it is, is used as a crutch by many who never try to see whether their default style can be improved on - and who insist, furthermore, that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.

Phil Nash
Generally when things are aligned in a columnar way it creates a maintenance burden for a developer. I.e. aligning the data type and identifier in a method declaration... Line1(int id,) line 2(char id,) ... making sure the data type, variable name, and even commas all are in a column is a MESS
Cervo
it usually just takes a couple of extra keypresses, if that. I didn't go into too many specifics, but I usually only break it into two columns for alignment purposes (usually type - id). I have some other rules to ease the burden where parentheses are concerned. The biggest obstacle I have [cont...]
Phil Nash
[...cont] is fighting against auto-formatting editors. In fact, unless it's easy to disable I usually give up in those circumstances and "go with the flow". But with especially verbose languages like C++ I still prefer it.
Phil Nash
Interesting. I would like to see some examples. Do you have a blog?
Jay Bazuzi
Well, I have: http://www.levelofindirection.com (yes, it forwards to blogspot - the pun *was* intended), and also http://organic-programming.blogspot.com . However, you'll notice neither have been updated for quite a while - due in large part to http://www.vconqr.com ;-) [cont...]
Phil Nash
[...cont] - and I don't mention the layout stuff on either. I'll consider myself prodded - again!
Phil Nash
Code formatting matters so much, it doesn't matter at all. By that I mean that editors should always reformat code when you load it, and SCM systems should reformat to a canonical style on checkin. Then everyone sees the code the way that works best for them.
Kendall Helmstetter Gelner
@Kendall: Sounds nice. It's hard, though, because you have to be able to specify the exact formatting of every possible bit of code, including code that isn't legal in the language!
Jay Bazuzi
This is a pretty much standard opinion. Or, at least, it should be. If this is controversial, then there is a problem.
Eduardo León
+10  A: 

The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.

RoadWarrior
+9  A: 

How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.
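The point is easiest to see with non-memory resources. In Java, for instance, the collector will never close a database handle promptly on your behalf, so the cleanup stays manual; a minimal sketch (the JDBC URL and query are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Report {
    static int rowCount() throws SQLException {
        // try-with-resources closes the connection, statement and result set
        // deterministically; the GC would eventually reclaim the objects,
        // but not the database handles behind them in any timely way.
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement s = c.createStatement();
             ResultSet r = s.executeQuery("SELECT COUNT(*) FROM orders")) {
            r.next();
            return r.getInt(1);
        }
    }
}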

Nemanja Trifunovic
Would you mind justifying that?
Juliet
I've seen 50 MB leaked because some library programmer hooked an event and didn't make absolutely sure to unhook it.
Joshua
now imagine you have 8gb ram
01
8gb RAM is nothing to a repetitive leak on a server under high load.
Kendall Helmstetter Gelner
I guess it refers to the RAII idiom. In that case I must agree with the proposal. RAII is a solution for all resources; GC is a partial solution for memory resources only.
David Rodríguez - dribeas
+1 to that. Before GC, programmers took care of leaks before deployment. These days, applications are deployed and then, when 100 users are using the application, we discover that we've run out of database connections.
Vulcan Eager
Anyone who expects garbage collection to handle all resource management has desperately misunderstood garbage collection. GC is only for managing *memory*
benjismith
I'd give a +1 if you had said: "GC, because it's not available for all resources; only memory. So you can leak DB connections." GC has solved 100 issues and introduced 20 new ones, so it's still an advantage.
Aaron Digulla
Which "100 issues"? It has solved only one - memory management, and IMHO even that poorly.
Nemanja Trifunovic
Wait, memory management needed to be solved?
GMan
+12  A: 

SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.

Kluge
+1 for the abomination. Anything that's harder to read than write has got to be wrong.
ChrisA
This is a statement that two things that had been around for a long time, and have been heavily used, would be much better done if they'd known then what we know now. That is much closer to being a tautology than a controversy.
David Thornley
html layout is a lot easier than assembling widgets in C++
hasen j
+6  A: 

Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (my "real" programming), PHP-type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.
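For comparison, a framework-free sketch of the two styles (MessagePump is hypothetical) - with DI, one shared instance is still constructed at the top and handed down, which is the point being made here:

class MessagePump {
    void send(String msg) { /* ... */ }
}

// Global style: anyone can reach the one instance directly.
class Globals {
    static final MessagePump PUMP = new MessagePump();
}

// DI style: the one instance is built at the composition root and passed
// in - the same single shared object, just with different plumbing.
class Worker {
    private final MessagePump pump;
    Worker(MessagePump pump) { this.pump = pump; }
    void run() { pump.send("done"); }
}

class Main {
    public static void main(String[] args) {
        MessagePump pump = new MessagePump();
        new Worker(pump).run();
    }
}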

Jeff Warnica
I think you have some misconceptions about DI. You should watch Misko Hevery's Clean Code talks.
Craig P. Motlin
I agree about globals. The problem is not the concept of a global itself, but what type of thing is made global. Used correctly, globals are very powerful.
PhoenixRedeemer
Perhaps I am. But if you had globals, you wouldn't need DI. I'm entirely prepared to believe that I'm mis-understanding a technology that solves a self-imposed problem.
Jeff Warnica
We use Globals all the time in java, every time we use a final public static in place of a Constant (C, C++, C#). I think the thought is that if it needs to be global then it should be a static. I can (Mostly) agree with this.
WolfmanDragon
+4  A: 

The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error never materializes.

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx

casperOne
I like it! Now I wish that all the IDisposable objects in the framework would do this.
Jay Bazuzi
On a related note, MemoryStream is disposable but safe to leak. Think about it.
Joshua
Joshua: The fact that MemoryStream is disposable is an implementation detail, and as we all know, it's not good practice to rely on implementation details if you don't have to. It could very easily be changed to use an unmanaged memory pointer for its buffer in the future. Think about that. =)
casperOne
I would prefer that all types that implement IDisposable were forced to be stack allocated, or some similar concept.
Daniel Paull
+95  A: 

SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.

javamonkey79
If only I knew what SESE is?
tuinstoel
I found it: Single Entry Single Exit !!
tuinstoel
what the hell is SESE?
hasen j
I guess, that in other words it is "function should have only one return statement" - never agreed with that one.
Rene Saarsoo
Moreover, an exception is just another exit point. When functions are short and error-safe (-> finally, RAII), there is no need to follow SESE.
Luc Hermitte
Agreed. I cringe at the 100+ loc methods I've seen that carry a return value from the first line all the way to the bottom just to adhere to SESE. There is something to be said for exiting when you find the answer.
Rontologist
wow .. whoever came up with SESE must be a world class idiot
hasen j
Totally agree on that one, I was about to add it onto this post, you beat me to it ;)
dbones
Wait people actually do this? Why can't you just search for "return"?
nosatalian
SESE is law in unmanaged code, but in managed code it isn't, some post somewhere here in SO explains it better
Jader Dias
I'd like to see that post, but admittedly, my opinion comes from a strict managed code domain.
javamonkey79
This might be useful when your debugger only has a maximum of two breakpoints. Very common in embedded hardware environments.
Casey
I think SESE is a great example of a solution in search of a problem
Kevin Laity
SESE dates back to the 1960s and structured programming. It made a lot of sense then. Single entry is pretty much guaranteed today; clinging to single exit just betrays low IQ.
just somebody
It only makes sense if it's SESRP: Single Entry, Single Return Point. This was important in languages like BASIC where you could GOTO here, there, and everywhere. Better practice was to always return where you came from, using GOSUB instead of GOTO. With modern programming languages this isn't so much of an issue...which seems to be how the sensible "return where you came from" morphed into the awful "exit from only one point of the method".
Kyralessa
+34  A: 

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate a failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#), instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
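For Java programmers, java.util.Optional (Java 8) approximates the option type described, at least at API boundaries - a sketch (UserStore and the fallback address are hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class UserStore {
    private final Map<String, String> emails = new HashMap<String, String>();

    // The signature itself says "may be absent", so callers are forced to
    // handle the empty case instead of discovering null at runtime.
    Optional<String> emailFor(String userId) {
        return Optional.ofNullable(emails.get(userId));
    }

    String emailOrDefault(String userId) {
        return emailFor(userId).orElse("unknown@example.com");
    }
}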

Juliet
An interesting link to confirm your point of view: http://sadekdrobi.com/2008/12/22/null-references-the-billion-dollar-mistake/
Nemanja Trifunovic
Nemanja: Fascinating find, too bad I can't upvote comments :)
Juliet
I would rather have "non-nullable reference types" (with compiler checking) than completely remove null.
Jon Skeet
I have to agree with Jon; "null" is frequently a valid state and indicates something completely different from zero or empty. Eliminating it would be a mistake IMO; but for those cases where it's not appropriate, a non-nullable object type would be nice.
Mike Hofer
Correction: a non-nullable reference.
Mike Hofer
I disagree, but then I use Objective-C where nil is quite a handy concept.
Graham Lee
This is like prohibiting zero to prevent divide-by-zero errors. Nulls happen in real-world situations and forbidding them would force everyone to hand roll their own ad hoc implementations.
Dour High Arch
I really like Scala's approach to this: there is no null, and if you want the same effect you have to wrap it in an Option[T] object (either Some[T] or None) which forces you to notice and check it. No more accidental nulls.
Marcus Downing
I don't necessarily agree that they should be removed, but I do think the Null Object Pattern should be preferred over checking for null every four lines in your code.
moffdub
Princess, if you like Nemanja's link you can edit your answer and include it
MarkJ
Agree with Jon. It should be possible to have the language enforce that a given variable can never be assigned null.
Thorbjørn Ravn Andersen
The problem is your strongly typed language, not null. A language where null is a valid value and calling any method on null returns null is great.
drawnonward
+5  A: 

Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course they do; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.

Mike Hofer
I agree with the principle, but the problem is in real-world IT development you often have existing data stores that you must make use of - while total constraint to existing code might be bad, you can save a ton of development effort if you conform to data standards that exist when you can.
Kendall Helmstetter Gelner
Hey, someone who understands the real purpose of stored procedures!
Lurker Indeed
Hmmm. Take the data out of a system and what do you have? A system that computes nothing. Put bad data into your system and what happens? Crash. Analogy: Bake your bricks (create strong data types) and mix your cement (enforce the constraints), then design/build your system with perfect blocks.
Triynko
+259  A: 

PHP sucks ;-)

The proof is in the pudding.

php sucks! justification? just use it for a while. it SUCKS!!!!1111 +10 (I wish hehe)
hasen j
So true - just try it, after having used a "normal" language that actually has rules that it follows.
Evgeny
Justification? How about the complete inability to find out that you typoed a variable name at compile time (well, syntax-check time, with PHP) instead of runtime? Even Perl has 'use strict', and Perl catches so much flak it's barely funny.
chaos
I could post "Perl sucks!" but that would start a flame-war. :-)
staticsan
How is that controversial? Anyone who uses PHP will agree with you!
comingstorm
it is controversial ... lots of people defend PHP! it's crazy, I know! what the hell are they thinking?
hasen j
It _can_ suck, especially in the hands of the inexperienced, where it spends most of its time. But, really, PHP 5 with the right framework can be fantastically productive. You can shoot yourself in the foot with it, but you can do that in any language.
postfuturist
Set error_reporting to E_ALL, and you will get a warning on using an uninitialised variable. I assume that's what you meant by typoed variable name?
troelskn
Upvoted! Couldn't agree more, I've been saying this for 7 years now and finally people are starting to agree with me!
nerdabilly
@troelskn: Yes, you get this, as you say, ON USING the variable, i.e. at runtime. I quite specifically described the ability to find out that the variable was typoed prior to runtime, which even as maligned a language as Perl gives me.
chaos
@chaos, why would you even want to do that? Nothing happens before runtime. If your code screws everything up in, e.g., a database because you typoed a variable, then it's bad code; that's your fault, not that of PHP.
Pim Jager
dont really see the reason. How many languages did you try before php?
Quamis
So what if I can't see that I typoed a variable name until runtime? 'Runtime' happens for me at the same step that compile time happens for you. If a good developer is writing PHP then a) (s)he'll use a good IDE that won't let them make that kind of mistake b) the code won't actually touch...
Unkwntech
(continued) anything mission critical until it has been verified to be in working order, BAD CODE CAN BE WRITTEN IN ANY LANGUAGE! ffs
Unkwntech
Does "function blah() { return array(1,2,3); }; print blah()[1];" work already? If not: SUCKS SUCKS SUCKS. :-)
pi
Programmers that still spend their time putting down languages are wasting away precious moments they could be using to increase their skills. "Men have become the tools of their tools." -Henry David Thoreau
Lusid
Jeff on Coding Horror: PHP sucks, but it doesn't matter http://www.codinghorror.com/blog/archives/001119.html
MarkJ
It sucks because it's not Micro$oft?
Brock Woolf
You people just boggle my mind.
chaos
Henry David Thoreau sucked too. He mooched off his family while suggesting that the government should raise children instead of the family. PHP is the worst language ever.
WolfmanDragon
@Brock VB sucks too! If I want to use basic, give me back my spaghetti bowl and let me write my GOTO's.
WolfmanDragon
The cliche you're looking for is "The proof of the pudding is in the eating."
Daniel Earwicker
Other than the module notation being very flat and not OO, which makes it difficult to use, PHP as a language isn't bad... and the APIs are fairly inconsistent...
Tracker1
I don't see the problem here. I find PHP easy to use... variable types have never been an issue for me.
Mark
I thought these opinions were supposed to be controversial? PHP sucks seems more like a statement of fact :-).
Travis
PHP sucks, but it's still a good language. If you don't understand that statement, or don't agree with it, you haven't been writing PHP long enough.
notJim
I use PHP! You can be as productive as you want and write great code in PHP. Its possible. Really. However, it lacks cohesiveness and elegance for a language that I would _enjoy_ on day to day use. So to generalize, I use it every day, and IT SUCKS!
Nick
Please someone down-vote this answer. PHP's simplicity outweighs its non-object-orientedness. So what that it uses global functions? Even object-oriented approaches are forced to use global singletons.
AareP
Badly written PHP sucks... it's just a shame that there are so many examples of it.
HorusKol
I worked on a Web project where the back end was written with PHP. As a result, whenever I'm asked about PHP, I describe it as "Perl with a lobotomy."
BlairHippo
It sucks generally speaking, but using it doesn't have to.
Brian Ortiz
All languages suck when they're used by apes.
Sohnee
*ahem* ...I love PHP. Really.
Pedro Ladaria
It may suck but you can't ignore its use everywhere on the net :-)
Hannes de Jager
It's not *pudding*, it's [web] *soup*!
Chris
Given an option, I'll always take ASP.NET
baultista
Hahaha well said OP, I've used PHP once like 7 years ago because my boss didn't know sh*t about languages and it truly sucked. Sure you can always use the donkey wisely if you really really want to, but who does, seriously? All you'll find on the web is lousy, badly indented code. I think the fact the language acronym originally meant Personal Home Page tells the whole story.
I love this, how people can hate a language because it doesn't hold their hand and tell them the second they have done something wrong. Learn how to use a standard variable naming convention and learn to spell correctly. Problem solved.
pondpad
Just because you're allowed to make a mess of your code in PHP doesn't mean that you should do it or that you're going to do it, so it's not PHP's fault; it's developers that give PHP a bad name. People usually think that PHP sucks because they think that all PHP developers mix HTML code with PHP code in the same file
pleasedontbelong
PHP sucked less than ASP. But that was decades ago...
burkestar
+32  A: 

You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.

Ferruccio
If your language claims to be OO but has built-in types that are syntactically and semantically different from objects, and you think this is just fine, you may be a Java or C++ programmer.
Barry Brown
@Barry! What about us Objective-C programmers! That might be us too!
Kendall Helmstetter Gelner
C++ is multiparadigm, and as such it can decide to use whatever types it wants :P
David Rodríguez - dribeas
Object orientation is a means to a goal and not a goal in and of itself.
Seventh Element
A: 

I know everything there is to know about everything.

jTresidder
why do they downvote this? i often hear ppl say that and i find it indeed controversial. +1 of course... oh wait.. no you should add some text explaining your point. ill take my vote back haha :)
Johannes Schaub - litb
I got the point, but I'm not sure I've got the point.
andyk
What do you know about me?
Seventh Element
I know you don't get self-deprecating humour, for a start. I think I must have got lost... I could have sworn this was SO, not YouTube, but the commentary around here recently has got me wondering. Heads go on the top guys, where you've got 'em is bad for your neck.
jTresidder
hehe jTresidder, your quote is not that bad - it's the first time I see something so downvoted, I upvote ;p
Nicolas Dorier
"the more you know you know, the more you know you don't know"so anyone claiming he knows everything, actually knows nothing.
alexanderpas
+456  A: 

Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than debugging, and you can compare printed outputs against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
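A minimal sketch of the "turn them into logging statements" step, using java.util.logging (the class and messages are hypothetical):

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    static {
        // During development, show FINE-level messages on the console;
        // in production, raise the level instead of deleting the statements.
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.FINE);
        LOG.addHandler(handler);
        LOG.setLevel(Level.FINE);
        LOG.setUseParentHandlers(false); // avoid duplicate lines via the root handler
    }

    static int totalCents(int[] lineItems) {
        int total = 0;
        for (int i = 0; i < lineItems.length; i++) {
            total += lineItems[i];
            LOG.fine("after item " + i + " total=" + total); // was: System.out.println(...)
        }
        return total;
    }
}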

David
Absolutely, Make them into logging statements to begin with, and make them output to screen during dev.
Christopher Mahan
Definitely. I did this for years out of necessity and often by preference now--having it all sitting there in a logfile often gives a lot more information than stepping through it.
Loren Pechtel
Yes, thus there are logging frameworks that make this process more organized.
thenonhacker
I agree, just remove them prior to checkin. People who leave Debug and even Console code in production deserve beatings.
Quibblesome
Depending on your language / platform / application style, it is often your only choice.
postfuturist
@Quarrelsome, I disagree. If you make them really useful and structure them properly then I say leave them in and create a way to toggle them on when needed. Sometimes problems only happen in the customers environment.
bruceatk
Until you forget to delete a debug statement and it goes to production, or, when you are tired, delete an actual statement along with a debug statement. Logging, dedicated debug output routines, and debuggers are your friends.
Andrei Taranchenko
SOMETIMES, it's the only way. Not all the time, but sometimes...
LarryF
Just remember to clean the bastards up, or at least include in the debug statement where it is being called in the code, otherwise you'll spend hours trying to find them to delete them later
johnc
Only very rarely do I use this technique to debug code. I personally see it as the cave-man method of debugging, and for big applications large logging files quickly become incomprehensible to anyone but the developer who included the logging statements.
Seventh Element
@Diego, you should look at log4j or log4net or whatever, their filtering of log comments (and levels) makes it trivial to dig into the appropriate area of even the largest app.
Si
This is called stubbing and it rocks! Debuggers suck. Period. Finding a problem in a large loop, say one with 1000 iterations? Yeah, right! With stubbing I can go directly to the problem. Of course, knowing the proper way to stub is a different subject altogether.
WolfmanDragon
sleske
@WolfmanDragon: a good debugger lets you break on the nth iteration of a loop. Eclipse does this for Java and I use it all the time
Laplie
I sometimes work on a platform so archaic (and yet a cash cow, which is useful in this economy) that the debugger takes around about 5 hours to set up (no joke!). Debugging via printfs and similar is essential.
Kaz Dragon
Or create a unit test!
Yassir
not in GLSL...
shoosh
Every time you consider writing a debug printout, consider writing a unit-test instead. I've found I use far less time that way.
Markus Koivisto
You mean there is another way besides printing messages ????
RN
I concur with Andrei. In Java at least there's no excuse for writing 'System.out.println("foo")' versus 'LOG.debug("foo")'
Jherico
As far as the concern that one might forget to remove such print statements when going to release: can't we just enclose them in "#ifdef TEST ... printf(something); ... #endif"? Later we can pass TEST to the compiler, like "gcc -o exe -DTEST exe.c". It will print all statements enclosed in "#ifdef TEST". When going to production we simply remove "-DTEST" from the compile command, so those print statements will not make it into the released version.
Andrew-Dufresne
You're off target here--don't write to the screen unless you're looking at some sort of interaction or timing issue. Write to a log file set up for the purpose. Get rid of the file itself before release and there's no chance you'll leave behind any writes and by having it in a file you can go forward and backwards.
Loren Pechtel
Even when I was writing C code, I used this technique for 90% of my debugging. Back then I had a define macro that made the printf statements disappear when compiling the production code. It's amazing how many bugs just jump out at you when your printf statements say that it had to go wrong in one of these two or three lines.
Michael Dillon
println()s give you the *history* of the execution, which is something you don't get with interactive debuggers.
Loadmaster
I had a case once where I wished I could do printf() debugging. All I could do was twiddle bits that were wired to four LEDs.
Joshua
"When in doubt, print more out!"
Garen
Aaaah.. threading!
Partial
This is completely wrong! So I had to give it a +1... intricate...
Danvil
Logging can't find segfaults the way debuggers can.
Ken Bloom
As a program grows, these can be difficult to track. I once encountered a crash in Firefox on Linux, and was presented with an alert message displaying the call stack. That's never a good thing.
baultista
no System.out please, the Logger is there for a purpose !!!
Dapeng
Horrible opinion...+1. :) I dislike working on code littered with print statements and logging code.
dgnorton
+6  A: 

I think that using regions in C# is totally acceptable for collapsing your code while in VS. Too many people try to say it hides your code and makes it hard to find things, but if you use them properly they can be very helpful for identifying sections of code.

Jeremy Reagan
IMHO Regions are great for one thing... visualizing code rot.
Gavin Miller
Hah LFSR.Jeremy, your code is too big.
Jay Bazuzi
Never gotten used to them, don't use them, but it may just be me.
Seventh Element
Regions are the thing I miss most about VS (I use Eclipse). So instead of using regions, we make methods that have calls to methods that have calls to methods... just so we can read the darned things. Regions are GOOD! +1
WolfmanDragon
+7  A: 

Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards
+1 always surprised that OODBs didn't take off for web apps
Graham Lee
The reason OODB didn't take off for web apps is because web apps are the single area where scalability and speed matter most - and OODB fall flat when load gets high. That's why MySQL took off instead of something more robust like Postgres, because of sheer read speed and scalability.
Kendall Helmstetter Gelner
kendall, that's just trash. the biggest databases in the world have traditionally been oodbs. they handle all kinds of workload.
nes1983
Only deep ignorance can prevent someone from implementing such things even in SQL, which is a badly designed language and not faithful to the relational data model.
MaD70
+2  A: 

To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.

+5  A: 

Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if 'experts' in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".

Dalin Seivewright
Couldn't agree with this more! It shows just how fashion conscious our industry really is. When I looked into what all the AJAX fuss was about I discovered I had already been doing it for 2 years. But it takes a marketing style buzzword to make stuff happen.
AnthonyWJones
A vision on the history of AJAX: http://www.theregister.co.uk/2008/11/27/microsoft_ignored_ajax/
tuinstoel
I remember when it was called DHTML :P
Kronikarz
A: 

Not everything needs to be encapsulated in its own method. Sometimes it is OK to have a method do more than one thing.

Jeremy Reagan
reminds me of an old manager of mine who abstracted himself out of a job. He spent months abstracting an app to make it "perfect" but in the end got nothing done.
Neil N
+93  A: 

You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret

Kyralessa
I know how to type (was an army teleprinterist) but I insist it makes no difference whatsoever.
Nemanja Trifunovic
Nemanja -> "no difference whatsoever"?! I just got 70wpm on an online test. I could see how someone could scrape by at 20-30wpm, but if they are using two fingers, plugging away at 5wpm (yes, I've worked with people like that), it's holding them back.
KeyserSoze
No difference whatsoever. I don't even know what my current wpm level is, because I completely lost interest in it. Surely it is useful to type quickly when you are writing documentation or answering e-mails, but for coding? Nah. Thinking takes time; typing is insignificant.
Nemanja Trifunovic
Well, if your typing is so bad that you are thinking about typing, that's time you could have spent thinking about the problem you are working on. And if your typing speed is a bottleneck in recording ideas, you may have to throttle your thinking until your output buffer is flushed.
KeyserSoze
@Nemanja Trifunovic - I hear what you are saying but, respectfully, I think you are dead wrong. Being able to type makes a huge difference.
duncan
@keysersoze: I have never worked on a project when typing speed made any difference. Even when I write code from scratch and not fighting some crazy frameworks, a good editor makes typing skill almost worthless. With vim I usually just type a couple of letters before pressing Ctrl+P.
Nemanja Trifunovic
@duncan: No hard feelings, but you are dead wrong - it makes no difference :)
Nemanja Trifunovic
Even though I never learned to touch type my typing is very quick, and optimized towards writing code - not english. I always felt touch typists must be at a little bit of disadvantage, considering the heavy use of symbols in coding which touch typing is not optimized for.
Kendall Helmstetter Gelner
I know how to type. After twenty years of typing my index and middle fingers know where all the keys are, so I don't have to look down at keyboard all that often. But I had this argument in a different context long back: a colleague argued that camel case is [contd...]
Hemal Pandya
[...contd] better than underscores because it is easier to type. My argument is that you are not supposed to write code at the speed of typing.
Hemal Pandya
I don't mind looking at the keyboard once in a while to relieve eye strain. You HAVE to change your focus at times. If you are a good typist, chances are you either have glasses or contacts.
Andrei Taranchenko
While I can't touch type and confirm this I do suspect that it helps. I have encountered many situations where slow typing speed gets in the way. Sadly learning is mind-numbingly dull. Yes, I know there are all kinds of fun games to help you, but it's still dull for me. Still trying though...
Manos Dilaverakis
+1. I repeatedly see people make tons of mistakes because they are watching their keyboard instead of watching the code on their screen. Most common are syntax and code-formatting issues, but also real bugs that aren't caught by the compiler.
flodin
You must be using some ridiculously verbose language like Java. Thinking is the bottleneck when programming, not typing.
nosatalian
I agree here. Though thinking is important, watching the screen is key.
Chet
I agree that thought is the limiting factor behind programming, but who codes from the hip so much that they design the software as they type it? While I'm coding/typing, I have largely already designed the software... as a result, my thinking easily keeps up with my 80wpm+ typing speed.
SnOrfus
I can't think faster than I type. I am hunt-and-peck, using six fingers and the thumbs. The problem is not that I wouldn't benefit from ten fingers, but that trying to train it slows me down too much.
peterchen
The strange thing is that hunt-and-peckers are just a hair's breadth away from full-blown ten-finger typing. After using a keyboard for years you know exactly where the keys are - you just don't know where your hands are without looking. And that's only a little bit of technique. BTW: using a Kinesis contoured keyboard helps a LOT. And using an English keyboard instead of a localized one.
hstoerr
Yeah, Steve Yegge surely DOES know how to type...
Headcrab
@hstoerr: When I first took a typing course, in sixth grade, I cheated and looked at my fingers. I was the fastest one in the class, the star pupil. Only I didn't really know how to type. Luckily, in seventh grade, I took typing again and this time did it right. It's the only useful thing I learned in junior high. (Well, that and "Always carry your books in a backpack so they can't get knocked out of your hands and scattered down the hall.")
Kyralessa
The way I look at it, if you don't know how to type, how much programming experience could you really have? So yeah, I think a good programmer is one who knows how to type.
Renesis
I disagree. I never took any typing lessons, but spending most of my life behind a computer has made me remember where all the keys are so I can quickly type without looking at the keyboard. Maybe my hands aren't placed in the optimal position as you would learn in a typing lesson, or I don't use a DVORAK keyboard, but my typing is fine. And I sure don't want to type faster than I can think.
Dennis
I generally type with 4 fingers or so and I've tested my typing speed - 90 wpm.
Jake Petroules
Since when does wpm matter when programming? Programming requires thought, not just mindless typing.
pondpad
Typing is mindless by definition. If you're not typing, but hunt-and-pecking, you're using up brain cells to type that you could otherwise be using to think about your program.
Kyralessa
-1 for dead wrong: you don't need to type at all to be a programmer. Then, +2 for what it really means: you must know how to type to be a ***good*** programmer. When I interview people I'd pass immediately if they can't touch type.
Geoffrey Zheng
+12  A: 

Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.

Keith Williams
Programming has a lot in common with cleaning your room. The same principles of organization apply.
Alex Baranosky
Maybe... rather than dealing with your accounts as bits of paper you abstract them into folders, and encapsulate them in a filing cabinet or box. If you find a way to unit test laundry, let me know!
Keith Williams
Generally, having a plan before building a web site/desktop app/house/nuclear sub is always a good idea! Mapping things out, either with sketches on a pad of paper, a wireframe, Visio, a workflow, a mind map, whatever. And training users... I see this missed by even the most brilliant programmers. User acceptance in the long run determines your app's success. If they don't understand it, no matter what it does or how well it is done, your app will fail.
infocyde
+667  A: 

"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear of googling answers to problems being the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)

PhoenixRedeemer
Google groups is one of the greatest gifts to even the most nerdy of developers along with stackoverflow and porn :)
Jeremy
Sorry PR, but I disagree... There are no right answers ;-) I do agree with your overall point, although you need to be careful of the people who Google for an answer, take it verbatim and then have to hack away (usually introducing numerous bugs) until it works for "most" cases
billybob
Google will provide *knowledge*, but it cannot provide *skill*. Poor developers will not be aware of the difference.
Tom
Thorsten79
@Tom - That's true, but I'm just saying I don't think that should be held against Google. If we're going to judge whether someone is a good or bad developer, Google usage isn't going to be the indicator.
PhoenixRedeemer
By mentioning Google, it also means getting references to billions of programming references on the Internet. Like what we find in books, except it's free and fast.
thenonhacker
the talking frog thing is sometimes valid; just describing the problem to someone (imaginary or otherwise) can help you get a better handle on it yourself.
Colin Pickard
I taught myself C# from scratch with no real knowledge of computers. I was thrown straight in the deep end at my job and had no help, so i turned to google. I have hundreds of bookmarked pages full of interesting code and information that no person could have taught me!
xoxo
How did people do it before the internet?
asp316
I agree -- it's far better to google than to go bother someone else (i.e. me) for a simple question. Learning isn't always about knowing all the answers; it's about knowing the best places to find the answers.
carolclarinet
What do you mean, before the internet? =P
Erik Forbes
I got a job and lost a job with the "google is ok" claim. I had an on-the-spot question, and my answer was "I'd google it, and be done with it". Immediate disqual. In another interview, same situation, I explained the means and said "I'd google the syntax"... and here I am.
Don't 'google it', 'StackOverflow it'; you'll have community feedback!
Think Before Coding
My GP (medical doctor if you're not from the UK) Googles my symptoms.
Pete Kirkham
The problem is not the people that Google as a reference; it's the subset of people that Google blocks of code, paste them into the project and then monkey with the variables/flow until it compiles. It compiles?! Ship it!
joshperry
Never remember anything that you can google
Nat
"Life is an open-book test" and "ethical theft is good practice" rolled into one.
Will
>"does it really matter where you got the information?" - Yes it does. The proofreading and research that goes into most (reputable) books is worth it. I just can't say the same for joe schmoe's website.
SnOrfus
@Tom: I'd go a step further, and say that Google only provides facts, and not knowledge. Knowledge implies vast amounts of relationship between facts, and Google results only barely scratches the surface of that (which is the whole point of the semantic web, which is still vaporware). Having said that, I agree with your basic point. There's a big difference between having knowledge and supplementing with references and simply relying on references as a substitute for knowledge.
Ben Collins
@snorfus: How does a new developer tell a good book from a bad one? Many books about PHP programming contain horrible practices, consistently repeated in every code example (for example, concatenating $_GET variables straight into a query). A person is better off with google in those cases, because at least they'll get a mix of good and bad code. If you're new to a field you should always look at a variety of sources, and google can be one.
Joeri Sebrechts
Good Googling is a skill. I'm surprised more people haven't figured this out. A small variation in search terms can make a big difference in the quality of results.
Mark Ransom
@Tom neither will reading a book.
Stuart
Finding the answer efficiently is just as important as being able to apply it. I would always prefer my developers to Google something that they don't know, find out what and why, and learn how to apply it. Used correctly, it isn't just a search application, it is a learning tool. If people mindlessly look something up and copy and paste code snippets without understanding what they do, it is more likely a problem with the developer than the tool.
joseph.ferris
I'd be a little concerned if you got your information from a hallucinated talking frog.
Mark
+1 for the talking frogs
Chris Needham
**Googling *doesn't* provide knowledge**. *It provides information*. How well it is used is another issue
WmasterJ
@Wmaster That's exactly what I was going to type. Well said!
Ben McCormack
+39  A: 

If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.

AnthonyWJones
Excellent point. I re-learn this point the hard way every time I try to teach my parents (in their early 70s) how to use something on the computer or their cell phones.
MusiGenesis
I disagree. I don't think they are mutually exclusive. To take the opposite, people who have never used a computer before are the best interface designers.
James McMahon
I disagree, but only in the sense that most interface design decisions seem to be made by management.
Dave
I'd say they're definitely not mutually exclusive. I would more likely say that management should never decide where to put the button. I've had some of the most complicated interfaces ever created that way.
Sam Erwin
I wish I could upvote this twice. Yes, it's not universally true, but programmers tend to have the completely wrong mindset to design UI. We are too forgiving of interface flaws when it gives power and flexibility that end users don't need.
Robert J. Walker
I hope you don't let Doris take the wheel when you start up your IDE...
James Jones
That's one of my favorite books. Should be a must read - particularly for programmers who think they are web designers...
CMPalmer
This is like saying "If you know anything about how a car works, you should not be allowed to design the interior." There is an entire discipline around UI design, and if you are doing things just based on your mental model of some imaginary elderly user, then you are not doing it correctly. No one can account for everyone's mental model. Applying extensive research, best practices, statistical analysis, and user testing are the ways to get to your desired result. Programmers can learn this discipline too.
Ben Reierson
@Ben: no, you can't account for "everyone's" mental model, but it's a sure thing that the developer's mental model is entirely different from everyone else's. That's why an interaction design professional will invent a persona that best represents the typical user. If a system has users of very different personas (e.g., in addition to Doris we may invent Jeff the IT admin guy) then good interaction design will use Jeff as the target audience for the tasks he is likely to engage in.
AnthonyWJones
Interaction Design by users is what gave MySpace its reputation for vomit-inducing pages.
Kelly French
+4  A: 

QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far to few of them... far to few).

sam
(to -> too)^2
Christopher Mahan
+819  A: 

Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing but program, but they do more than program from 9 to 5)

rustyshelf
Well, yeah, everybody knows that.
chaos
I never code for fun. I code and get paid for it, that's it. I do think of it as just a job, but I do the best job I can, that's the difference. It's not coding for fun that makes a programmer good, it's the research, learning and training that does. And I do that on work time.
Jeremy
Jeremy, if you never code for fun, then why are you trolling SO with the rest of us geeks who get off on programming?
postfuturist
Maybe he is at work?
BlackWasp
Yeah, I don't code for fun really, but I come on here at work. Just because you don't do it in your spare time (if I ever had any) doesn't mean you don't have a passion for it!
xoxo
One of my fellow new hires said she hadn't coded in two years and was anxious to get back into it. Back into it? And you're a developer? She should spare everyone the show and quit now.
moffdub
I partially agree; we don't really need to do 'coding' in our spare time. Just going through SO/blogs/other technical podcasts etc. as part of the hobby will work very well. The point is to spend time in the community and get a feel for what others are doing out there.
Jobi Joy
1/ There is a difference between enthusiasm and ability. 2/ Imagine if they said that about doctors. Or demolition experts. Or soldiers, or...
kpollock
I think that whilst the people who code in their spare time may become better at pure coding, that isn't the be-all and end-all of the job. Yes, it is a large part, but not everything. Just my opinion.
chillysapien
Maybe they will never be as good as the ones who do it on their spare time, but they may have more fulfilling lives. I mean, come on, you spend at least 40 hours a week typing away at the stuff, do you really want to go home and do it some more? Play some tennis or something :P
Ace
I don't code for fun in my spare time. But I reverse engineer. Does that count?
Treb
People whose sole interest is programming, both on and off the job, may very well be excellent programmers. But I don't think I would want to "hang out" with this type of person. You don't need to become autistic about it. There is more to being a human being than writing code.
Seventh Element
Just because you code for fun off the job doesn't mean you're a certain autistic "type of person". You can lead a balanced life *and* code for fun. For me being parent to a toddler is much more of a social life killer than my at-home-for-fun-coding ever was, even during my late teens.
Andreas Magnusson
@Diego: 'But I don't think I would want to "hang out" with this type of person. You don't need to become autistic about it.' I don't think I'd like to "hang out" with a judgemental 9-5'er troll either. I feel sorry for you that you can't understand having a real passion for something. Your loss.
kronoz
@kronoz: having a passion for something is great. But I feel sorry for those who have a passion for only one thing and nothing else. Their loss.
Treb
HUMBLY: Amen. I love coding in my spare time, and I have surpassed most around me.
Ronnie Overby
I don't see how coding for fun makes you an 'autistic' with no life. Would you think the same if I told you that I like to watch TV 2 hours a day after work? And if I told you that instead of watching TV I like to code for fun for 2 hours after work?
Sergio Acosta
I always code for fun in my spare time, and I try to have fun at work too, though unfortunately work coding is often not fun. But then, hearing nice music and getting off anyway seems to outweigh the non-fun :P About autism: I totally get off when I code, but I also get off when my gf is around, yay.
phresnel
@Sergio Acosta: Agreed, totally... Personally I do more reading about coding on my own time than a lot do. I do some personal projects, but it evens out.
Tracker1
A 'little' spare time coding, a lot of fun doing other things
Daz
"People who's sole interest is programming, both on and of the job, may very well be execellent programmers. But I don't think I would want to "hang out" with this type of person."Programming in your spare time doesn't mean you ONLY program in your spare time. I don't bore people to death about programming if they're not interested in it.
Chad Okere
I used to code for fun -- before I had a job coding. Now I code for work and do other things for fun. My job is now my previous hobby and my hobbies are other things. I can't think of a better setup or a more well-rounded life.
Nemi
joshlrogers
re: doctors, etc. Think about how one *becomes* a top cardio surgeon for example. You aren't even allowed to "solo" until you've been doing 80-120 hours weeks for most of your 20's and many keep up that schedule long after residency. They spend their spare time in residency stitching up fruit, etc. In other words, MANY fields have their best and brightest putting in FAR more than 40 hours a week and much of it unpaid.
J Wynia
@kpollock: it's unreasonable to expect doctors, demolitionists or soldiers to practice their technique in their spare time because of the nature of what they do. However, I think it's inevitable that were they able to do so, the ones who did would be better at their respective jobs than the ones who did not.
Jherico
I fully agree that one has to continue learning outside of work if they wish to improve their abilities. That is not to say that it won't happen for those who only code while they are at work. However, Captain Obvious would state that programming outside of work allows a person to improve their skills much more quickly in a broader area of topics. I find it admirable and highly beneficial career-wise for those that do. Lastly, it is highly important to find a subtle balance between coding and life outside of computers.
transmogrify
@ace "you spend at least 40 hours a week typing away at the stuff" HA! how many programmers do you know that work <=40hrs? agreed tho.
sequoia mcdowell
@sequoia: uh, a lot? Including me. I've found that programmers that put in less hours have clearer minds and are able to be fresh enough to look at a problem and solve it rather quickly.
temp2290
@moffdub, I hadn't coded in two years, and I was eager to get back into it. I was pregnant when I stopped, my brain didn't work right for programming. Perl might have died and web 2.0 came around while I was away, but I remember good practice and pseudocode. Should I "spare everyone the show and quit now"?
Elizabeth Buckwalter
Apparently, yes.
Ed Swangren
Definitely the most controversial topic, just based on the comments :)
amischiefr
I didn't say people who code for fun make great drinking buddies, nor did I say you had to spend your life married to a computer. But it still stands that those who have a passion for programming, and take it home and tinker with it, will quickly surpass those who don't. For the rest it is a job. Sure it can be optimised, but it's a job. To a real programmer it's a passionate obsession, not a 9-5 job ;)
rustyshelf
Desire to code something good at home usually means that day time job does not allow it, which means that nearly 40 hours per week is wasted. Those who happen to not waste those 40 hours will be the best. But for others, desire to code something good is what matters.
alpav
In all fields of endeavour, those ahead of the curve have always put in more time than those that haven't.
Gary Willoughby
There is so much elitism in programming; I don't get it. Who cares how much better you are than someone else at coding? For 99.9% of coders it is a JOB. When you are 80 and on your death bed, you most certainly won't be thinking "Man, I wish I would have written a few more LOC." IMO those who try to knock others' abilities are, in general, insecure about themselves and their own abilities.
DevDevDev
I can see two sides to this. On the one hand, yes, when you do something a lot you can get very good at it. Still, if your vocation is also your main avocation, you can forget what people less single-minded need. You can sometimes become *less* capable of documenting your work in a way that works for others, or participating well in the "soft" early portion of projects than someone whose life experience is broader.
Joe Mabel
Programming in your spare time doesn't mean that you don't do anything but programming, it just means you work on projects you enjoy as well as projects that pay the bills. I enjoy programming, and I program in my spare time. That doesn't mean I have no friends and don't do anything but programming. I do think however that it makes me a better programmer, as I discover new tricks and technologies more often this way.
wvdschel
@DevDevDev: you say that just because for you, 99.9% of coders it is a JOB. I think that's not the reality. Or maybe there are more and more people that do it only as a job (and that's why it's sooooo hard to find a really good developer nowadays (remember what Joel Spolsky said: "a good developer can do a better job than 10 average ones"... do you really think a "good" developer works 8 hours a day and goes home and stops? It seems you still live in cloud-cuckoo-land ;) ))
Olivier Pons
I'd be a little less 'aggressive': I'd say 'a programmer who does NOT code in his/her spare time will never be as good as he or she would have been if they did'. I say that because I know some people who actually like coding for fun but, on the other hand, don't enjoy 'pushing it' to the 'next level'.
Rafael Almeida
Very true. I hate being compared to professional developers even if everything I bring to the company is from experience done at home.
The Elite Gentleman
@DevDevDev, the only time that being better than others doesn't matter is when you're in kindergarten. The rest of your life you're constantly compared to others. If you're not better than they'll take your job, simple as that. Myself, I love programming on my free time. Best summer of my life was between 10th and 11th grade when I spent 19 hours a day programming. Consequently that sort of dedication is why I'm making money programming today. Why in the world would I just want to program at work?
Peter
@wvdschel, I enjoy the projects that pay my bills. What am I doing wrong? Am I a bad brogrammer?
Pavel Shved
Great doctors become bad doctors when they think they're done learning. If you'd ever had anything unusual wrong with you, you'd know this. "If it's not in a book I read cover to cover 5 years ago, then it doesn't exist." -> bad doctor
Jason
-1 if I could :) So you become a better programmer, and? What's the point if all you do is program 24/7? You become another workaholic. If life were 100% programming it would be OK, otherwise not. To me it is a bit too much. I have many other hobbies: cycling, reading (non-programming-related books), learning to play musical instruments, or anything that is actually more fun and challenging than just the narrow field of programming. When do you have time to enjoy life when all you do is program?
kudor gyozo
Great doctors read specialized magazines, go to refresher courses, and even blog on House MD (http://www.politedissent.com/house_pd.html) in their spare time. Of course they don't do surgery in spare time, but maybe spend their vacation for MSF...
Lorenzo
Great athletes do not become good in their field by merely doing the bare minimum; they train hard and keep in shape! But moreover, they are balanced; not obsessive. Then, at some point, they slow down and do something else; perhaps coaching... I believe programming can be the same; you can't be quite as productive if you just do the bare min. But you keep in shape in your spare time (because you can't always keep yourself up-to-date at work). ...It's all a question of balance and priorities. (And, yes, I do code in my spare time, and I do have a broad knowledge/experience of things too!)
Yanick Rochon
@Yanick, the best athletes are obsessive though. Which explains awkwardness in some other aspects of their lives. There's a reason people like Jordan and Bryant come off as callous
b8b8j
@b8b8j, yes, this is why I mentioned "great" and not "best". :) I think a great role model I could cite as an example (regarding programming) would be Linus Torvalds; maybe a little obsessive (after all, he completely rewrote a UNIX kernel from scratch... how obsessive is that?). But he is well balanced and one can say that he is living quite a successful life. I believe that he fits quite well the description in my last comment. I don't necessarily aspire to be like him; however, I tend to admire his life's achievements.
Yanick Rochon
IMHO, every developer should be part of some open-source projects, to which they contribute in their spare time just for the feeling of _making-the-world-a-better-place_.
Vikrant Chaudhary
From my experience, we employ developers who can interact well with people; this is a major asset. It's good to program outside work time, but socialising is important. Still, it seems the cleverer the programmer, the more socially inept they are, and that alone doesn't make a good programmer.
wonea
Okay... I'll vote for this one being controversial. You see, the thing is, I have a **life** and I don't like being told that unless I'm a total nerd then I'm no good. If that's the price of entry, then I'll just be "mediocre" I suppose. I happen to think that my other hobbies help me be a better programmer -- especially the ones that involve those other carbon-based lifeforms known as **people**. Build the most *efficient* software you want, but then put an interface that only a closet geek would use (think Unix) and I'll guarantee you that you'll never win the fight for market share.
Brad
I would have to agree with this comment. I have personally lived on both sides of the argument. At one time I did it 8-5 (what's this 9-5 stuff??) and not a second more. My peers who spent time on the side, personally or professionally, accelerated fast past me. That was because I spent a lot of the 8-5 hours putting out fires and fixing bugs and not having the ability to move forward... at least as fast as the others who took extra time to do additional programming outside the bounds of work. Now that software development is as much a hobby as a job, I have moved past those around me.
atconway
I never said it was healthy, or that it made them better people; I'm just saying that's how you get the best programmers.
rustyshelf
I love coding and do it for more than 9-5 because I'm passionate about it. That doesn't mean I don't play hockey and tennis, socialise, and do loads of other things. I love coding and constantly try to come up with new ideas and learn new things. Everyone prioritises things differently, and if you're happy coding 9-5, fine. I prefer to go a bit further and learn more in the process. It really depends on what you want to get from it, I suppose! :)
Andi
Many people don't get exposed to all the technologies that they'd like to at their current job. If you don't stay focused and disciplined with your knowledge, your skills will get out of date. Make sure you have the skills for the next phase of your career; if you're happy and secure, then you don't need to. IMHO.
wcpro
+1  A: 

Exceptions considered harmful.

Jim In Texas
Checked exceptions. Unchecked exceptions are fantastic and do a great job of stabilizing your app.
Bill K
+162  A: 

Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.

rustyshelf
Amen! Smart and gets stuff done: Knows how to code, and actually produces production-ready code. It is better to say you don't know than pretend you do when you don't.
Christopher Mahan
I think there's a big, necessary difference between architecting software and coding. What you say might apply to simple applications, but there are many scenarios involving multiple components spread across several servers that require architecting, THEN coding.
Jeremy
@Jeremy I don't deny that things need designing, just that the design should be done by programmers, not Software Architects.
rustyshelf
A friend of mine once went to a lecture by Jim Coplien, where he said that architect is a noun, not a verb. So what does an architect do? They create plans to be used in making something (WordNet). You have to be good at building to be an architect.
Hemal Pandya
I consider the term a role... a role better played by real programmers :)
Alex. S.
I don't completely agree with you. I believe software architects are important. The thing is, architects should be developers first. I agree that many people who worked a couple decades ago carry the fancy architect name and don't do anything. I hate them too.
Mehrdad Afshari
Real Software Architects should be very experienced developers. Employing a non-technical person into the role of a Software Architect is nonsensical IMO.
weiran
Just because an architect doesn't code full times, doesn't mean he doesn't know how. Architects are best in the proof-of-concept area assuming their talents are as wide and deep as they should be.
Jas Panesar
A good software architect stays elbow-deep in code and works with the development team. Anyone who thinks architects are useless either (a) has never worked with a good architect (they do exist), or (b) never worked on a project big enough to need one, and just can't imagine needing one.
Rex M
They should be forced to implement their architecture and/or design. Then let's see...
Friedrich
Software architects who have never coded are truly evil. No one who has never done maintenance should be allowed near a design project.
HLGEM
I think that software architecture is just one of the responsibilities of the Software Developer. If you want to have a person with the title 'Software Architect' fine. But he is just the software developer that happens to be officially accountable for the architecture quality.
Sergio Acosta
In agreement with many other comments here, I'll say this: **to be a REAL Architect, you must be an excellent coder (among other things).**
Charlie Flowers
If you ever have to spend a year rewriting an entire application that was written by programmers with no architecture, you'd likely change your tune.
ctacke
I'm impressed. The top two answers that I disagree with most for this question were both posted by you (rustyshelf). I'm not sure how controversial your opinions are to most people, but I for one disagree with them completely.
Beska
The need for an architect usually is a symptom of substandard developers kicking about. I've been assigned the duty of an architect in the project I'm working on currently, but since all the developers I work with currently are excellent and up to the task, I can pretty much concentrate on programming myself. It hasn't always been like that, though. I've worked with people that need constant overlooking or they'll copypaste the living sh*t out of your DRY and other important design principles.
theiterator
The question did ask for controversial. In reality neither is to me. Architects make rubbish architects. Some of you assume that the inverse is automatically true (that programmers make great architects) and it's not. What I'm saying is that Architects will always be lousy architects who need to stick to BA work and forget about the fancy notion that they know how to design something they don't work with day in and day out. Good programmers on the other hand can make great architects as long as they stay programmers (confused yet?) :)
rustyshelf
+4  A: 

Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.

I don't know how controversial that opinion is. What you describe seems to be the well-known “Spike Solution” pattern http://c2.com/xp/SpikeSolution.html and is a good pattern to have.
bignose
+50  A: 

"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, horrible user interface and so on.

G-Man

GeoffreyF67
I think what you're trying to say is Swing sucks (as in JAVA UIs). Java back ends don't suck at all...unless that's the controversial bit ;)
rustyshelf
You don't have to be a Java partisan to appreciate an application like JEdit. Java has some serious crushing deficiencies, but so does every other language. Those of Java are just easier to recognize.
dreftymac
+1 cause I agree, java sucks.
Unkwntech
I'm a C# fanboy, but I admire quite a few Java apps as being very well done.
Neil N
I think what you are trying to say is that the barrier for Java coding is so low that there are many sucky Java "programmers" out there writing complete crap.
Software Monkey
I agree that most Java desktop apps I've seen suck. But I wouldn't say the same of server apps.
Sergio Acosta
You're going to blame a programming language for 'horrible user interfaces'? Surely that is the fault of the UI designer. And while I'm sure Java has its share of poorly coded software that runs slowly and consumes too much memory, it is not at all hard to write Java programs that run efficiently and use memory only as needed. Having worked on a Java-based web crawler capable of crawling 100s of millions of URIs, I can attest to this.
Kris
A: 

Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.

Jay Bazuzi
Or you could make your entire program one (reaaaly long) line of code. That's always fun.
Kiv
BAKA!! Even in a functional language, like Haskell, you can have several lines in a function!
hasen j
When one combines the rule that a class should fit on the screen with the rule that every method has only one line, a class can contain only approximately 7 methods.
tuinstoel
I'm amused that this is currently the lowest-ranked answer; I think I've succeeded at the "controversial" part.
Jay Bazuzi
It is indeed controversial, so I voted up.
tuinstoel
I agree completely, when will people see the light? I use Perl so I don't know how to write a function with more than one line of code, also, what is this "Refactor" thing you speak of? :-O
Robert Gamble
You must be a functional programmer... but one line per function is still a little extreme ;)
ceretullis
I'm sorry this is nonsense. -1 from me
Friedrich
It's not controversial - it's inane.
Software Monkey
That depends on your definition of "line". For some methods, even a single line is too much.
G B
No method I've ever written (as far as I recall) has just one line of code =)
Jader Dias
int screwYou() { printf("This is balls...\n"); }
Jasarien
Typically, when I write a *void* dummy method, just due to formatting conventions, it takes at least two lines. Non-void functions typically take three lines. Of course, like Kiv said, you can have 10,000 characters in a single line - so "lines" might not be the best metric for program size.
luiscubal
This is controversial because I do not think you can apply this type of statement to all languages.
atconway
@atconway: C++ fails, because you can't do anything useful in one statement. Perl fails because even one line is confusing. (To all: there is sanity behind this, but I was going for shock value.)
Jay Bazuzi
+16  A: 

Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.

Jay Bazuzi
You must have a really large screen then. Do you also think that a class can have no more than 3 or 4 methods, because no more clearly fit in the 41 lines that fit on my screen? Voting up, because this is really controversial.
Rene Saarsoo
Rene: thanks for disagreeing with me without dismissing my answer out of hand. I sense an open mind.
Jay Bazuzi
I have to disagree as well. I write a lot of Python classes and not many of them fit on my screen. Of course, I'm not counting my netbook's screen because that would just be unfair to me. =P
sli
Screen size varies widely depending on your visual acuity. I keep my screens running at 1680×1250, and use Consolas 8pt. What I can see on one screen is likely *much* more than a guy running at 640×480 using Courier New 10pt.
Mike Hofer
Make that, "Screen capacity varies widely depending on your visual acuity and display settings." :-) Not enough coffee yet. :-)
Mike Hofer
@Mike: it's true, screen capacity varies. To follow my guideline, you have to decide which screen you want to fit on. On a team, you have to make that decision together. Still, the principle is sound: I want to be able to look at a whole class and comprehend it in its entirety, without scrolling.
Jay Bazuzi
This might be quite challenging to implement in some languages that are more verbose (require more plumbing), but I admire the general sentiment.
Rob Williams
@Rob: thanks, and you're right. In some languages you can Extract Class and get some compactness, hopefully for the benefit of your code. In others (C++ I'm looking at you!) even simple classes have to work very hard to function.
Jay Bazuzi
Do you have any other rules to go with this? The list of classes in an API should fit on one screen? What is it in the class that you need to see anyway? Surely the name tells you all about what it can do! What need is there to look at the methods in a list?
Greg Domjan
Some other rules that may fit: "Methods should have one statement" and "blocks should have only one statement" and "switch cases must be trivial" and "each 'enum' type should be mentioned in a conditional only once". :-)
Jay Bazuzi
Ouch. It can be hard enough to make a method fit on the screen, never mind an entire class (my main language is Java BTW)
finnw
For some of my classes, I can barely fit the member list on the screen. If an object is to represent something, it should do so in its entirety. Breaking it up into many smaller classes just adds visual complexity (right click > go to definition, ad nauseam) where it need not exist.
SnOrfus
@SnOrfus: I bet that there are bits of self-contained, general-purpose, reusable bits of functionality in those big classes, that would make COMPLETE SENSE as a new class. You wouldn't be confused when looking at a reference to one, because the name and its functionality would be obvious.
Jay Bazuzi
I think this is baiting. The implication is that a class should have a limit to the number of attributes it can have because their declaration eats into the space for method bodies. This sounds like a language troll as in, any language that can't fit a class onto one screen isn't fit to use. Try coding something complex like the contact details for a person which includes an international address including phone numbers, email, fax, etc.
Kelly French
r u talking abt classes in c++ where function body is declared outside the class? then may be u r right...
Amarghosh
@Amarghosh No, that's not what I'm talking about. It's not possible to do this in C++ because the language is too complex and unwieldy. Also, I wish you would write English.
Jay Bazuzi
Not if you're programming for a mobile phone.
Daniel Daranas
+10  A: 

Explicit self in Python's method declarations is a poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with an apparent off-by-one error in the reported number of arguments.
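
A small Python 3 illustration of that off-by-one confusion (the class is invented for the example):

    class Greeter:
        def greet(self, name):       # declared with two parameters...
            return "Hello, " + name

    g = Greeter()
    g.greet("Alice", "Bob")
    # TypeError (roughly): greet() takes 2 positional arguments but 3 were given
    # The caller passed two arguments; the implicit self makes it three,
    # which is the apparent off-by-one in the reported count.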

porneL
I've certainly forgotten to type "self" many times myself, but what would you have done instead? You can't just imply self in all method declarations because of classmethods and staticmethods.
Kiv
I often mistype it as `slef` and I get errors because `self` is undeclared
hasen j
I think that `def` in `class` should imply `self`, and other types of methods could use different/additional keyword, like `defstatic`/`static def`.
porneL
It's actually due to an implementation problem early on in the language design -- apparently Guido and team could not figure out how to bind the implicit self parameter to its enclosing environment, short of just passing it explicitly. Hope I got that right, not a compiler/translator guru.
Please read around and reconsider your opinion: http://effbot.org/pyfaq/why-must-self-be-used-explicitly-in-method-definitions-and-calls.htm and http://www.artima.com/weblogs/viewpost.jsp?thread=214325 are two good places to start.
Daz
@Daz: the links you've given talk about either the body of a function (but I'm talking about the declaration of arguments) or the semantics of functions being first-class (which is a completely orthogonal issue to the syntax).
porneL
+5  A: 

Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
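
As a hedged sketch of one way to act on this in Python (NewType is from the standard typing module; the type names and day table are illustrative):

    from typing import NewType

    # What matters is what the value is used for, not its width:
    # a day of the month is not interchangeable with a month number.
    DayOfMonth = NewType("DayOfMonth", int)
    Month = NewType("Month", int)

    DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

    def days_in(month: Month) -> DayOfMonth:
        return DayOfMonth(DAYS[month - 1])

    d = days_in(Month(2))   # fine: 28
    # days_in(d) would be flagged by a static checker such as mypy,
    # even though both values are plain ints at runtime.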

David Thornley
It makes programs quite a bit slower. Compare Python to C or C++ and you'll see a huge performance difference when working with integers. It will avoid overflows at the expense of full checking all the time. That is a source of premature pessimization in many cases.
David Rodríguez - dribeas
In at least Common Lisp, you can specify data types later, once you get the program working correctly. That's how CMU Common Lisp beat out a Fortran compiler in a number-crunching contest once.
David Thornley
That's basically Alan Perlis: "Functions delay binding: data structures induce binding. Moral: Structure data late in the programming process."
just somebody
+8  A: 

We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.
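
As a hedged sketch of that shortcut, in Python with sqlite3 standing in for whatever database and framework are actually in play (and assuming a big_report table exists):

import sqlite3

conn = sqlite3.connect("reports.db")

# By-the-book MVC: the model materializes the whole report in memory,
# then hands it up through the controller to the view.
def report_via_model():
    return conn.execute("SELECT * FROM big_report").fetchall()

# The shortcut: the view iterates the cursor directly, so memory use
# stays flat however large the report grows.
def render_report(out):
    for row in conn.execute("SELECT * FROM big_report"):
        out.write(",".join(map(str, row)) + "\n")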

Performance trumps development standards, that's my controversial view.

jjriv
An excellent example of how sometimes rules are made to be broken. Do everything right but be prepared to do some things wrong from necessity.
Kendall Helmstetter Gelner
Interesting point!
Seventh Element
Performance trumps development standards -- if it is too poor to stand. As long as performance is not a problem, there is no need to fix it.
Aaron Digulla
Don't forget, what is considered "right" in terms of development standards was just somebody's common-sense temporary opinion that happened to get picked up by a lot of people. It is not a commandment from "on high" - common sense can change but is always useful. Good work.
Mike Dunlavey
+7  A: 

I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping that whatever is above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong in either transparent or opaque objects is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage and, if handled properly, forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
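
A minimal sketch of the convention being argued for (Python, all names invented): the possible failures live in the signature, where coverage can be audited, rather than in whatever handlers happen to exist up the stack.

from enum import Enum

class Status(Enum):
    OK = 0
    EMPTY_INPUT = 1
    BAD_FORMAT = 2

def parse_setting(text):
    # Return (status, value); every caller must inspect the status.
    if not text:
        return Status.EMPTY_INPUT, None
    if "=" not in text:
        return Status.BAD_FORMAT, None
    key, _, value = text.partition("=")
    return Status.OK, (key.strip(), value.strip())

status, setting = parse_setting("retries = 3")
if status is not Status.OK:
    print("config error:", status.name)   # forced to say something useful
else:
    print("parsed:", setting)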

--

My final objection is the use of garbage-collected languages. Don't get me wrong: I love them in some circumstances, but in general, for server/MC systems, they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long, or even forever, based on non-obvious circular references in their dependency graphs.

Non-GC systems, following a few simple patterns and using memory accounting tools, don't have this problem, but they do require more work in design and testing up front than GC environments. The trade-off is that memory leaks are extremely easy to spot during testing in non-GC systems, while finding GC-related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections, etc.? In my environment, the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in software description.

Einstein
Return codes have the problem of coupling too many elements of a chain of calls to understand what they mean. That is to say, that everything between a called function and something that might handle an error has to understand the return codes, at least to pass them along - that can be a mess.
Kendall Helmstetter Gelner
My general advice is to follow a convention and don't fall into the trap of attempting to have them indicate specific error conditions. At each level you should take steps to ensure meaning is normalized. (Which usually isn't hard/necessary if you follow a convention.)
Einstein
Good error-code style beats bad exception code. But then again, there is good exception-handling code, where exceptions are thrown and caught only where it makes sense... good exception code separates error handling from the error, and need not be replicated in each function of the stack.
David Rodríguez - dribeas
If a GC platform is not right for your particular situation, use good judgment and don't use it. It's as simple as that.
Seventh Element
+3  A: 

Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I have a hard time figuring out all your switching between echoing HTML and dropping in and out of ?> ... <?php blocks (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: they are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary language include the secondary one.
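
The same principle sketched in Python (standing in here for PHP): the primary language fills holes in a template, instead of the two languages interleaving line by line.

from string import Template

page = Template("""\
<html><body>
  <h1>$title</h1>
  <p>$body</p>
</body></html>
""")

# The HTML lives in one place; the primary language only fills the holes.
print(page.substitute(title="Hello", body="One language drives, the other rides."))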

The reason you jump between PHP, JavaScript and HTML all the time is that you are bad at all three of them.

OK, maybe it's not exactly controversial. I had the impression this was a general frustration-venting topic :)

What? To build a dynamic, server-side generated website you'll need all three (unless you use another system). For PHP, you've got your templating, server power etc. For HTML you have the basis of the actual site. JS: dynamically loaded content, special features (syntax highlighting).
Dalin Seivewright
+63  A: 

Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.

Kevin Davis
+1. This is a matter of ownership: we tend to care better for things we own than for things we don't. Want proof? Take a look at your company vehicles.
AnthonyWJones
It also comes with the onus that people reporting bugs can report in sufficient detail so that it can be reproduced and tested to be proven fixed. It sucks to be so maligned when you reproduce a defect according to description, fix it, and find that the tester still has issues you didn't.
Greg Domjan
I think testing and developing are different skills; they should be done by those who are good at them. Isolating testers from developers and making it hard for testers to get their bugs fixed: no excuse.
Benjamin Confino
But they shouldn't be the only ones to test their code.
peterchen
Sounds like bad developers to me. I'd file this under not all lazy developers are good developers.
gradbot
+1 for controversy: I'm only going to test the things I think to test for, and if I design the particular method... I've already thought of everything that can go wrong (from my point of view). A good tester will see another point of view -> like your users.
SnOrfus
if the developer could not write bug-free code, he should also not test it.
Orentet
+9  A: 

Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes the programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended they taught us C++. But the damage was done. Nobody actually knew what the esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, there is only one who can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I meant is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't come at me with that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (i.e., not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.

Eduardo León
C may be fast to execute, but dynamic, interpreted languages are faster to develop in. I think you're being a little close-minded here.
Kiv
C is NOT the right tool for everything! it's not the tool for web development! there's _that_ at least!
hasen j
What are dynamic, interpreted languages good for, besides Web development? Note, I happen to hate Web apps.
Eduardo León
Sure, dynamic languages should be burned. From now on I shall always compile my shell scripts to machine code.
Rene Saarsoo
Dynamic languages are good for different jobs. They tend to be ideal for quick-and-dirty throwaway scripts for admin stuff, and they tend to be better geared toward applications that require a lot of string manipulation and need to be developed quickly.
Rontologist
That's 3 opinions in one answer, and they're all dupes
finnw
What do you mean by dupes?
Eduardo León
+1  A: 

Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).

Davis Gallinghouse
+99  A: 

C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc.) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand-coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place these events in their proper time frame.

-R

Huntrods
I would upvote if it wasn't for the "it's a horrible first language" part. I think it sucks, but it's a good first language precisely because it sucks: then one can appreciate the need for better languages!
hasen j
It's very difficult to create usable classes in C++, but once you create them, life is very easy. Way easier than using plain C. What I do is the following: I implement the functionality in C, then wrap it using C++ classes.
Eduardo León
The way I see it, a lot of misgivings about C++ stem from the fact that C++ is generally taught wrong. One typically needs to unlearn a lot of C before one can grok C++ well. Learning C++ after C never seems a good idea to me.
Nocturne
And I think that C++ is superior to C in every way, except that it unfortunately was designed to be “backwards” compatible to C.
Konrad Rudolph
I think C++ is a good example of "design by committee" done *RIGHT*. It's a mess in many ways, and for many purposes it's a lousy language. But if you bother to really learn it, there's a remarkably expressive and elegant language hidden within. It's just a shame that few people discover it.
jalf
Yea - that "elegant language, hidden within" ... IS C!!! ;-)
Huntrods
I've got another bone to pick with you: “You can teach C++ in two ways” – this is wrong. Apparently you have only ever used C++ in two ways, without unlocking its true potential. This also explains your microcontroller related experience: C is *no* faster than (well-written) C++.
Konrad Rudolph
+1: Of all the languages I've ever played with, C++ is the only one which has made me sick every time I've approached it. I've had a book on C++ for years, I pick it up every once in a while and tell myself "it really can't be that bad" and read until my eyes bleed, I've made it to page 47.
Robert Gamble
There is a third approach to learning C++: Accelerated C++ takes it. It builds from the very beginning (variables, functions) but using real C++ elements (STL). I recommend it for anyone who wants another view into C++.
David Rodríguez - dribeas
@dribeas: I appreciate the recommendation, it looks like a good book. I doubt I'll ever be able to "appreciate" what C++ has to offer but if I ever recover from my previous experiences I will take you up on your recommendation.
Robert Gamble
Okay, if C++ code was ten times slower than C code, what sort of Mickey Mouse compilers were you using? Or what idiotic code conventions were you required to use? Were you asked to do exception specifications, for example (almost always a bad idea)?
David Thornley
Just throwing this out there, but the Programming Language benchmark game has quite a few examples of C++ being faster than C.
James McMahon
"When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code." - who says you *have* to use classes, rtti and whatnot?
Johannes Schaub - litb
you don't *have* to use those features. if you only use the C subset, then C++ is equally fast as C. then, you can selectively pick those C++ features *you* like. some vector sugar here, some other stuff there. isn't that nice?
Johannes Schaub - litb
and i agree it's all but a nice first language. it's not wise to teach it first IMHO. and it's good that it's compatible to C. nuff said :)
Johannes Schaub - litb
Well said. Also read 'Worse is Better'
Vardhan Varma
I agree that it's got a whole raft of problems. But worst ever? Ever seen INTERCAL? Befunge? Assembly language?
Brian Postow
Regarding your anecdote about C++ being an order of magnitude slower, keep in mind that C++ compilers of the '80s are not the same as C++ compilers of today.
notJim
I agree that it's the worst language ever. Except for all the others.
Kaz Dragon
I don't agree that it's the worst language; I do agree that it's a bad language; I also agree that it's a bad first language. C++ is powerful and has a lot of features that are very useful. This makes C++ a good choice - sometimes. C++ also has a lot of hidden evil (lots of undefined behavior that looks perfectly fine...) which makes it a bad language and definitely a bad first language.
Dan
@david-basarab - C++ compilers are now much better! I use C++ not only for MIDI but for audio DSP algorithms - utilizing C++ templates makes it very powerful to have tunable compile-time parameters such as buffer size and layout, which allows for automatic SSE/AltiVec optimizations. The benefit of C++ now is not the language (which is always a template puzzle nowadays) but the compilers, which are better at optimizing real-time functions than those for Haskell, Ada, Scheme and Scala.
jdkoftinoff
-1. C++ is still the most powerful multi-paradigm widely available language there is. It's the most adaptable of them all, therefore it can solve many different problems, which in some applications is _very_ useful. It might not be best at each specific thing, but overall, it's seldom a really bad choice.
Marcus Lindblom
C++ is like Democracy, "Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time." -Sir Winston Churchill
gradbot
C++ is massive, and massively popular. Like all languages, it has applications for which it is well suited, and applications for which it is poorly suited.
baultista
+1 For second language. I learned Java first and a bit of C one year later. I'm glad I learned the low-level C stuff because it makes me a better high-level programmer, but I'm also glad I didn't have to start with C.
Bart van Heukelom
A: 

Higher-level languages should be one-based instead of zero-based. This would eliminate "off by one" errors when dealing with arrays/collections.
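
To see where the off-by-one disputes actually live, here is a small sketch in (zero-based) Python, with the one-based equivalents in comments; purely illustrative:

items = ["a", "b", "c", "d"]

# Zero-based, half-open: the length is also the upper bound, and
# adjacent ranges share their split point with no +1/-1 fixups.
for i in range(0, len(items)):        # visits 0, 1, 2, 3
    print(items[i])

first_half = items[0:2]               # ["a", "b"]
second_half = items[2:4]              # ["c", "d"]: same split point, 2

# One-based, closed ranges would run 1..len(items) and split as
# 1..2 / 3..4; the +1 between the two halves is exactly the sort of
# adjustment that gets forgotten.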

I think arrays should not be based at all. If you want to refer to the first item, one should use l_array[l_array.first]; to the last item, l_array[l_array.last]. If you want to loop: for i in l_array.first..l_array.last loop ...do your stuff... end loop;
tuinstoel
@tuinstoel, that's what lists are for. Sometimes you need random access to elements. For that, you need an index. By the way, I don't agree that arrays should be one based. Zero is more convenient most of the time IMHO.
Matthew Crumley
One based indexing can get pretty awkward... I like this article by DIjkstra: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html
Kiv
@Matthew Crumley, if you want to access the second element, do l_array[l_array.first+1].
tuinstoel
@Kiv, it's Dijkstra, not Dljkstra.
tuinstoel
Wrong. Zero-based arrays are the most natural ones. When you use zero-based arrays, the array's length is the set of valid indices, according to Peano arithmetic.
Eduardo León
@Eduardo León, according to en.wikipedia, 0 or 1 doesn't matter: "For example, the natural numbers starting with one also satisfy the axioms." See: http://en.wikipedia.org/wiki/Natural_number#Peano_axioms
tuinstoel
I find one-based leads to even more off-by-one errors.
Matthias Wandel
Changing from 0 to 1 would just change the OBOEs, not eliminate them. I have to use languages that use both, and the errors are just as common in both (just different). A non-idea.
duncan
I stick with my idea that arrays shouldn't be based.
tuinstoel
One-based indexing is the source of huge errors in Delphi, where strings are one-based and everything else is zero-based. VB variant arrays can be initialized to have any lower and upper bound you like. And that's a perfect hell.
Warren P
By the way, this one should be upvoted instead of down-voted. It's definitely OUT THERE and controversial. Also, crazy.
Warren P
+9  A: 

Preconditions for arguments to methods/functions should be part of the language, rather than something programmers must always check by hand.
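
Absent language support, here is roughly what gets emulated today: a hedged Python sketch using a decorator, all names invented.

import functools

def precondition(check, message):
    # Attach an argument check to a function: an approximation of what
    # a language-level precondition would verify automatically.
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError(message)
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@precondition(lambda n: n >= 0, "n must be non-negative")
def isqrt(n):
    return int(n ** 0.5)

print(isqrt(9))   # 3
isqrt(-1)         # ValueError: n must be non-negative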

kal
I like it, but is it controversial?
erikkallen
+33  A: 

There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.

Brian
Amen ------------
Dario
Those who can't do, teach. By that logic, the people who can't program are the ones teaching us how to program. I've experienced it myself: professors I've had have admitted to being unable to do the problems and exercises they assign. Protip: take the classes with the teachers contracted by the university, not tenured (or tenure-track) professors.
baultista
+15  A: 

The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.

Todd Friedlich
It reminds me of a quote (I can't remember who said it though) - "Measuring a program by lines of code is like measuring a plane by weight."
Cristián Romo
@Cristián: It was Bill Gates who said that.
Dan Dyer
+18  A: 

C (or C++) should be the first programming language

The first language should NOT be the easy one; it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that, it forces students to think about memory and all the low level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#

hasen j
So everybody should suffer because you have suffered? It's always nice to learn useless things, but come on.
01
Not really, I really loved C++ back in the day, I was in denial when I heard from a prof that it's the worst language he's ever seen.
hasen j
+1: Everyone should learn C first because programming isn't for everyone and it isn't for anyone that can't grasp C.
Robert Gamble
Blast them with raw machine code. Suffer!!! The assembler course was the most fun I had (during class time) in university.
Jonathan C Dickinson
Mythology. Before encountering C I learned the assembly of two or three CPUs and familiarized myself with others. Some CPUs are a pleasure to program because of their orthogonal instruction sets; others are a pain, but less idiosyncratic than C. C fails for its intended use, i.e. as a portable assembly.
MaD70
.. and I find pathetic the elitism that too many programmers show.
MaD70
My university taught programming almost exclusively in Java. I felt simultaneously aroused and cheated when I finally got around to learning C and C++.
iandisme
I disagree. It's hard to get first-timers excited about memory allocation. Start with a language where you can get near-instant gratification. The web languages are good for this.
Matt
@Matt: you're not supposed to agree ;)
hasen j
I did a lot of teaching introductory CS. What I found was most useful was first a few weeks on a decimal machine simulator, to set up the basic mental framework of addresses, memory, instructions, and stepwise execution. Then we did Basic (sorry), then Pascal. I like C (and C++) but those are hell to teach to newbies, because there are too many subtle ways for students to get confused, like the difference between pointers and array referencing, and nested types. It's not acceptable to say "sink or swim" - they pay tuition.
Mike Dunlavey
+11  A: 

A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell'em what they need. Give'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.
cookre
"Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax." - you just broke many hearts, some people learn new language every year.
01
And it gets easier and easier, doesn't it?
cookre
"you finally realize that all programming languages are the same" -- you hear that a lot from people who have only programmed in C#, C++, flavors of VB, Java, and maybe Python. Then you finally learn Haskell, Ocaml, Erlang, Prolog, and Lisp, and you feel like an idiot for having missed so much.
Juliet
It's always nice to have lots of toys, but we know they all serve the same purpose - to entertain us in some way. Likewise with every programming language I've seen over the past forty some odd years. As mentioned above, it's all about algorithm - not syntax.
cookre
@cookre: try to use algorithms designed to be expressed in an imperative programming language (PL) with a pure lazy functional PL like Haskell, or in a (constraint) logic PL like Prolog (and derivatives), or in a PL designed for fault tolerance and massive concurrency, like Erlang, and you will discover that semantic differences are all that really count.
MaD70
+6  A: 

According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).

Bill the Lizard
I'm proud to say I've read all the programming books I own. Even the monstrous Programming Python and Programming Perl.
sli
I have a B.A. in English. It is likely that I'm a better programmer for it. Is that what you mean?
postfuturist
You over-estimate the value of education. I've been a full-time programmer for 15 years and am self-taught. When I meet developers who are fresh out of school, I sometimes wonder if their whole education wasn't a big waste of time. They know next to nothing about "the real world", can seldom work independently, and their skills are average at best.
Seventh Element
@Seventh Element: I would expect someone fresh out of school with no work experience to have average skills. Comparing a fresh graduate to someone with 15 years of work experience is comparing apples to oranges. I worked as a programmer for 8 years before going back to school to get my degree. I think I have a pretty strong grasp of the value of my education *to me*. You get out of it what you put into it.
Bill the Lizard
+10  A: 

Jon Skeet is not all that special!

hasen j
Almost a +1 because of the controversy, but I can't since you don't back it up.
martiert
I did back it up! don't you see the exclamation mark??
hasen j
+8  A: 

The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!

Enigme
Please justify your position. Note: test all paths requires that you only write paths you can test. Mindless error handlers go away.
Jay Bazuzi
Ever heard of unit tests? Using unit tests you don't need to "test all paths" after each change you make to the code. (Anyway, I think it's impossible to test all paths except in a tiny little application.)
Stefan Steinegger
A corollary: The fewer paths a piece of code has the better.
dangph
+24  A: 

The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.

rustyshelf
It sounds like you're seeing process being used to compensate for poor programmers, not to enhance great developers. This is why the Agile Manifesto says "Individuals and interactions over processes and tools". Instead of adding process for poor programmers, add it when the number of programmers grows.
Jay Bazuzi
@jay not quite. I think that process, even put around the best developers, causes a decrease in code quality. I would liken it to meeting a famous painter and then telling him the rules he needs to abide by to make a good painting. It might make sense to you, but it's ridiculous.
rustyshelf
I suspect great painters have their own processes.
Alex Baranosky
Process takes away energy that makes code better - that applies to coders good and bad. Some process is useful, but process breeds process and you always end up with too much.
Kendall Helmstetter Gelner
@GordonG Perhaps I should have said the more 'external' process...
rustyshelf
I couldn't agree with you more! The arguments I've gotten into with other programmers over their strict adherence to processes could fill a book the size of War And Peace. That includes both "good" and bad processes, though.
sli
I've seen the opposite effect. I worked at a company which used an Agile methodology, and the code quality was nightmarishly bad, beyond awful. I now work at a company with a very rigid process, lots of red tape around undocumented changes, and the resulting code is top notch.
Juliet
One size does not fit all. Small project, small team in one location, experienced developers, domain expert on site, software not absolutely critical? (With some software, if you have a bug, someone might die.) Then yes, just run wild. If not, you need more process.
MarkJ
If your processes make things harder, you're doing it wrong. It should be like an aircraft takeoff checklist: it helps you remember to do stuff in the right order. Automate things: you're a software developer, dammit. Make the easy thing the right thing.
Tim Williscroft
+1  A: 
dreftymac
Replace a markup language with YAML... you must be crazy. Voting up for good controversy.
Rene Saarsoo
+5  A: 

Inheritance is evil and should be deprecated.

The truth is aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.
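
A small Python sketch of the contrast (names invented): inheritance drags in the whole parent interface, while aggregation exposes only the contract you actually mean.

# Inheritance: Stack *is* a list, so callers may insert(), sort(), etc.,
# quietly breaking the LIFO contract.
class InheritedStack(list):
    def push(self, item):
        self.append(item)

# Aggregation: the list is a hidden collaborator; only push/pop leak out.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2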

vava
When I teach this, I make a big point of telling people that I'm only teaching it because they have to know the syntax to do it. There are other things we have to teach because there is special syntax involved, and people take what they learn from special syntax and use it all the time.
brian d foy
My controversial opinion in this regard is anyone who describes a technology as "evil" is evil. Patterns don't kill people, people kill people.
dreftymac
I don't think I agree, but I found your post interesting: upvoted.
Jay Bazuzi
"Static typed OOP languages can't avoid inheritance," -- OCaml is a statically typed OOP language, but it also supports structural typing ((http://en.wikipedia.org/wiki/Structural_type_system), which is more or less "duck typing for static languages". It also downplays the role of inheritance.
Juliet
Even in statically typed languages inheritance is overused. Prefer composition to inheritance in each and any language.
David Rodríguez - dribeas
"Static typed OOP languages can't avoid inheritance," Of course they can, with interfaces, delegations and programming by contract. Apart from that, and the "in all cases" part (I'd have said "in most cases"), I agree.
fbonnet
+8  A: 

Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces a community of enthusiastic evangelists.

Dave
not really "controversial" - it's the standing practice everywhere I've worked
warren
+112  A: 

A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade; computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages. e.g. (assembler, c, lisp, ruby, smalltalk)

Starkii
The first one is not really controversial, at least not in the CS field.
wds
I disagree. I know many people studying computer science that think they are getting a degree in programming. Every time I hear whining about why CS programs don't teach everyone Java I offer up a pained sigh.
Starkii
Java doesn't really teach you how to be a real programmer, since there's so much you can't learn with it. It's like building a car with legos.
Lance Roberts
I may agree with the first point, but saying that knowing only Java could make a programmer... that's a crime, punishable by death!!!
hasen j
Can you move your second answer to another post so it can be rated separately.
Greg Domjan
@Greg: Done. Thanks for the suggestion.
Starkii
I agree with "does not", but not with "is not supposed to". Where else in academia are you supposed to learn to program? There is no analog in software to the Engineering disciplies (mechanical, electrical, civil etc.).
MusiGenesis
@MusiGenesis: My local community college has an "Associate in Applied Science Degree" in "Computer Programming". (Washtenaw Community College) That is where I would go to learn to be a programmer. It is important not to confuse Computer Science with Computer Programming. They are _NOT_ the same thing.
Starkii
@MusiGenesis: I've actually just completed my degree in Engineering (Software). I'm certainly not a computer scientist, and I don't want to be.
A J Lane
A CS degree is indeed not a programming degree. But then again, a programming degree doesn't make you a good programmer either. Both can introduce you to the basics and some special subfields, but it's up to you to use that as one of many sources of information as you develop your skills. Now, you may be able to solve any problem your work poses to you using a single language, like Java. But is it the best way? Learning several different languages and paradigms can help expand your perception of how problems can be solved using program code, and allow you to create better solutions.
Lucas Lindström
I disagree that CS does not teach you to be a programmer. It DOES and SHOULD do that - incidentally by teaching multiple languages, not one only - but that's not ALL it should do. CS degrees should also teach you about as many different areas of CS as possible, eg basic programming, functional languages, databases, cryptography, AI, language engineering (ie compilers/parsing), architecture and math-leaning areas like computer graphics and various algorithms.
DisgruntledGoat
Programming is easier in some fields than in others. Web development and most of the work you do in Information Systems is not hard. If you have a bit of a knack for programming, you can do this stuff very well without a CS or engineering degree. If you want to be a game programmer, write device drivers, work with embedded systems, or other things of the like, you'll need to know certain things from the degree.
baultista
+8  A: 

Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten me and my colleagues into some very heated discussions, since web services are such a buzzy topic. Any project that mandates the use of web services is doomed, because it clearly already has ridiculous demands pushed down from management.

Jesse Pepper
My company writes auto-insurance software, and we rely on several off-site web services to verify VIN numbers and run OFAC checks on people. We also make some of our APIs available through web services to third-party vendors. How would you suggest our software be written without web services?
Juliet
@Juliet: what in " Web services should NEVER be used within a system **where both client and server are being written** " do you not understand? It's clear that in your situation you don't control both parts of the system, so your rhetorical question is irrelevant.
MaD70
+17  A: 

My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, what bizarre cargo cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly OOP", we should be looking at which language features are shown, by experiment, to actually increase productivity, instead of trying to force a language into being some imagined ideal, or indeed forcing our programs to conform to some platonic ideal of a "truly object-oriented program".

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.

Breton
It sounds like you're mixing programming methodologies and language design philosophies, while also recognizing the damage of zealotry. As a result, your potentially interesting thoughts are cluttered and unclear.
Jay Bazuzi
The "Truly XYZ" idiom is usually a case of the "No True Scotsman" fallacy. As far as the rest, have you read http://xahlee.org/Periodic_dosage_dir/t2/oop.html? Also, this seems very similar to a perlmonks post, have you written on this before?
dreftymac
A language is a user interface that can make a programming methodology easier. An OOP language, therefore, is a language designed to make OOP programming easier, which makes them closely related subjects. This position was argued better by Apocalisp, elsewhere in this question.
Breton
I've never heard anyone pontificate on the phrase "truly object oriented" in the past 10 years I've been programming. Never. Not even once. Are you actually quoting some obnoxious manager?
Juliet
Anyone who started with Java or C++ and then tried Lua, or JavaScript, or some other language that doesn't have some arbitrary Java feature. Anyone entrenched in the Java world who has a self-superior view that singletons are a terrible idea. Anyone who's read the GoF book and thought it was the future.
Breton
Almost, IMHO. I think OOP is the ideal way to deal with some aspects of programming, but it's not what it's made out to be: It's not a replacement for every methodology and/or piece of code you ever come across; It's not immune from being taken too far; It's not your master; It's not irreplaceable.
jTresidder
Do you come from a VB6 background and never embraced OOP?
Velika
Incorrect. There's nothing wrong with OOP, it's just a strategy. What the problem is, is the attitude that I should have "embraced" it, or the only alternative is I'm some backwards beginner. It is not the end all be all, it is not a religion, and I don't have to be crucified in order to expunge me from the pool of programmers so that all "right" thinking programmers can live free of sin. I posted my answer to this question because it is the most controversial opinion I have. That was the question.
Breton
The reason it's the worst thing to happen to programming is that it prevents programmers from looking at other solutions that may actually be better suited to the problem, and it prevents us from looking at or accepting new paradigms that might be better suited to most problems.
Breton
I hate when newcomers lecture me about the greatness of OOP when I program in OO languages from mid '80s. They are totally blind to OOP shortcomings, they don't know that "OOP" is an ill-defined concept and, worst of all, they ignore a whole world of options w.r.t. programming paradigms.
MaD70
+1 Wish I could upvote more. This field is rife with bandwagons, gurus, "right thinking", and occasionally good ideas made into religions. To a mechanical/electrical engineer (like me) this is so weird. I assume if something is true there's a scientific reason why. I also assume inventiveness is a good thing. Very little of that in this field.
Mike Dunlavey
+363  A: 

UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.

Ludwig Wensauer
I usually need to sketch up classes when designing an object-oriented system. I may as well use a standardized syntax for sketching. I'm not even forced to use ALL of the syntax, just the parts that I like.
Lucas Lindström
The way I see it, using a "standardised" diagram notation forces you into using some unnecessary syntax much of the time. I do agree with what UML *does*, but I think a standard is pointless. Circles and arrows are perfectly fine for nearly every case.
DisgruntledGoat
Ok, let's say that UML is worthless. Do you have any diagram templates to use in place of UML? Do you think diagrams in general are a waste of time? Is this a personal preference? As in, do you use a list of directions (turn left, go one mile, turn right, etc.) to get somewhere you've never been? Do maps confuse you? I'm not trying to be snide; I truly believe that there is a personality difference between the visual and non-visual preferences of people. That could be what causes people to dislike UML: its usefulness depends on the visual nature of the individual, which is subjective.
Kelly French
I have seen all of the various diagrams that UML outline, and there are great times to use all of them. The problems start to occur when diagrams are made "for completeness" when no one is asking for them. In these cases, find the best UML diagram (or two) and make it fully qualified.
TahoeWolverine
Best use of UML is to not take it too seriously. Opening up a big UML software package to design your system? = You're doing too much big design up front. Sketching on notepads? = Good.
Ollie Saunders
Blobby-Grams are my preferred derogatory term for UML diagrams.
Warren P
Best use of UML is for documentation, once you've finished the project.
Random
UML is not intended to be used as documentation, although it is often used as such. It can be nice to have class diagrams and object interaction diagrams on record, but IMO they're much more useful when trying to conceptualize the inner workings of new features and illustrate them to other developers.
baultista
Kent Beck described his Galactic Modelling Language (GML) -- on an index card of course. It has three primitives: Box, Line, Label. I find it works for 90% of discussions.
Mike Clark
+6  A: 

You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach yourself Advanced Power SQLJava#BeansServer in 3.14159 minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They make no attempt to keep their skill sets up to date, and so never learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific horrific maintenance burdens for programmers for years, and that these can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It." (TMTOWTDI) People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist programming in every language, and easy to learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking to find the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for a while, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is by making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and to look for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.

skiphoppy
+5  A: 

Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding.)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat. The finest example of Microsoft's efforts for web site development (Expressionless Web 2) is also the finest example of slow, bloated cr@pw@re ever written. (Try Web Studio instead.)

Response: OK well let me address the Underscore issue a little. From the C link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take for example NetworkABCKey. Notice how the C from ABC and K from Key are confused. Some people don't mind this and others just hate it, so you'll find different policies in different code, and you never know what to call something.

I fall into the former category. I choose names VERY carefully and if you cannot figure out in one glance that the K belongs to Key then english is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

* It makes C functions very different from any C++ related names.

Example

int some_bloody_function() { }

These "standards" and conventions are simply the arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, They clutter up code and make something that should be short and sweet to read, clumsy, long winded and cluttered.

C has been adopted as the de facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high-level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well-thought-out CamelCaps() function/variable name has yet to let me down!
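
To make the comparison concrete, here is a minimal sketch (hypothetical names, my own example) of the same declarations in both styles:

// Global constant: ALL_CAPS with underscores, because it is BLOODY_OBVIOUS
const int MAX_RETRY_COUNT = 5;

// The same function in both styles:
int get_network_key_length(const char* network_key);  // underscore style
int GetNetworkKeyLength(const char* networkKey);      // CamelCaps: shorter to scan, less clutter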

Mike Trader
Any justification for your positions?
Jay Bazuzi
So you see no value in using style (camelCase vs CamelCase vs ALL_CAPS) to indicate whether the reference is to a class, a variable, a constant, or whatever? I can't agree. It seems you may not be aware of naming conventions as an idea, e.g. http://www.possibility.com/Cpp/CppCodingStandard.html#names
duncan
+9  A: 

Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straightjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" can not all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).

skiphoppy
+34  A: 

A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue and is a pathetic excuse for many a lazy manager who did not want to read carefully created reports and documentation to say "I need you to show me in a diagram."

My wife majored in linguistics and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not carry across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, and they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.

skiphoppy
I don't agree that a picture is not worth a thousand words. I do agree with the sentiment in the answer. Perhaps it would be better to ask "Would you use 1000 words when only a few (or even one) would do?" Using an image instead of well-chosen text may effectively be just that.
AnthonyWJones
Some words are worth thousands of pictures. (What about sounds, music, odours, etc?)
moala
Yes but a 32,000 byte bitmap IS one thousand words. At least until you move to a 64-bit CPU.
Kelly French
+668  A: 

XML is highly overrated

I think too many jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

My 5 cents

Can you give some specific examples of how you see XML being misused?
Jay Bazuzi
One specific example: Sitemaps (note capital “S”). What a f*cking waste of bandwidth, where a simple list would suffice. <http://sitemaps.org/>
Konrad Rudolph
It sucks even for the web.
hasen j
configuration files that aren't changed after building the project - just use your programming language (optionally an embedded DSL)
Thomas Danecker
I work at a college and do lots of data returns; I personally prefer to send in XML. The normal requests are usually for CSV or, even worse, fixed-length records. It's a pain finding bugs and having to get all the documentation. XML certainly simplifies things if I just need some example records.
PeteT
I store a lot of information out to automatically generated XML files when an app is closed and then reload it when the app is started, so I would have to disagree!
xoxo
Data transmission. I've seen limited bandwith channels with things like <AVeryLongFieldName>A</AVeryLongFieldName>. In general, if you need concise, XML is probably not the solution.
David Thornley
You should only use XML for what it's designed for: transporting data between different applications. It's no storage engine (definitely no database! as some web developers seem to think), and it's also not for storing your app state on shutdown.
Pim Jager
@Pim Jager: I disagree. It is very useful for storing data that may be changed outside the app, without requiring a full GUI app (custom or otherwise) to make changes to that data.
Schmuli
I don't like XML either; I try to use JSON whenever possible.
Edin
I strongly agree as well. Most XML Parsers/Generators are also over-engineered to the point of hopelessness.
Ryan Delucchi
I use INI for config files and CSV for data transmission.
Unkwntech
Joel says " XML is a Dumb Format For Storing Data" http://www.joelonsoftware.com/articles/fog0000000296.html
MarkJ
Agreed. XML is being used where DSLs (domain specific languages) make much more sense. Dealing with XML when defining a build file is painful. Never trust anyone who says something is "just" XML.
Lee Harold
XML is like violence. If it isn't working for you, you're not using enough of it. :)
Mikeage
XML is luggage, not a closet. And even then, often you don't need a roll-around suitcase with the pull-out handle when a duffle (or a Walmart bag) will do.
n8wrl
I wish I could vote this one up twice. Also check out: http://xmlsucks.org/
grieve
I think Ant buildfiles are a perfect example of XML abuse. XML is for data, not scripts.
Daniel Straight
I really like XML for the stuff I do, as long as you use it for what it's good at: storing recursable hierarchies of information.
Kevin Laity
JSON is usually a better format for "web" stuff.. ;)
Tracker1
Unkwntech: I think it depends, INI doesn't work for nested data structures, and I think trying to combine the two is a monstrosity (ex:Apache's configuration)
Tracker1
I think XML is only good where the format may change a bit (adding new fields, etc), or where there is a bunch of nesting and it needs to be human-readable. Otherwise, something like var:value list would suffice (like JSON)
Mark
<opinion><subject>god</subject><verb>bless</verb><object>you</object></opinion>
Steve B.
JSON is much better, but Lua is the best as a configuration language (it's designed that way).
majkinetor
I'd disagree if it weren't for the fact that SOO many people overstated XML's place in the world; hype, hype, hype. ... Nothing - not even good ol' XML - could live up to the original billing it got when it first entered the scene.
Gabriel
Good thing we have the DataSet.ReadXml() function. It translates any god-forsaken XML hierarchy into a database-like list of tables. Relational-database people have been managing with simple 2D tables for ages now. But noooo... that's too technical for XML people. Once again programmers are focusing on Extensibility, reinventing the wheel and making it possible for a car to have an arbitrary number of arbitrarily sized wheels. "Nice job" I say! ;)
AareP
An example of XML being misused? I don't think I've ever seen an example of it being used in a way I WOULDN'T call misuse. The worst misuse is, as stated before, Ant buildfiles... or XSLT. I don't think anyone on Earth could understand more than about 10 lines of XSLT without tools to help them.
Daniel Straight
If you absolutely MUST use XML-like markup, then at least use YAML. It's easier to read (less noise), small in size, and equal to XML in every other way. I usually prefer simple "key=value" or "command parameters" files for configuration and such, and in the rare, rare cases where that isn't enough, then YAML. Note that I've never had a case where I had to use YAML :-D but I imagine it will eventually happen.
Dan
Large XML is human-readable, but only if you *really* read hard :)
OnesimusUnbound
++ Yeah, IMHO XML is one of those popular things that get way overused. (But I'm jealous. Years ago I thought Lisp was a good syntax for exchanging information, and I think XML is just a bad Lisp.)
Mike Dunlavey
So, eight months at #5 and still not one Erik Naggum (RIP) quote. (P.S. and have you *seen* MathML?)
Cirno de Bergerac
XML is not efficient for either machine readability or human readability. It is however extensible. If THAT is what you need, you should use it.
DanO
One of the best uses of XML hasn't been mentioned - object serialization/deserialization. Sending stateful objects from one point to another has tremendous traction in distributed caching. In fact, whenever you need a disk-based storage of a hierarchy, I think XML is appropriate - and that includes configuration files.
joseph.ferris
XML is of no use for programmers, but very useful for smart people... and machines! :)
Max Toro
XML is a *document* format, not a random data serialization format. If you don't have like 10x more text than tags, you're doing it wrong :P
Nicolás
Amen! XML is not a "programming language", although many have tried to make it one. XML is a markup language. It's a poorly formatted style that could easily be replaced with CSV or delimited files. I have been anti-XML since day one.
Devtron
joseph.ferris, XML is bloated. Object serialization has nothing to do with XML. XML cannot serialize itself. It's not a language; it's bloated and overused by the Microsoft hype community.
Devtron
XML databases are probably the most pernicious data storage scheme I've ever seen, and I'm including MS Access.
BobMcGee
One misuse of XML is using it to generate GUIs, following the fairy-tale idea that one does not need to be a programmer to write usable user interfaces.
Taisin
+1 XML was a horrible choice for Microsoft to base XAML on. Programming in XML is a nightmare.
chaiguy
"(any idiot can invent a data exchange format better than xml)" - Douglas Crockford
Dave
@Dave: Except for virtually all the idiots who have actually tried. YAML and JSON are only really suitable for simpler cases (e.g., where there aren't lots of namespaces) and ASN.1 is *horrible* if powerful. Mind you, some people should never have been let near a schema editor…
Donal Fellows
Text files usually do the job, with less parse time and complexity :)
Mohsen
Parsing XML is always a chore and inelegant. However, XML is useful because you don't need to roll your own parser or learn yacc/bison or BNF grammars.
burkestar
A: 

"Good Coders Code and Great Coders Reuse It" This is happening right now But "Good Coder" is the only ONE who enjoy that code. and "Great Coders" are for only to find out the bug in to that because they don't have the time to think and code. But they have time for find the bug in that code.

so don't criticize!!!!!!!!

Create your own code how YOU want.

Access Denied
In the working world it is not an option to rewrite code "the way you want it"; you have to deal with what is there, regardless of who wrote it. The rest of your post is incomprehensible.
duncan
I totally disagree with you: do not reinvent the wheel, they say!
Luis Filipe
+4  A: 

Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. I.e., is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?

Matthias Wandel
"failures should take down the system" - you're definitely on crack with this one! My entire system should ***NEVER*** die because **one** component hicoughed
warren
+18  A: 

It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.

Wayne M
I agree, with the caveat (and I'm turning and looking in the direction of several teams in Redmond, Washington) that Mort is often unfairly scoped and not always well understood.
Gabriel
I'm with you Wayne, though to stay in the industry, I think we all need to go Elvis and Einstein at times. And we need to put in effort outside of work too. I rested on my laurels for a while (got married, moved, had other stuff going on) and I can see tech moving beyond me and now I have to play catch-up. Tech is moving too fast for extra effort not to be put in. I'm learning and doing side projects again, and I'm having fun. But I do resent the 14-hour-a-day folks. They will blossom, wither, and then fade. Balance is the key, but the days of being exclusively a Mort are numbered.
infocyde
+4  A: 

Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, ROR is not all it is cracked up to be by the blogosphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.

Alex UK
OOP is really bad for small-size software because it has so much overhead. But my prof said that it's super good for large-scale software, and I think you can tell by my wording that I don't know, so I will just believe my prof until proven false =P
hasen j
+4  A: 

Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess, or an estimate, it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes that our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than historically the culture was towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (but would never in a million years admit that it was true, a "plot by management" is a better reason for most people).

How long will we keep shooting ourselves in the foot, before we wake up and realize that we are the ones holding the gun, pointing it and pulling the trigger?

Paul.

Paul W Homer
That's just a lesson one has to learn through time and experience. Nevertheless, the "problem" won't get fixed because the "novices" don't realize or call it out, and too many "experienced" suffer from "not invented here" syndrome. By the way, this influences *every* profession to some extent.
dreftymac
You might want to check what the original meaning of "shoot yourself in the foot" is (as opposed to the 'new' meaning) and then consider whether creating a bit of pain and confusion in return for long-term survival is what is going on here. There is a survival strategy in hard-to-maintain code.
duncan
That type of survival strategy only works in a few large static corporate environments. If hard-to-maintain code causes the project to fail and be disbanded, it provides no long term gain. But even if it works, it's a miserable existence ...
Paul W Homer
Kudos for pointing this out. The truth is that sloppiness and heroism in software development are NOT self-evident. It's an effect of the (SW development) culture of the 60s/70s.
Thorsten79
"If we were building houses like we're building software, the first woodpecker would be the end of mankind." -- dunno who said that but he is still right ;)
Aaron Digulla
You sense the disease but the diagnosis is incorrect: writing software is **not** a manufacturing process, period. It is a wrong analogy. "Manufacturing" is reproducing a physical "thing" n times, starting from a blueprint. Now this process is not perfect, so you need to control this process of reproduction. Writing software is more akin to design, i.e. producing the blueprint. Given the blueprint (the program), a computer perfectly reproduces it, i.e. it accurately solves every instance of the problem for which it was designed (it "manufactures" each solution, given the blueprint).
MaD70
Now, designing something in engineering disciplines is certainly a creative process, but equally certainly it is **not** unconstrained or undisciplined. For example: structural engineers use math, sciences and other disciplines. Their practice is founded on knowledge, theory, experience. What you correctly describe, with an unease that I share, is a field not even at the level of good craftsmanship, let alone engineering, and certainly not art.
MaD70
+43  A: 

A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.

Tom Moseley
Real clever programmers are those that find the good answer while making it maintainable. Either that or those who hide their names from comments so users won't backfire asking for changes.
David Rodríguez - dribeas
Real genius is seeing how really complex things can be solved in a really simple way. People who write needlessly complex code are just assholes who want to feel superior to the world around them.
Seventh Element
+1 Good programmers know their own limitations - if it's so clever you can only just understand it when you're writing it, well, it's probably wrong now, and you'll never understand it in 6 months time when it needs changing.
MarkJ
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --unknown
Robert J. Walker
Robert, great quote: BTW it's from Brian Kernighan not "unknown"
MarkJ
+31  A: 

If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.

I agree clear communication is important. But grammar is secondary. Some people have poor grammar but can communicate clearly (I'm thinking of some non-native English speakers) and some people have perfect grammar but can hardly communicate at all.
John D. Cook
Ironically, there are many developers who think this is beneath them. Comments and documentation that look like they were written by a retard should somehow convey that they are truly great hackers.
Seventh Element
This isn't just about grammar and spelling either. It is possible to write something that has correct grammar and spelling yet is nearly impossible for others to understand (just as you can write a program that compiles and runs yet is impossible to understand the code). Being able to express yourself clearly in writing is very important. Having taught a comp-sci course that involves writing design documents for the last six years I've found it distressing how few of my students seem to possess this ability. And it seems to be getting worse each year.
Kris
@John D. Cook: Poor grammar is most often detrimental to communication. These rules weren't invented for no reason. (Goes to check that there are no grammar mistaeks in this comment.)
quant_dev
"If a developer cannot write **a** clear, concise and grammatical comment **s**..."Deliberate irony?
Mark Bannister
+52  A: 

Realizing that sometimes good enough is good enough is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not some crap that happens to work. But then again, when you are under a time crunch, 'some crap that happens to work' may be considered 'good enough'.

John MacIntyre
+12  A: 

Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.

John MacIntyre
That is not controversial; that is fact!
icelava
Most non-consulting programmers are stuck in a rut and live in a company bubble maintaining dinosaur code while never being exposed to anything that challenges their assumptions, except for the occasional outside consultant. How's that for controversial? ;-)
Seventh Element
@Diego: true, and consultants have an opportunity to become amazing programmers with everything they are exposed to. But in my experience, I've seen too much crap written by hacks who just picked up enough knowledge to make it work, knowing they'd never have to maintain it, and they just don't care.
John MacIntyre
I consulted for many years. There were cases where the company programmers were good but didn't understand how I was doing things, and so were inclined to criticize. Nevertheless, I'm inclined to agree with you - there are half-hearted programmers in contracting positions.
Mike Dunlavey
+5  A: 

Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something which should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.
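
As a minimal sketch of what this looks like in practice (GCC's flags; the file name is hypothetical):

// warnings.cpp
int main() {
    int unused = 42;   // -Wall flags this: "unused variable 'unused'"
    return 0;
}

// g++ -Wall warnings.cpp           -> builds anyway; the warning scrolls by and gets ignored
// g++ -Wall -Werror warnings.cpp   -> the same warning is now a build-breaking error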

JesperE
I have to disagree. A really good warning system will warn you about things that are probably bad code, but may not be depending on how you use them. If you have lint set to full, I believe there are even cases where you can't get rid of all the warnings.
Bill K
That would mean I would have to throw out my C# compiler. I have 2 (AFAIK, unfixable) warnings about environment references (that are indeed set correctly) that don't appear to break anything. Unless -Werror merely suppresses warnings and doesn't turn them into errors >_>
Dalin Seivewright
Finally, someone disagrees. It wouldn't really be a controversial opinion otherwise, now would it?
JesperE
Doesn't your C# compiler allow you to disable the warnings? If you know they are unfixable and "safe", why should the compiler keep warning? And yes, -Werror turns all warnings into errors.
JesperE
I try to get the warnings down to zero but some warnings are 50:50: They make sense in case A but not in case B. So I end up sprinkling my code with "ignore warning"... :(
Aaron Digulla
Well, as long as I'm the one writing the compiler, I agree with you. But if someone else wrote the compiler, I would want the ability to disagree with them when they claim perfectly valid constructs are warning-worthy.
nosatalian
That is why most compilers allow you to disable warnings. That's fine. What I mean is that you either disable the warning or fix it. Don't just leave it there.
JesperE
+148  A: 

Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...

petr k.
Hardly controversial, is it? It's hard not to notice that this is the case.
jalf
+1: Good programmers are hard to find, I wouldn't hire 98% of the programmers I've met, even if they offered to work for free.
Robert Gamble
That is not controversial; that is fact!
icelava
I'd +1 this but it's not controversial
annakata
It's probably controversial to the 98%.
Fabian Steeg
@fsteeg: you stole my line. :)
MusiGenesis
90% of everything is crap...
Brian Postow
+100 (if I could)
Frank V
Most people doing a job think that they do their job better than everybody else. Most of them are wrong.
Aaron
@BrianPostow, the scary thing is when you realize that this applies to medical doctors too!
jdkoftinoff
Very true, but not very controversial. Most doctors suck, most hairdressers suck, most mechanics suck, even most housekeepers suck. They suck compared to the best in the field, but to outsiders or the ignorant they are okay. They are just doing the minimum to get through the day.
Kirk Broadhurst
@Brian Postow: actually 99% of everything is space (void, nothing, whatever).
Random
+5  A: 

A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs in general are harder to write, harder to read and harder to optimize than.

4GLs should be avoided as far as possible.

Alterlife
'non technical people' never want to write code, but some people will never get it.
01
"harder to optimize than ..." what?
Mike Dunlavey
Interesting opinion, though a bit harsh, maybe.
Mike Dunlavey
All 4GLs suck. Not a majority. 100%.
Warren P
+19  A: 

Don't comment your code

Comments are not code, and therefore when things change it's very easy not to change the comment that explained the code. Instead I prefer to refactor the crap out of the code to the point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or when explaining why:

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper
rball
Your "explaining why" rationale is also subject to change if the API you are working with, for example, gets updated or improved.
dreftymac
In my small example I'm trying to show why I already did what I did. Like there's a better way to grab data, but this is the only way right now. Kind of like a note to refactor or why something happened. Also it's mainly related to my own code and not an external dependency.
rball
Icky. Don't declare a variable if you're only going to use it once. Your suggestion is not much better than, "int i,this_is_a_counter;". If you're forced to *add* extra code to get rid of comments, you've made things MORE complicated!
Brian
I have to agree with Brian, nothing worse than having a bunch of one-time-use variables.
James McMahon
I'm sick of reading this crap. The reality is that the large majority of code out there is badly written, let alone reasonably refactored. If you can't write decent (understandable) code at least have the decency of adding comments.
Seventh Element
Why are one-time variables bad? They explain what you do, they don't cost anything (if you have a half decent compiler), and you can easily use them again for the same thing. Without the firstTimeOnPage, I would be very likely to put in the if (data == null) condition somewhere else as well.
erikkallen
-1: Comments are good. Comments are a cornerstone of code. I'd rather spend 10 seconds reading a one-line comment than spend two hours trying to figure out what some really complex code does.
tsilb
You might spend 10 seconds reading a one-line comment and then 3 hours finding out that the comment is outdated and led you down the wrong path. A well named variable or method is preferable, then I know what your intentions were and know that it hasn't changed. Also easily refactorable.
rball
@brian, one time variables can give names to faceless expressions, which is nice, especially in long parameter lists.
Thorbjørn Ravn Andersen
@rball: I agree and disagree, depending on how declarative or domain-specific the language is. You have a functional spec somewhere, if only in your head. If the language is declarative enough to directly encode the functional spec, then there's no need for comments. Usually, that is not the case, so IMO the purpose of comments is to express the mapping between implementation and functional spec, to the extent that the code itself is not able to. That way, when the spec changes, as it always does, you know what code to change.
Mike Dunlavey
+16  A: 

Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.

Paul Mendoza
Yay!Also look up "YAGNI"
Bjarke Ebert
If you're writing abstraction using spaghetti code, then you're doing something very, very, wrong.
JesperE
+40  A: 

Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.

CMS
And good programmers just about the same.
David Rodríguez - dribeas
Yeah, that's why it's controversial :)
CMS
+10  A: 

My controversial opinion? Java doesn't suck but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.
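
For contrast, a minimal sketch of the same task in a language whose standard library keeps the simple case simple (C++ here; the file name is hypothetical):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("notes.txt");   // open the file
    std::string line;
    std::getline(in, line);          // read one line; no decorator stack required
    std::cout << line << '\n';
}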

Jeremy Wall
+60  A: 

Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!

kstewart
I think this was already covered.
Jay Bazuzi
Architects that *do* code are worse than those that don't. i.e. their productivity is negative.
finnw
Can we paraphrase... Architects need to respect coders?
gbn
+81  A: 

Don't use inheritance unless you can explain why you need it.
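
A minimal sketch (my own hypothetical example) of the usual alternative, composition, which gets you the reuse without an is-a relationship you can't justify:

#include <vector>

// Inheriting would leak std::vector's whole interface (insert, erase, ...):
//   class Stack : public std::vector<int> { };

// Composition reuses the implementation but exposes only the abstraction:
class Stack {
public:
    void Push(int v)   { data.push_back(v); }
    int  Pop()         { int v = data.back(); data.pop_back(); return v; }  // precondition: !Empty()
    bool Empty() const { return data.empty(); }
private:
    std::vector<int> data;
};

Callers can't corrupt the stack through vector operations the abstraction never meant to expose.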

theschmitzer
Inheritance is the second strongest relationship in C++ and the strongest relationship in most other languages. It strongly couples your code with that of your ascendant. If you can just use it through interfaces go for it. Prefer composition over inheritance always.
David Rodríguez - dribeas
Most use inheritance as a form of reuse, overriding whatever needs to change. They generally don't know/care if they violate LSP, and could achieve what they need with composition.
theschmitzer
I tend to think that delegation is cleaner in most cases where people use inheritance (esp. lib development) because: abstraction is better, coupling is looser, and maintenance is easier. Delegation defines a contract between the delegating and the delegate that is easier to enforce among versions.
fbonnet
He's not saying don't use inheritance at all, just don't use it if you can't explain why you need it. If you're wanting to code an OO application and think throwing a little inheritance in here and there is just gonna make it OO, then you're dumb and should be fired from the ability to program.
Wes P
Like many other programming constructs, the purpose of inheritance is to avoid duplicated code.
Kyralessa
interesting....
Frank V
Or as Sutter and Alexandrescu said in C++ Coding Standards: Inherit an interface, not the implementation.
blwy10
You should expand that to: "Don't ever code *anything* that you can't explain." Everything you do in code should have a reason.
Oorang
A: 
  • Global variables are ok (there are times when they are a very good solution)
  • gotos have their place; both are missed (I rarely use either.)
  • defines/macros are wonderful but incredibly evil
  • Singletons should NEVER be used*1

    and my most controversial yet...

  • COMMENTS ARE EVIL AND A WASTE OF TIME

*1 Logging may be OK, but I don't even do that. What if you would like to output log data on a per-thread basis? You want to know which thread is outputting each line, so chances are you need a non-static member unique to your own thread. So even for logging I see benefits of NOT using a singleton.
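
A minimal sketch (C++11, hypothetical names) of the kind of per-thread, non-singleton logger I mean:

#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <utility>

class ThreadLogger {
public:
    explicit ThreadLogger(std::string tag) : tag(std::move(tag)) {}
    void Log(const std::string& msg) {
        std::ostringstream line;               // build the whole line first
        line << "[" << tag << "] " << msg << "\n";
        std::cout << line.str();               // one write, so lines rarely interleave
    }
private:
    std::string tag;                           // unique to the owning thread
};

void Worker(int id) {
    ThreadLogger log("thread-" + std::to_string(id));  // no global state involved
    log.Log("starting work");
}

int main() {
    std::thread a(Worker, 1);
    std::thread b(Worker, 2);
    a.join();
    b.join();
}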

acidzombie24
Why are comments evil and a waste of time?
Scotty Allen
Comments can become outdated very easily, and it may be hard to tell if a comment is outdated. It wastes programming time to comment before the function is finished; the function will be changed and more time will be spent. Functions shouldn't need comments and should be readable via variable names. For an API, there should be a manual.
acidzombie24
Code in itself can easily explain HOW it does what it does, but it can't explain WHY something is done - comments can explain that.
Rene Saarsoo
I agree with some of what you've said, but I don't think you did a good job of presenting your ideas and justifying them, so downvoting.
Jay Bazuzi
There are some really good reasons for comments, e.g. explanation of intent, clarification, warning of consequences, TODO comments. But 98% of the comments I've read are evil and a waste of time.
Ludwig Wensauer
Comments are evil if you're often bored and need a 10 minute job to take all day. I prefer to find something new to tinker with :)
jTresidder
Do you really believe this stuff or are you just trying to be provocative?
Seventh Element
I believe in this stuff.
acidzombie24
I was with you on gotos... but if you like globals so much, how can you hate the OO global singleton?
Software Monkey
Well, I didn't explain anything, which is possibly why I am downvoted; controversial indeed. I use globals only for quick debug and test cases that are NOT meant for production code. They should be deleted as soon as the problem/test is solved. Singletons do not look like test/temp code to delete.
acidzombie24
... You believe in Globals but not Singletons? How is that consistent?
Christopher W. Allen-Poole
That's definitely controversial.
C. Ross
Christopher W. Allen-Poole: see my comment before yours -> Singletons do not look like test/temp code to delete. I'm repeating to avoid confusion. I am kind of glad I got into the negatives w/o meaning to :)
acidzombie24
The only controversy here is how you qualify to answer questions on stack overflow.
Stefan Valianu
+22  A: 

If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through sourcecode or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?

IMHO, if you need code completion so badly, it's a code smell, or even a design smell: it indicates that the design has grown too complicated, too interdependent, too tightly coupled to other modules' responsibilities. It's a bit controversial too: refactor it until it fits into your brain!
vincent
Code completion slows typing. Even set to zero delay, there's the tiniest pause while you wait for code completion. I agree that if you need code completion on your own code, that may well be a sign something needs simplification. But libraries are so large now, I think it helps more than hurts.
Kendall Helmstetter Gelner
@vincent: Do you never use massive libraries (.NET Framework / Windows API etc)?
erikkallen
I'm using Django, and RoR before. Both encourage cohesion and small files. At the same time I'm helping out a beginner with VB.net, and I have to say VS is impressive, and it certainly influences the code style itself ; but code completion has to be a double-edged sword.( BTW, I *HATE* eclipse )
vincent
VS has really fast completion @Kendall: it doesn't impede my typing. Half the time I write Con.Wr[Down]( for Console.WriteLine(. That's 10 keystrokes less. @vincent, I agree, Eclipse needs to improve their code completion.
Jonathan C Dickinson
Vim can do completion.
Benoit Myard
I work with only one other developer on a project with 240k lines of code and almost a thousand files. We couldn't live without code completion.
Matthew Iselin
+17  A: 

You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
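
A minimal sketch (hypothetical names) of the separation I mean; the business logic never knows whether a flat file or a database sits behind the interface:

#include <string>
#include <vector>

struct Record {
    int id;
    std::string name;
};

// The seam between business logic and storage:
class Store {
public:
    virtual ~Store() {}
    virtual std::vector<Record> LoadAll() = 0;
    virtual void Save(const Record& r) = 0;
};

class FlatFileStore : public Store {
public:
    std::vector<Record> LoadAll() { return records; }       // would parse a text file here
    void Save(const Record& r)    { records.push_back(r); } // would append a line here
private:
    std::vector<Record> records;
};

// Business logic sees only the interface, so swapping in a DatabaseStore later is painless:
int CountRecords(Store& store) { return (int)store.LoadAll().size(); }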

--
bmb

bmb
True, but Sqlite is very portable too. I'm not gonna start with flat files if there is a chance it should be moved to Sqlite.
tuinstoel
There are other benefits of a DB. Shared access across a network for a client/server program. Easy access and manipulation of data (although technologies like LINQ help with that).
Cameron MacFarland
There are thousands of benefits of a database and reasons why we need them most of the time. But not *always*.
bmb
Having a database from the start is easier than first having proper separation between data storage and business logic with flat files so that you can switch to a database later :)
hasen j
Are you saying it's easier to do it wrong with a database than it is to do it right without one?
bmb
I am 100% convinced that developers over use databases. The crutch that kills.
Stu Thompson
@Stu Thompson, I'm not. At work I'm refactoring an application so that it stores its data in a database instead of xml files. It is a lot of work and I hope it is the last time that I have to do this.
tuinstoel
tuinstoel, don't blame XML files for a missing or poorly designed data access layer.
bmb
@bmb, Even refactoring 'just' a data access layer can be a lot of work. And it is totally unnecessary work.
tuinstoel
+62  A: 

Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb

bmb
Google does pagination, Google is very popular.
tuinstoel
Good point. I would argue that google is narrowing down what users need based on real criteria -- the criteria is "ten best results." I'm not saying that showing less than the full results is always bad, if you give the user what they want.
bmb
Maybe you should give a concrete example of a thing that's paginated but shouldn't be. For example, how would you "narrow down" answers to this question?
hasen j
@bmb: Where does this put this thread? @tuinstoel: I claim that nobody ever (i.e. about 0.1% of all page views, probably much more for image search) uses more than the first page of results. Pagination done right.
Konrad Rudolph
@Konrad Rudolph, Once of twice each year I search on my own name, I use all the page results (I'm not famous). That is probably the only time I use all the pages.
tuinstoel
Sometimes it's easier for the user to read if all the controls are visible at the same time (no scroll bars). But in any case, you have to ask: Should I use paging or scrollbars? Either way it's still a click to the user.
T Pops
@tuinstoel: Google does a lot of things, but it is not cooking fish. That Google does pagination has no bearing on its popularity. Pagination is an antiquated model from the age of books. It will soon disappear in favor of Ajax-like refreshes, as used by Google Reader for example.
Elzo Valugi
I really, really hate the default 10 results from Google. I turn it up to 100 on every browser I use. I'd probably turn it to 1000 if there were an option (and it still was speedy)
nos
You'll have much more trouble coming up with those query-based requirements than just implementing a simple pagination system. Sure, if you can suggest an alternative, go right ahead and reduce the number of items to return but not every problem will be as amenable.
Kelly French
In the end pagination isn't really interesting. What's more important is the question: do you count all the search results and show the exact count, or do you just provide an estimate? Google shows only an estimate; showing only an approximation has great performance benefits. Ajax-like refreshes don't change this.
tuinstoel
"Who are you helping by giving back 20 at a time? The server? Is that more important than your user?" If only 1% of users actually need this feature, then the server and thus the other 99% of users.
Brian Ortiz
Ortzinator, I would agree with you if I thought the number was really 99%. But since my (controversial) contention is that pagination is "never" what the user wants, then I think helping the server helps no one. However, users who don't want all the results don't have to get them. Then everyone is happy.
bmb
I came across this answer while paging through and searching every answer to this question to see if anyone had already posted about anonymous functions. Just sayin'
Larry Lustig
So what about resultsets that have thousands or millions of results? What if it's only hundreds but each one shows a bunch of detail? Returning over 100K violates web best practices and such result sets could result in *huge* server loads.
tsilb
tsilb, then "allow the user to narrow down what they need based on real criteria". The point here is not that subsets are always bad, it's that pagination is not a method of subsetting that helps anyone. And huge server loads? Boo hoo. Did you build your app to make your server happy? Or your users?
bmb
slashdot uses an approach where if you try to scroll below the last entry an extra set is added to the page. I love it!
Thorbjørn Ravn Andersen
Thorbjørn Ravn Andersen, that helps a little, but it would still be tedious if you want to use your browser's "find" function.
bmb
+1  A: 

I think it's fine to use goto statements, if you use them in a sane way (and in a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.
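
One sane use, I think, is the classic single-exit cleanup pattern (a minimal C-style sketch; the actual work is elided):

#include <stdio.h>
#include <stdlib.h>

int process(const char* path) {
    int rc = -1;                        /* assume failure until proven otherwise */
    FILE* f = NULL;
    char* buf = (char*)malloc(1024);
    if (!buf) goto out;

    f = fopen(path, "r");
    if (!f) goto free_buf;

    /* ... do the actual work with f and buf ... */
    rc = 0;

    fclose(f);
free_buf:
    free(buf);
out:
    return rc;
}

Every failure path funnels through the same cleanup, instead of duplicating it or nesting yet another if level.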

woop
The key concept is "in a sane way". I would be shy of this idea if it were running for Grand Poo-Bah, but I understand Linus Torvalds agrees with it passionately :-)
Mike Dunlavey
+3  A: 

Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
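
The idea isn't C#-specific, either; here is a minimal C++11 sketch of the same principle using auto (my example, not from the blog post):

#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts;
    counts["answers"] = 42;

    // Spelled out, the type adds noise and says nothing the initializer doesn't:
    //   std::map<std::string, int>::iterator it = counts.begin();
    auto it = counts.begin();   // the compiler already knows the exact type

    return it->second;
}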

JaredPar
I'd love to see reasoning about this. Very controversial, and room for lots of good points from both sides.
Jon Skeet
@Jon, added a blog link to the reasons I feel this way.
JaredPar
Jared, your blog post is about local variable declaration with `var`, but your title is much more general. Please clarify.
Jay Bazuzi
@Jay, most of the problem with type inference is around "var" vs. overload resolution and generic method type inference. I really should have added a sample or two to the article though it was discussed in the comments.
JaredPar
+31  A: 

C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.
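
A minimal sketch of that point (pre-C++11 style; the file name is hypothetical):

#include <cstdio>

class File {                            // RAII: acquisition is initialization
public:
    explicit File(const char* path) : f(std::fopen(path, "r")) {}
    ~File() { if (f) std::fclose(f); }  // runs on every exit path, exceptions included
    std::FILE* get() const { return f; }
private:
    std::FILE* f;
    File(const File&);                  // non-copyable
    File& operator=(const File&);
};

void Use() {
    File data("input.txt");
    // ... work with data.get(); no try/finally or using block in sight ...
}                                       // the file is closed here, deterministically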

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language.

jalf
True dat. My main beef is that every 3rd-party library has its own string class. I waste too much time converting between CString, std::string, WxString and char*. Can't everyone just use std::string or const char*?
Doug T.
Not true "C++ has plenty of strengths that no other language can match. It's a good language." EVERY language has strengths that no longer language can match (even LOLCODE, hey it's a lot of fun).
Jonathan C Dickinson
Perhaps. But C++'s strengths are a bit more commonly useful. Let me know when your language of choice supports compile-time metaprogramming or RAII.
jalf
+88  A: 

A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science, does not mean you are better than him. What you have he can pick up in an instant which is not the case the other way around.

Having a qualification shows your commitment, the fact that you would go above and beyond experience to make you a better developer. Developers which are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.

Maltrap
"degree in Computer Science or other IT area DOES make you more well rounded" ... "realize that it all doesn't matter at the end, as long as you can work well together" <- sounds a tiny bit inconsistent and self-contradictory.
dreftymac
"It" refers to the fact that the other guy has a degree. It's strange: once you have a qualification, you might stop comparing yourself to others.
Maltrap
Agree - qualifications are indicators of commitment. They can be more, but even if that's all they are, they have value. It is only those without pieces of paper who decry them. Those with them know the limits of their value but know their value too.
duncan
From past experience I'd generally rather work with someone that at least has an EE degree, than someone who came into the field after college.
Kendall Helmstetter Gelner
I would even say a good university degree. I met a programmer at my work who finished some small IT school I've never heard of and didn't know how many different numbers can be represented in 8 bits!
agnieszka
A degree in ANY area (except maybe post-modern literary criticism) makes you a more well-rounded programmer, especially if it's in mathematics or science or engineering. Comp Sci and IT degrees tend to have incredibly narrow scope and focus.
MusiGenesis
In the spirit of healthy discussion I'll just say that I vehemently disagree (and I've got one). Past deliverables shows commitment, not that you lived somewhere for 4 years and read some books.
SnOrfus
I don't believe in degrees as measurements of value or skill, but studying at a university gives you the opportunity to learn the foundations of many different fields that can be useful to you in a work situation. I'm doubtful if being able to graduate is an acceptable proof that you've learned anything, but I know that you CAN learn a lot of useful skills, if you're ambitious enough.
Lucas Lindström
"What you have he can pick up in an instant" - Not necessarily. The ability to write good code is something that tends to come with experience, though some people pick it up quickly and some never seem to get there.The guy with the CS degree will certainly be able to pick up the languages and APIs you use in an instant, but there's no guarantee he'll ever be a good programmer. And he certainly won't become one overnight if he's not one now.
Mark Baker
I learned far more from my college library than from the classes themselves.
gradbot
Disagree - self-learning can be quite a bit better than university learning. As for university, they make you think the way they want (with better marks for thinking their way). A self-learner will think far better (for a given value of better) than a person taught to learn one way. I'm fascinated that you agree with me, btw: "You realize that it all doesn't matter at the end, as long as you can work well together."
Random
As someone about to complete a degree in Information Technology (with a specialization in Applications Development, no less), let me assure you that it is a small step above useless for someone interested in software development. You're more than likely to learn UML and object-orientedness which is supposedly good, but beyond that you're on your own.
baultista
+13  A: 

New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.

pansapien
Did you mean "New web projects should *not* consider" ?
dreftymac
That doesn't sound very controversial to me.
finnw
WOW! Some people in this thread really have extremist views! ;-)
Seventh Element
This is absolutely not controversial. Perhaps you want to say *New web projects **should not** consider using Java*
flybywire
+264  A: 

Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.

Jas Panesar
I disagree; readability is crucial. If you take myMethod(myVar++) / myMethod(++myVar) vs. myVar++; myMethod(myVar), give me the latter - it's clearer and more readable. If less code is better, do you name all variables i, j, k, etc.?
JoshBerke
Good point, and since your point is a variant of Hemingway's approach to writing, very appropriately written.
MusiGenesis
What I meant is coding things as simply and clearly as possible, but no simpler. Sometimes more lines of code are created in trying to break down a process. More lines of code = more bugs = more debugging. The cost of maintaining each line of code gets to be exponential.
Jas Panesar
"Perfection is attained not when you have nothing left to add, but when there is nothing left to take away" --Antoine de Saint-Exupery
TokenMacGuy
This does not seem very controversial to me.
Richard
I don't think people realize quite how true this is. I've noticed a gradual improvement in my code quality as I've got more extreme with my minimalism. I think I'm going to go take out some methods from a few classes now actually...
Ollie Saunders
@Ollie, little is more satisfying than getting the same amount done with less code. I find sometimes I'll write a bit extra while figuring out how to best do it and then get rid of some code. Our design meetings often are about finding the shortest and best path. Complexity increases the headaches with scaling and extensibility in the future. When we have a system we no longer want to work on as much... it's trouble.
Jas Panesar
For anyone who doesn't think this is controversial, I offer into evidence: Java.
Chuck
The belief that less code = simpler code is utterly false.
Stuart
Less code does not mean compressed/obfuscated code. I'm sure many have seen someone create a whole family tree of classes to solve a problem you could solve much more simply.
MAK
+13  A: 

Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.

commondream
And therefore the bozo bit must be flipped for some :-D
icelava
+7  A: 

Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.
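
A minimal sketch of the idea in Python, using the standard unittest module - the Stack class here is hypothetical; the point is that the tests exist before the implementation does:

import unittest

# Step 1: write the tests first. They fail, because Stack doesn't exist yet.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Step 2: write just enough code to make the tests go green.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # raises IndexError when empty

if __name__ == "__main__":
    unittest.main()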

PJ Davis
OMG how did anyone downvote this. Amazing, I'd +1000 if I could
acidzombie24
Sometimes, watching all your test go green gives you a FALSE confidence, while your code fails somewhere your test didn't anticipate.
Cameron MacFarland
@acidzombie24, you should vote for it if you think it is controversial, not when you agree with it.
tuinstoel
@Cameron MacFarland there is no excuse for not doing user testing. The point of the test isn't to cover every edge case from the beginning, it's to make sure your code meets the requirements for what it's supposed to do. No matter how much you test, you'll never cover everything that could happen.
PJ Davis
@Cameron MacFarland, having a test suite helps you even when your code fails, in the sense that you can easily add a new test case, correct the bug and remain sure that the bug will be detected if some dev introduces it again.
Petar Repac
You're accruing "offensive" votes... suggest you remove the profanity.
Marc Gravell
+435  A: 

Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

Mike Hofer
Very nicely put...
AndyUK
This is the mantra that I've been living by for a long time. Not only is it our job to automate other people's job, but our job is to also automate our own job in the process. Crap Code != Job Security.
Lusid
I've been living by this for years now. If your employer knows that you have their best interest at heart, they are more willing to keep you around.
mrdenny
or more extreme: replace yourself with a script you wrote
Vardhan Varma
I was going to disagree with you but then I realized you're right. If you do get hit by a bus, then the project that you work on should suffer greatly. But not because your code is unreadable, but because you were so valuable to the team.
Mark Beckwith
"If you follow all these rules religiously, you will even guarantee yourself a lifetime of employment." http://mindprod.com/jgloss/unmain.html
Nikhil Chelliah
Some unfortunate people write code only they can understand, for job security.
Wahnfrieden
If you can't be replaced then you can't be promoted!
rezzif
Would you rather be paid to write clean code for fun new projects, or maintain that big, ugly ball of mud which you wrote for your current project? (Sadly, I suspect that answers to this question will vary considerably).
Todd Owen
So true. I have always tried to work myself out of a job, and have always failed. How is that?
Peter
would "hit by a bus" be a bus error?
Thorbjørn Ravn Andersen
+1 rezzif: That hadn't occured to me! Nice one!
AndreasT
I did this once. I had a temp job at a nearby city business office. I streamlined a few of the things I was doing to the point that they let me go...
Cogwheel - Matthew Orlando
My corollary: If you're the only one who can maintain your code, then maintaining your code is all that you will do. Better projects, new technologies, new opportunities, and so forth will not be available to you since your boss "can't afford" to not have you available to fix your own code. You build your own prison.
Mike DeSimone
Nice! Thanks for sharing. I have similar opinion but have not experienced what you have experienced: "The more I strive to be disposable, the more valuable I become to them".
Viet
If you can't be replaced on a critical job function, then you'll never be invited to work on any of the new (exciting, resume-improving) job functions.
Jason
People aren't replaceable
Hernán Eche
@rezzif hah, nice. some inherent benevolence in the business world for once.
Rei Miyasaka
+30  A: 

One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not fanatical mindfulness of "where" the data is at in any given moment.

Jas Panesar
Or to put it another way: "Data outlives code".
Dan Dyer
Hey, I like that a lot! Thanks for sharing.
Jas Panesar
It's an interesting idea but I think it depends what kind of program you're writing. Five worlds man. http://www.joelonsoftware.com/articles/FiveWorlds.html
MarkJ
I couldn't agree more (granted I'm a DBA so all we deal with is data).
mrdenny
The system also seems to lose its way if the data is out.
Jas Panesar
I'd take the relations of the data into account, too, so "The model is the system". I mean the second letter of a name is relatively useless without the rest and the first name needs the family name and the employee the department, etc.
Aaron Digulla
A: 

Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.

Eric Mills
What does this even mean? "Hey, I just released a patch that deleted the customer's requested functionality because I felt like it but it's ok because I have documented it and told you that I did it." Is that the kind of thing you were suggesting?
duncan
This could happen if a programmer has poor judgement, but I ultimately believe developers have better judgement than they are given credit for. They should be allowed to fix bugs without a bunch of friction. I believe in trust over regulation with the developers I work with.
Eric Mills
+1. Why the downvotes? Maybe doing the kind of work that demands that level of scrutiny removes the ability to see that there's more than one kind of coding environment? There's no manager to lean on when your world-view-interpreter algorithms are wonky.
jTresidder
I could count on one hand the number of programmers I know that I would trust in that sort of environment - too many cowboys out there.
Evan
Ok, I would start by modifying all the code you wrote. It would be interesting to see if you would still feel the same way.
Seventh Element
@Eric Mills: Go work for a bank, or qualify your answer. Maybe you are unaware or underestimating the impact erroneous (or even malicious) code changes can have on a company. Hours of work lost, bazillions of space credits blown. Careers have been destroyed over these kinds of things, people fired on the spot. Probably not something you'll understand until you are personally responsible for an insanely important system...and some cowboy wants to tweak it at will.
Stu Thompson
At least, in all systems I worked with, this logic would be a very bad policy. Could you provide us with an environment where you would like this to occur?
Luis Filipe
+4  A: 

Uncommented code is the bane of humanity.

I think that comments are necessary for code. They visually divide it up into logical parts, and provide an alternative representation when reading code.

Documentation comments are the bare minimum, but using comments to split up longer functions helps when writing new code and allows quicker analysis when returning to existing code.
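
For illustration, a small Python sketch of both kinds of comment - a docstring as the documentation comment, plus plain comments dividing a longer function into its logical parts (the report example is made up):

def build_report(orders):
    """Summarise a list of (customer, amount) orders as text lines."""
    # Aggregate totals per customer.
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0) + amount

    # Sort customers by total, biggest spenders first.
    ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)

    # Format one line per customer.
    return ["%s: %.2f" % (customer, total) for customer, total in ranked]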

Jeff M
"using comments to split up longer functions" means your functions are too long.
Jay Bazuzi
If you can't understand code WITHOUT comments, you can't understand it WITH, either.
Aaron Digulla
Voted up, because this surely is controversial; I disagree with you :-) I'm on the side that says “Don't comment bad code, re-write it so it's clear”. If your justification for comments is to break up code visually, that's far better done with separate well-named functions with whitespace between.
bignose
+1  A: 

Hardcoding is good!

Really, it's more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files... really, how often will you change the freezing point of water or the speed of light?

For C programs just hard code these types of values into a header file, for Java into a static class, etc.

When these parameters have a drastic effect on your program's behaviour, you really want to do a regression test on every change; this seems more natural with hard-coded values. When things are stored in parameter/property files, the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!
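
In Python terms (the answer's C header or Java static class translate to a plain constants module), a sketch of the idea - the names are mine, the values are the standard textbook ones:

# physical_constants.py -- deliberately hard coded, not read from a config file.
# If one of these "changes", that's a code change, and a code change should
# trigger a full regression test -- which is exactly the point.

FREEZING_POINT_WATER_C = 0.0          # degrees Celsius, at 1 atm
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second, in a vacuum
SECONDS_PER_DAY = 86_400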

James Anderson
Q - "how often will change the freezing point of water" A - Every time you change altitude (barometric pressure) or salt density or... (assumptions start with those three letters for a reasons)
duncan
the speed of light depends on the medium it's traveling through
Ferruccio
The assumption that a constant won't change (like in this post, indicated by the responses) is EXACTLY the problem and the reason you should just never hardcode.
Bill K
+1  A: 

Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers who, if they knew they could be screwing up dozens of people, would be very careful about the changes they make, but instead get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)

Jesse Pepper
Approvals are the bad thing? Or you just don't trust one person to do the approvals? I'd say "one person can never approve anything". Meaningful approval means everybody should have the ability to black-ball, and approval should be by stake-holder consensus. Then everybody is to blame when it fails, which it still will. :-) How's that for punchy?
Warren P
+10  A: 

The worst thing about recursion is recursion.

Mike
But what about recursion?
LarryF
Before you understand recursion, you must first understand recursion.
Velika
Recursion, n. See recursion.
David Thornley
+29  A: 

Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.
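
A common concrete illustration of this claim (my example, not the poster's): in a language with first-class functions, the classic Strategy pattern collapses into just passing a function around - no interface, no concrete strategy classes, no recipe to memorise:

# Strategy "pattern" in Python: the language feature does all the work.
def shout(text):
    return text.upper() + "!"

def whisper(text):
    return text.lower() + "..."

def deliver(message, style):
    # 'style' is the strategy: any callable taking and returning a string.
    return style(message)

print(deliver("Hello", shout))    # HELLO!
print(deliver("Hello", whisper))  # hello...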

dwf
I agree with your general feeling. Try and see them as temporal observations. See http://blog.plover.com/prog/design-patterns.html for example.
JB
+9  A: 

For a good programmer, language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java" and so on.
What I think is that a good programmer is flexible and is able to write good programs in any programming language that he might have to learn in his life.

agnieszka
On the other hand, I *would* change company if, say, I was told that the rest of my job (forever) would be in GWBasic. There's a significant difference in how easy it is to express designs in different languages.
Jon Skeet
yeah, of course it's not applicable to all situations. but still a programmer has to be flexible to some extent because this is what computer science is all about - constant change.
agnieszka
Totally agreed. I hate those religious language wars :/
driAn
While I agree that a good programmer can understand any language, working with it 40+ hours a week is a different story. I can understand VB.NET just fine, but I don't want to spend most of my day plowing through it!
Cameron MacFarland
I can agree with this. The real truth here is that there is a tool for every job. Sometimes that tool may be Perl. Sometimes it may be vbScript, sometimes Java, sometimes C#, and sometime even C++... The good developer knows WHICH tool is right for the job.
LarryF
While it may be true that you can learn the *syntax* of a new language in a few hours, you can't learn a *language* in a few hours. It takes years to master a new language with all the corner cases, etc.
Aaron Digulla
Lisp! Lisp! Lisp!
Thorbjørn Ravn Andersen
"A good carpenter can cut wood with a hammer..." (I'm sure: carpenters are much more knowledgeable than programmers.)
MaD70
+12  A: 

Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.

Chris
Better non-development staff with management skills than developer staff without management skills.
tuinstoel
So you reckon every company that employs any developers should have a developer as CEO?
finnw
Yes, if you're going to manage people with a special skill set it would be helpful if you also had a background in that skill set. Would you hire a CEO with no management experience?
Chris
Stu Thompson
+6  A: 

VB sucks
While not terribly controversial in general, when you work in a VB house it is.

rotard
That this is not generally controversial shows how generally up themselves so many programmers are. Have a preference - fine. But when it comes down to whether you have a word (that you don't even have to type) or a '}' to terminate a block, it's just a style choice...
ChrisA
... plenty of VB programmers suck, though. As do plenty of C# programmers.
ChrisA
VB doesn't suck. People who use VB like VBA suck.
Chris
VB *does* suck. So many things have been shoe-horned into what was originally a simple instructional language to allow novices to enter the domain of professionals that it's no longer appropriate for either novices or professionals.
P Daddy
It's not the language that sucks but a lot of the programmers that (used to) program in VB.
Seventh Element
+23  A: 

Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architected, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.

chazomaticus
When I use Doxygen, I use \internal tags very often. This makes it easy to generate two sets of documentation exactly as you describe. (Of course, I also continue to use regular comments throughout code where required.)
Zooba
I don't just like JavaDoc. I love it.
Seventh Element
+3  A: 

Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .Net are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute but I'm afraid I can't help but despise them and unless someone can come up with a brilliant justification or example that I haven't already heard then I will never write one. I recently came across this thread and I must say reading the examples of the highest voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ; I find in fact that they, unequivocally, reduce readability and OO-ness by virtue of the fact that they are at their core a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object - don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder then string should have that method, because that is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions, and the extension methods that arise from LINQ are understandable; but in general chained method calls reduce readability and lead to code of the sort we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.

Stephen Martin
I am still undecided but there seem to be genuine practical uses for extension methods.
Seventh Element
I'm totally with you, buddy.
Dan Tao
A: 

Don't worry too much about which language to learn; use the industry heavyweights like C# or Python. Languages like Ruby are fun in the bedroom, but don't do squat in workplace scenarios. Languages like C# and Java can handle small to very large software projects. If anyone says otherwise, then you're talking about a scripting language. Period!

Before starting a project, consider how much support and how many code samples are available on the net. Again, choosing a language like Ruby, which has very few code samples on the web compared to Java for example, will only cause you grief further down the road when you're stuck on a problem.

You can't post a message on a forum and expect an answer back while your boss is asking you how your coding is going. What are you going to say? "I'm waiting for someone to help me out on this forum"

Learn one language and learn it well. Learning multiple languages may carry over skills and practices, but you'll only ever be OK at all of them. Be good at one. There are entire books dedicated to threading in Java which, when you think about it, covers only one package out of over 100.

Master one or be ok at lots.

Sir Psycho
+11  A: 

This one is mostly web related but...

Use Tables for your web page layouts

If I was developing a gigantic site that needed to squeeze out performance I might think about it, but nothing gives me an easier way to get a consistent look out on the browser than tables. The majority of applications that I develop are for around 100-1000 users, with perhaps 100 at a time max. The extra bloat of the tables isn't killing my server by any means.

rball
It's not so much about code bloat but more about letting the page degrade gracefully.
Ólafur Waage
And you think divs and CSS do this? I don't.
rball
I always try to make a layout that avoids tables, and I always fail. Div-based layouts just don't have the flexibility of a table. +1
Marcus Downing
http://giveupandusetables.com/. I have nothing to add. :)
Tomalak
Marcus: Are you kidding? Use tables for what they were meant for - tabular data.
Tom
Try using a screen reader with that table based layout. :(
spooner
I'm starting to believe in using CSS frameworks like blueprint and 960. These seem to be giving me the consistency along with it being a lot easier to make the layout. Seems to be meeting my needs so I'm pretty jazzed.
rball
+2  A: 

That software can be bug free if you have the right tools and take the time to write it properly.

too much php
And what about Gödel's incompleteness theorem?
Totophil
Sorry I'm not familiar with that one, and I'll have to read the Wikipedia page about 10 times more to make sense of it I feel.
too much php
+2  A: 

Opinion: Not having declared argument types and return types can lead to flexible and readable code.

This opinion probably applies more to interpreted languages than compiled ones. A required return type and a typed argument list are great for things like IntelliSense auto-documenting your code, but they are also restrictions.

Now don't get me wrong, I am not saying throw away return types, or argument lists. They have their place. And 90% of the time they are more of a benefit than a hindrance.

There are times and places when this is useful.
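
A small Python sketch of the upside being claimed (my example): with no declared argument or return types, one function serves any type that supports the operations it uses - the trade-off being that nothing stops you passing something that doesn't:

def clamp(value, low, high):
    # Works for ints, floats, strings... anything orderable.
    return max(low, min(value, high))

print(clamp(15, 0, 10))        # 10
print(clamp(2.5, 0.0, 10.0))   # 2.5
print(clamp("m", "a", "k"))    # k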

J.J.
+11  A: 

coding is not typing

It takes time to write the code. Most of the time in the editor window, you are just looking at the code, not actually typing. Not as often, but quite frequently, you are deleting what you have written. Or moving from one place to another. Or renaming.

If you are banging away at the keyboard for a long time you are doing something wrong.

Corollary: Number of lines of code written per day is not a linear measurement of a programmer's productivity. A programmer who writes 100 lines in a day is quite likely a better programmer than one who writes 20, but one who writes 5000 is almost certainly a bad programmer.

Hemal Pandya
Very much agree with this. Did you see that recent thread where the consensus seemed to be that if you can't touch type at 80wpm you aren't a real programmer? Complete nonsense, although people seem to like that sort of testosterone-driven "productivity".
ChrisA
@ChrisA: I actually read that thread and came back to write this response. While coding, I like to take time dotting my i's and crossing my t's, so to say.
Hemal Pandya
That's why I love vim.
hasen j
The typing issue isn't that typing faster allows you to type more code. The issue is that if typing is really a second nature, all of your attention can be on what you are coding rather than on typing. Conversely if you are constantly looking at the keyboard and correcting typos, you are wasting a lot of your attention on typing. Your train of thought is interrupted all the time by the mechanical action of typing. Doesn't mean that you are a bad programmer, but you are certainly not as good as you could be if 30% of your attention is stuck on the keyboard. Programmer, master your tools.
Sylverdrag
+10  A: 

The vast majority of software being developed does not involve the end-user when gathering requirements.

Usually it's just some managers who are providing 'requirements'.

Vulcan Eager
+15  A: 

The word 'evil' is an abused and overused word on Stack Overflow and similar forums.

People who use it have too little imagination.

tuinstoel
I think this is an evil opinion by an evil man out to do evil.
Seventh Element
I can't remember ever having read this word on Stack Overflow.
Stefan Steinegger
In other words: 'evil' is evil.
Daniel Daranas
"People who use it have too little imagination." ..and are evil. :)
mlvljr
+3  A: 

Development teams should be segregated more often by technological/architectural layers instead of business function.

I come from a general culture where developers own "everything from web page to stored procedure". So in order to implement a feature in the system/application, they would prepare the database table schemas, write the stored procs, match the data access code, implement the business logic and web service methods, and the web page interfaces.

And guess what? Everybody has their own way of doing things! Everyone struggles to learn the ASP.NET AJAX and Telerik or Infragistics suites, Enterprise Library or other productivity, data-layer and persistence frameworks, aspect-oriented frameworks, logging and caching application blocks, DB2 or Oracle peculiarities. And guess what? Everybody takes a heck of a long time to learn how to do things the proper way! Meaning, lots of mistakes in the meantime and plenty of resulting defects and performance bottlenecks! And a heck of a longer time to fix them! Across each and every layer! Everybody has a hand in every Visual Studio project. Nobody is specialised to handle and optimise one problem/technology domain. Too many chefs spoil the soup. All the chefs result in some radioactive goo.

Developers may have cross-layer/domain responsibilities, but they should not pretend that they can be masters of all disciplines, and should be limited to only a few. In my experience, when a project is not a small one and utilises lots of technologies, covering more business functions in a single layer is more productive (as well as encouraging more test code to test that layer) than covering fewer business functions spanning the entire architectural stack (which motivates developers to test only via their UI and not test code).

icelava
+3  A: 

XHTML is evil. Write HTML

You will have to set the MIME type to text/html anyway, so why fool yourself into believing that you are really writing XML? Whoever downloads your page is going to believe that it is HTML, so make it HTML.

And with that, feel free and happy to not close your <li>, it isn't necessary. Don't close the html tag, the file is over anyway. It is valid HTML and it can be parsed perfectly.

It will create more readable code with less boilerplate, and you don't lose a thing. HTML parsers work fine!

And when you are done, move on to HTML5. It is better.

I agree with this. For a while I tried using XHTML on my personal website, but it was too much work for practically no benefit (I just used it to make sure I kept the markup well-formed). I do close all the tags though, but that's just to satisfy my own neuroses.
Matthew Crumley
I can't agree less. XML makes the code work *much* nicer with validators and this in turn makes debugging complex nested structures much easier. Perhaps other people can work without this but for me, advanced HTML documents benefit a lot from XML and its strictness.
Konrad Rudolph
I've never thought of XHTML as XML at all. I simply consider HTML and XHTML to be the same thing until I see lazy HTML code. Not closing your tags is a bad habit and doesn't improve readability at all... especially when dealing with a large file. Tags should all be lowercase as well.
Dalin Seivewright
+6  A: 

Relational Databases are a waste of time. Use object databases instead!

Relational database vendors try to fool us into believing that the only scalable, persistent and safe storage in the world is relational databases. I am a certified DBA. Have you ever spent hours trying to optimize a query and had no idea what was going wrong? Relational databases don't let you make your own search paths when you need them. You give away much of the control over the speed of your app into the hands of people you've never met, and they are not as smart as you think.

Sure, sometimes in a well-maintained database they come up with a quick answer for a complex query. But the price you pay for this is too high! You have to choose between writing raw SQL every time you want to read an entry of your data, which is dangerous, or using an object-relational mapper, which adds more complexity and things outside your control.

More importantly, you are actively forbidden from coming up with smart search algorithms, because every damn roundtrip to the database costs you around 11ms. That is too much. Imagine you know a super graph algorithm which will answer a specific question - one which might not even be expressible in SQL! - in due time. But even if your algorithm is linear, and interesting algorithms are not linear, forget about combining it with a relational database, as enumerating a large table will take you hours!

Compare that with SandstoneDb, or Gemstone for Smalltalk! If you are into Java, give db4o a shot.

So, my advice is: use an object DB. Sure, they aren't perfect and some queries will be slower. But you will be surprised how many will be faster, because loading the objects will not require all these strange transformations between SQL and your domain data. And if you really need speed for a certain query, object databases have the query optimizer you should trust: your brain.
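
The answer names Smalltalk and Java options; for a rough feel of the programming model, here is a sketch using ZODB, a Python object database (API from memory, so treat it as illustrative rather than authoritative):

# pip install ZODB -- storing a plain object with no SQL and no mapping layer.
import persistent
import transaction
from ZODB import FileStorage, DB

class Account(persistent.Persistent):
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

storage = FileStorage.FileStorage('data.fs')
db = DB(storage)
conn = db.open()
root = conn.root()

root['account'] = Account('alice', 100)  # no INSERT, no ORM, no schema
transaction.commit()
db.close()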

nes1983
Wow that is controversial! Surprised you haven't been flamed by the other DBAs here ;)
Meff
Even more important than performance: Development is much much faster with oo-databases!
wilth
"Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments", Justin Kruger and David Dunning, Cornell University, Journal of Personality and Social Psychology, 1999, Vol. 77, No. 6., 121-1134. Fortunately it is curable (I'm the evidence): ".. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities."
MaD70
+8  A: 

The code is the design

bjnortier
+4  A: 

Debuggers are a crutch.

It's so controversial that even I don't believe it as much as I used to.

Con: I spend more time getting up to speed on other people's voluminous code, so anything that help with "how did I get here" and "what is happening" either pre-mortem or post-mortem can be helpful.

Pro: However, I happily stand by the idea that if you don't understand the answers to those questions for code that you developed yourself or that you've become familiar with, spending all your time in a debugger is not the solution, it's part of the problem.

Before hitting 'Post Your Answer' I did a quick Google check for this exact phrase, it turns out that I'm not the only one who has held this opinion or used this phrase. I turned up a long discussion of this very question on the Fog Creek software forum, which cited various luminaries including Linus Torvalds as notable proponents.

Liudvikas Bukys
+1 the best debugger is your brain.
ceretullis
I totally agree, though I'd go a bit further: *testing your code* is a crutch. I know too many programmers who don't concentrate enough when writing code, and rely on failed compiles and runtime errors to save them... And how many bugs *don't* get caught?
Artelius
-1 There is nothing wrong with using a crutch when your leg is broken - why should there be anything wrong with using one when your code is broken?
Kramii
+3  A: 

Hibernate is useless and damaging to the minds of developers.

Dan Howard
How has it damaged your mind? ;-)
Seventh Element
my controversial opinion to this: There are more developers who do not understand Hibernate than those who do.
Stefan Steinegger
+10  A: 

Any sufficiently capable library is too complicated to be usable, and any library simple enough to be usable lacks the capabilities needed to be a good general solution.

I run into this constantly: exhaustive libraries that are so complicated to use that I tear my hair out, and simple, easy-to-use libraries that don't quite do what I need them to do.

Starkii
Oh man, I wish I could give you 100 upvotes.
benjismith
+16  A: 

Newer languages, and managed code do not make a bad programmer better.

LarryF
Agree. New running shoes won't make the average runner run any faster.
Seventh Element
+5  A: 

There are far too many programmers who write far too much code.

keithb
Wally (from Dilbert) should be an example to all of us! ;-)
Seventh Element
+3  A: 

This one is not exactly on programming, because html/css are not programming languages.

Tables are ok for layout

CSS and divs can't do everything; save yourself the hassle and use a simple table, then use CSS on top of it.

hasen j
I used to think this way until I really got deep into CSS to see if I could prove myself wrong. I did, and it helped to land me a job that required tableless layouts for accessibility reasons. Do you have any examples of what you can't do?
JamesEggers
see, this "deep" thing is just hacks and black magic, you end up with a an unmaintainable css mess, and if you change an attribute by mistake, the whole thing could collapse into a hairy mess, even if the attribute doesn't seem too important.
hasen j
upvoted because I can't decide whether to agree or disagree; controversial indeed
iandisme
http://giveupandusetables.com/
frunsi
+11  A: 

90 percent of programmers are pretty damn bad programmers, and virtually all of us have absolutely no tools to evaluate our current ability level (although we can generally look back and realize how bad we USED to suck)

I wasn't going to post this because it pisses everyone off and I'm not really trying for a negative score or anything, but:

A) isn't that the point of the question, and

B) Most of the "Answers" in this thread prove this point

I heard a great analogy the other day: Programming abilities vary AT LEAST as much as sports abilities. How many of us could jump into a professional team and actually improve their chances?

Bill K
http://en.wikipedia.org/wiki/Sturgeon's_law applies to everything, even programmers.
Cameron MacFarland
I agree; unfortunately almost 90% of the bad programmers think they fall in the 10% category of programmers who don't suck.
Seventh Element
@Diego awesome way to put it. That's kind of in line with what I said about not having the tools to evaluate ourselves, but much more clear.
Bill K
+1  A: 

Like most others here, I try to adhere to principles like DRY and not being a human compiler.

Another strategy I want to push is "tell, don't ask". Instead of cluttering all objects with getters/setters, essentially making a sieve of them, I'd like to tell them to do stuff.

This seems to go straight against good enterprise practice, with its dumb entity objects and thicker service layer (that does plenty of asking). Hmmm, thoughts?
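
A quick Python sketch of the contrast (the bank-account example is mine, not the poster's):

class Account:
    def __init__(self, balance):
        self._balance = balance

    # Tell, don't ask: the rule lives inside the object...
    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

# ...instead of a service layer asking for state and deciding from outside:
#
#     if account.get_balance() >= amount:
#         account.set_balance(account.get_balance() - amount)

acct = Account(100)
acct.withdraw(30)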

Tommy
I actually agree with this. If you find too much logic happening outside of the class via querying accessor functions something may be wrong with the design. However, there are different ways of using objects... sometimes you just want an object that holds state and doesn't do anything else.
ceretullis
Primitive getters/setters are important in complex applications; the tell methods should be composed of them. Each level of indirection is a blessing in reducing complexity. Ignore this in a complex application (i.e., not business transaction processing or web sites) at your own peril.
Hassan Syed
+4  A: 

We're software developers, not C/C#/C++/PHP/Perl/Python/Java/... developers.

After you've been exposed to a few languages, picking up a new one and being productive with it is a small task. That is to say that you shouldn't be afraid of new languages. Of course, there is a large difference between being productive and mastering a language. But, that's no reason to shy away from a language you've never seen. It bugs me when people say, "I'm a PHP developer." or when a job offer says, "Java developer". After a few years experience of being a developer, new languages and APIs really shouldn't be intimidating and going from never seeing a language to being productive with it shouldn't take very long at all. I know this is controversial but it's my opinion.

You're correct, but after investing years mastering a language, starting over in a new language has somewhat less appeal. It isn't necessarily fear, but the joy of higher order productivity that stems the desire to learn something new.
ceretullis
That said, hacks cling to their one language like Grandpa to his comb-over.
ceretullis
Someone who calls himself a Java developer (substitute with language of choice) means that he/she is an expert in the "platform", not just the language. But it sounds kinda stupid to say I'm a "Java platform" programmer. The language is only a tiny fraction of the platform.
Seventh Element
Introducing a new language with little syntactic and semantic variation (w.r.t. the mainstream) every year (just a hyperbole) is totally, utterly cretinous, an enormous waste of resources. Nothing controversial here; it is the usual way in which this "industry" distracts people from real issues.
MaD70
+6  A: 

Two brains think better than one

I firmly believe that pair programming is the number one factor when it comes to increasing code quality and programming productivity. Unfortunately it is also highly controversial for management, who believe that "more hands => more code => $$$!"

Martin Wickman
I sometimes dream about extreme extreme programming. How cool would it be if everyone in a group sat down to do the architecture and implementation as a group (4-8 devs). I wonder if it would work or be completely dysfunctional. I tend to think it could work with the "right" group.
ceretullis
+2  A: 

You can't write a web application without a remote debugger

Web applications typically tie together interactions between multiple languages on the client and server side, require interaction from a user and often include third-party code that can be anything from a simple API implementation to a byzantine framework.

I've lost count of the number of times I've had another developer sit with me while I step into and follow through what's actually going on in a complex web application with a decent remote debugger, to see them flabbergasted and amazed that such tools exist. Often they still don't take the trouble to install and set up these kinds of tools even after seeing them in action.

You just can't debug a non-trivial web application with print statements. Times ten if you didn't write all the code in your application.

If your debugger can step through all the various languages in use and show you the http transactions taking place then so much the better.

You can't develop web applications without Firebug

Along similar lines, once you have used Firebug (or very near equivalent) you look on anyone trying to develop web applications with a mixture of pity and horror. Particularly with Firebug showing computed styles, if you remember back to NOT using it and spending hours randomly changing various bits of CSS and adding "!important" in too many places to be funny you will never go back.

reefnet_alex
I agree about Firebug, but I've been a web dev for 3 years and done everything from mid to large. I've never felt the need to use a remote debugger.
Click Upvote
exactly - and before you used Firebug I bet you didn't realise you needed it either :) seriously though, give it a try and then say that
reefnet_alex
Since I can unit test my whole webapp, why would I need a remote debugger? I can run any line of code I want locally...
Aaron Digulla
1) Remote doesn't mean "not local" in this case; it means running the debugger on the PHP interpreter as run up by your web server and following all the interactions with the browser through. Whether running locally or on a live server, you need a remote debugger to see what's actually happening
reefnet_alex
2) live server != dev machine: there are some bugs you will only see on your live (or exact copy of your live) server
reefnet_alex
+3  A: 

The latest design patterns tend to be so much snake oil. As has been said previously in this question, overuse of design patterns can harm a design much more than help it.

If I hear one more person saying that "everyone should be using IOC" (or some similar pile of turd), I think I'll be forced to hunt them down and teach them the error of their ways.

ZombieSheep
+3  A: 

Upfront design - don't just start writing code because you're excited to write code

I've seen SO many apps that are poorly designed because the developer was so excited to get coding that they just opened up a white page and started writing code. I understand that things change during the development lifecycle. However, it's difficult working with applications that have several different layouts and development methodologies from form to form, method to method.

It's difficult to hit the target your application is to handle if you haven't clearly defined the task and how you plan to code it. Take some time (and not just 5 minutes) and make sure you've laid out as much of it as you can before you start coding. This way you'll avoid a spaghetti mess that your replacement will have to support.

asp316
+38  A: 

Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          more code.
          ..
          ..
      }
      else
      {
          print "no peter?!"
      }
  }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!"
       continue;
   }

   print "peter";
   more code.
   ..
   ..
}
Jon Clegg
I wouldn't apply this as a **rule**, but I definitely don't hesitate to take this route when it can reduce complexity and improve readability. +1 Why do you need peter so badly, though?
P Daddy
Not a fan of 'cavern code' are we? :) I have to agree however. I've actually worked on 'cavern code' that had more than an ENTIRE PAGE of just closing braces.... And that was on a 1920x1600 monitor (or whatever the exact res is).
LarryF
You should check out "Spartan programming" - this seems like a similar style.
Keith
It is not indentation you are arguing against, its deeply nested conditional and loop blocks. I fully concur in that regard. I've found that enforcing a code style with a maximum line length tends to discourage this behavior somewhat.
Kris
Don't forget braces for "if"! use foreach! use (condition ? valueIfTrue : valueIfFalse) If you don't understand, search engine, learn!
moala
I don't like the continue here.
Loren Pechtel
This is a dupe of the higher-ranked answer http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/407507#407507
Ether
+13  A: 

I have a few... there are exceptions to everything so these are not hard and fast, but they do apply in most cases

Nobody cares if your website validates, is XHTML strict, is standards-compliant, or has a W3C badge.

It may earn you some high-fives from fellow Web developers, but the rest of the people looking at your site couldn't give a crap whether you've validated your code or not. The vast majority of Web surfers are using IE or Firefox, and since both of those browsers are forgiving of nonstandard, nonstrict, invalidated HTML, you really don't need to worry about it. If you've built a site for a car dealer, a mechanic, a radio station, a church, or a local small business, how many people in any of those businesses' target demographics do you think care about valid HTML? I'd hazard a guess it's pretty close to 0.

Most open-source software is useless, overcomplicated crap.

Let me install this nice piece of OSS I've found. It looks like it should do exactly what I want! Oh wait, first I have to install this other window manager thingy. OK. Then I need to get this command-line tool and add it to my path. Now I need the latest runtimes for X, Y, and Z. Now I need to make sure I have these processes running. OK, great... it's all configured. Now let me learn a whole new set of commands to use it. Oh cool, someone built a GUI for it. I guess I don't need to learn these commands. Wait, I need this library on here to get the GUI to work. Gotta download that now. OK, now it's working... crap, I can't figure out this terrible UI.

Sound familiar? OSS is full of complication for complication's sake, tricky installs that you need to be an expert to perform, and tools that most people wouldn't know what to do with anyway. So many projects fall by the wayside, others are so niche that very few people would use them, and some of the decent ones (FlowPlayer, OSCommerce, etc.) have such ridiculously overcomplicated and bloated source code that it defeats the purpose of being able to edit the source. You can edit the source... if you can figure out which of the 400 files contains the code that needs modification. You're really in trouble when you learn that it's all 400 of them.

nerdabilly
I wish I could vote to make you God. Really, this is amazing stuff.
Jonathan C Dickinson
On the other hand the best OSS packages are huge force multipliers. These are the well-designed, well-maintained ones that have big communities of users and developers (and real published books). Some examples of these are Rhino (Javascript interpreter), Xerces (XML Parser), Restlet (REST Web Services), and jQuery (Javascript GUI development). Others really do suck, like Axis 1.x.
Jim Ferrans
Screen readers and other accessibility tools perform better if the HTML conforms to standards. As for OSS .. your reasoning is deeply flawed in applying your own negative experience to all OSS works. Sure modifying OSS projects can be difficult (impossible for many) but I've lost count of the OSS libraries I've used to save myself tons of work on various projects. If most OSS is useless it is only because there is so much of it. There is a lot of very useful OSS out there.
Kris
Everything WWW sucks anyway, so for the first point I cannot care less.+100 for the second.
MaD70
long live the `sudo apt-get install`
hasen j
+2  A: 

Believe it or not, my belief that, in an OO language, most of the (business logic) code that operates on a class's data should be in the class itself is heresy on my team.

moffdub
I second your opinion. I can't stand it when someone goes with the excuse that "Classes should be minimal, clean, simple" and writes a close to useless class that merely aggregates data - and then builds the logic about this data everywhere else.
Daniel Daranas
Hmm. So, the `cut` method should be a member of which class? `meat`, `vegetable`, `knife`, `scissors`, `kitchen_table`, `workbench` ...?
Svante
Without any further information, I'd say that the knife cuts a cuttable object: Knife.cut(ICuttable something). Of course, if you only have one cuttable object, like meat, and many things that cut the meat, then you want Meat.cutWith(ICutter something).
moffdub
+37  A: 

I'm probably gonna get roasted for this, but:

Making invisible characters syntactically significant in python was a bad idea

It's distracting, causes lots of subtle bugs for novices and, in my opinion, wasn't really needed. About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students. And even if code doesn't follow "nice" standards, there are plenty of tools out there to coerce it into a more pleasing shape.
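
A contrived but runnable sketch of the class of bug being described (my example) - the only difference between the two functions below is indentation depth, and both are perfectly legal Python:

def total(items):
    result = 0
    for item in items:
        result += item
    return result          # dedented: returns after the loop finishes

def broken_total(items):
    result = 0
    for item in items:
        result += item
        return result      # one level deeper: returns after the FIRST item

print(total([1, 2, 3]))         # 6
print(broken_total([1, 2, 3]))  # 1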

Paul Wicks
A well configured editor can help you here. Most editors can show invisibles and vim for one can highlight those invisible mistakes in red to make them really obvious.
mcrute
I think that the bad idea becomes more obvious when you think about the ridiculous limitation of `lambda` in Python.
Svante
The number of times I've had a python script fail because I put a blank line in my code in a for loop, and the blank line didn't have enough spaces... Makes me want to not space my code with blank lines.
Cameron MacFarland
I don't agree with you, but +1 because it _is_ controversial
hasen j
It was also true of the original Unix make command. Actions had to be one tab space in; if you used spaces instead, an action looked like a syntax error. Ugh!
Jim Ferrans
History repeats itself. We didn't learn from Fortran output formatting or from make files so why be surprised that someone thought it was a good idea for python? It won't be the last time.
Kelly French
@mcrute: if you have to build a special-purpose tool just to work with the language, that sounds like a problem to me.
Paul Nathan
"About the only code I've ever seen that didn't voluntarily follow some sort of decent formatting guide was from first-year CS students."So how is this a problem?
fengb
@Paul Nathan: if you have to build a special-purpose tool to write well-indented code with a braces language, that sounds like a problem to me.
Beni Cherniavsky-Paskin
+1  A: 

Opinion: Duration in the development field does not always mean the same as experience.

Many trades look at "years of experience" in a language. Yes, 5 years of C# can make sense, since you may learn new tricks and whatnot. However, if you are with the same company and maintaining the same code base for a number of years, I feel you are not gaining as much exposure to different situations as a person who works on a variety of projects and client needs.

I once interviewed a person who prided himself on having 10 years of programming experience and worked with VB5, 6, and VB.Net... all in the same company during that time. After more probing, I found out that while he worked with all of those versions of VB, he was only upgrading and constantly maintaining his original VB5 app. He never modified the architecture and let the upgrade wizards do their thing. I have interviewed people who only have 2 years in the field but whose varied projects gave them more "experience" than he had.

JamesEggers
+6  A: 

1. You should not follow web standards - all the time.

2. You don't need to comment your code.

As long as it's understandable by a stranger.

davethegr8
+6  A: 

As there are hundreds of answers to this, mine will probably end up unread, but here's my pet peeve anyway.

If you're a programmer then you're most likely awful at Web Design/Development

This website is a phenomenal resource for programmers, but an absolutely awful place to come if you're looking for XHTML/CSS help. Even the good Web Developers here are handing out links to resources that were good in the 90's!

Sure, XHTML and CSS are simple to learn. However, you're not just learning a language! You're learning how to use it well, and very few designers and developers can do that, let alone programmers. It took me ages to become a capable designer and even longer to become a good developer. I could code in HTML from the age of 10 but that didn't mean I was good. Now I am a capable designer in programs like Photoshop and Illustrator, I am perfectly able to write a good website in Notepad and am able to write basic scripts in several languages. Not only that but I have a good nose for Search Engine Optimisation techniques and can easily tell you where the majority of people are going wrong (hint: get some good content!).

Also, this place is a terrible resource for advice on web standards. You should NOT just write code to work in the different browsers. You should ALWAYS follow the standard to future-proof your code. More often than not the fixes you use on your websites will break when the next browser update comes along. Not only that but the good browsers follow standards anyway. Finally, the reason IE was allowed to ruin the Internet was because YOU allowed it by coding your websites for IE! If you're going to continue to do that for Firefox then we'll lose out yet again!

If you think that table-based layouts are as good, if not better than CSS layouts then you should not be allowed to talk on the subject, at least without me shooting you down first. Also, if you think W3Schools is the best resource to send someone to then you're just plain wrong.

If you're new to Web Design/Development don't bother with this place (it's full of programmers, not web developers). Go to a good Web Design/Development community like SitePoint.

EnderMB
Goes for GUI design too. Especially with new technologies like WPF making GUI design more like web design with CSS like files defining styles for the interface.
Cameron MacFarland
I completely agree. Unfortunately, I find at most companies I'm the developer and the designer at the same time. It's like saying "hey, you're a good writer, you'd be a great illustrator too!" -- ummm, no.
Juliet
+8  A: 

Sometimes jumping on the bandwagon is ok

I get tired of people exhibiting "grandpa syndrome" ("You kids and your newfangled Test Driven Development. Every big technology that's come out in the last decade has sucked. Back in my day, we wrote real code!"... you get the idea).

Sometimes things that are popular are popular for a reason.

Jason Baker
http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html
Cameron MacFarland
Not controversial enough. To be controversial, replace sometimes with always.
Coding With Style
My problem is that otherwise good ideas become bandwagons. My favorite example is OOP, a useful idea that became a binge. In most of the performance tuning I do, the culprit, ultimately, is that a Queen Mary was built, when a rowboat would have sufficed.
Mike Dunlavey
@Mike Dunlavey - I agree 100%. But it's not fair to reject an idea on that basis (which a lot of people do).
Jason Baker
Sadly, you're right. I've seen it too.
Mike Dunlavey
... talk about old-time code, how about this: `//SYSUT2 DD UNIT=(TAPE1600,,DEFER),VOL=SER=SPROOOF,LABEL=(1,NL),DISP=(,KEEP)` cranked out standing up at a keypunch.
Mike Dunlavey
+3  A: 

Women make better programmers than men.

The female programmers I've worked with don't get wedded to "their" code as much as men do. They're much more open to criticism and new ideas.

WOPR
While my experience agrees with your explanation (based on only 2 data points however), I don't agree with your assessment that not being wedded to their code is all it takes to be a better programmer.
Cameron MacFarland
Good programmers form a strong emotional attachment to their code because they're passionate about creating the best solutions they can. It almost represents the best they can be .. hence the ego attachment. It's both blinding to better solutions and a key driver to improve. IMHO, it's mostly good!
John MacIntyre
There exist woman programmers ??? ;-)
Seventh Element
I've not seen a correlation between sex and code-base ownership, either. (Though only two data points, also.) Care to expand on your answer?
Stu Thompson
+1 for controversy
Coding With Style
I've never seen this to be true.. though I *have* seen women make better *designers* than men
warren
+3  A: 

If you can only think of one way to do it, don't do it.

Whether it's an interface layout, a task flow, or a block of code, just stop. Do something to collect more ideas, like asking other people how they would do it, and don't go back to implementing until you have at least three completely different ideas and at least one crisis of confidence.

Generally, when I think something can only be done one way, or think only one method has any merit, it's because I haven't thought through the factors which ought to be influencing the design thoroughly enough. If I had, some of them would clearly be in conflict, leading to a mess and thus an actual decision rather than a rote default.

Being a solid programmer does not make you a solid interface designer

And following all of the interface guidelines in the world will only begin to help. If it's even humanly possible... There seems to be a peculiar addiction to making things 'cute' and 'clever'.

Kim Reece
+4  A: 

When someone dismisses an entire programming language as "clumsy", it usually turns out he doesn't know how to use it.

Jami
+5  A: 

Separation of concerns is evil :)

Only separate concerns if you have good reason for it. Otherwise, don't separate them.

I have encountered too many occasions of separation only for the sake of separation. The second half of Dijkstra's statement "Minimal coupling, maximal cohesion" should not be forgotten. :)

Happy to discuss this further.

+1 for the Dijkstra quote... but I disagree with you... so +1 for the controversial opinion... everything in moderation.
ceretullis
you don't work for the same guys I do, do you???
Chris Huang-Leaver
+2  A: 

2 space indent.

No discussion. It just has to be that way ;-)

Fergie
How about a compromise? You add a space and I give up a space and everybody's happy? ;-)
Seventh Element
There was an argument among the Delphi programmers at my company about whether to use 2-space indents or 4-space indents. We settled on 3-spaces to offend all parties equally.
Juliet
3-space is the ugliest indent I've ever seen. It just does not fit into my happy 2^n world. 2 is just for those who want to write code with 20-way and more indentation, i.e. for those whose application consists of one monster class. Or a monster function.
phresnel
+5  A: 

I hate universities and institutes offering short courses for teaching programming to newcomers. It is an outright disgrace and contempt for the art and science of programming.

They start teaching C, Java, VB (disgusting) to people without a good grasp of hardware and the fundamental principles of computers. They should first be taught about the MACHINE by books like Morris Mano's Computer System Architecture, and then taught the concept of instructing the machine to solve problems, instead of having the semantics and syntax of one programming language etched into them.

Also I don't understand government schools and colleges teaching children the basics of computers using commercial operating systems and software. At least in my country (India), not many students can afford to buy operating systems or even discounted office suites, let alone the development software juggernauts (compilers, IDEs etc). This prompts theft and piracy, and makes the act of copying and stealing software from their institutes' libraries seem justified.

Again they are taught to use some products not the fundamental ideas.

Think about it: what if you were taught only that 2 x 2 is 4, and not the concept of multiplication?

Or taught how to measure the length of a pole leaning against your school's compound wall, but never the Pythagorean theorem?

TheVillageIdiot
+3  A: 

Programmers take their (own little limited stupid) programming language as a sacrosanct religion.

It's so funny how programmers take these discussions almost like religious believers do: no criticism allowed, (often) no objective discussion, (very often) arguing based upon very limited or absent knowledge and information. For confirmation, just read the previous answers, and especially the comments.

Also funny, and another confirmation: by the definition of the question "give me a controversial opinion", no controversial opinion should qualify for negative votes - actually the opposite: the more controversial, the better. But how do our programmers react? Like Pavlov's dogs, voting negative on disliked opinions.

PS: I upvoted some others for fairness.

blabla999
Thanks daddy (mum?) for reminding us the rules of the game.
MaD70
P.S.: I don't have the habit to salivate on Stackoverflow... ok, it is a site for perverts, but is not porn.
MaD70
+1  A: 

Software engineers should not work with computer science guys

Their differences:

SEs care about code reusability, while CSs just suss out code.
SEs care about performance, while CSs just want to have things done now.
SEs care about the whole structure, while CSs do not give a toss...

I'm a computer science guy and I can agree whole heartedly. The CS guys come in, the SE guys are jealous/threatened so they spend all their time trying to prove that the CS guy can't program using "good software engineering" (whatever that is)... meanwhile the CS guy thinks his job is sh*t because
ceretullis
he would rather be working on algorithms, AI, or something else interesting/useful at a larger scope. Not solving some stupid brainless SE problem that could have been avoided. Best to keep separated.
ceretullis
"Don't call me a computer scientist. I'm a coder. I hate computer scientists. If I wanted to deal with people who're more concerned with correctness according to some set of made-up rules than with functionality, I'd go to a church."
chaos
+25  A: 

Cowboy coders get more done.

I spend my life in the startup atmosphere. Without the Cowboy coders we'd waste endless cycles making sure things are done "right".

As we know, it's basically impossible to foresee all issues. The Cowboy coder runs head-on into these problems and is forced to solve them much more quickly than someone who tries to foresee them all.

Though, if you're Cowboy coding you had better refactor that spaghetti before someone else has to maintain it. ;) The best ones I know use continuous refactoring. They get a ton of stuff done, don't waste time trying to predict the future, and through refactoring it becomes maintainable code.

Process always gets in the way of a good Cowboy, no matter how Agile it is.

JD Conley
If they're refactoring where appropriate, I probably wouldn't call them cowboys...
Jon Skeet
To me a cowboy is someone who just jumps into a problem and recklessly writes code, rather than thinking about, estimating, and designing something first. They do it without any regard to a process or accountability other than "it better get done, as fast as possible".
JD Conley
You! You're the idiot who came up with the legacy system that 5 years later I'm hired to deal with. I've spent most of my life working on 5+ year old code that, because cowboys worked on it, has ossified into an inflexible mess too brittle to be modified or added to.
Cameron MacFarland
Cameron: I think you need a new profession. Sounds like your job sucks. :)
JD Conley
No my current job doesn't suck, but that's because I'm not working on a creaking legacy system. I suppose it's unfair to only blame the cowboys for those systems, as they started ok, and then 5+ years of patches got applied. Now I ask how old the code is in interviews.
Cameron MacFarland
I'd like the cowboys to think a little, but not so much they need to write a supporting design document first or anything like that. I agree that often designers get stuck in the "what about this scenario" syndrome.
Cameron MacFarland
@Cameron: Yes, it's unfair to blame only the cowboys. Blame their managers.
Daniel Daranas
Wel call them "Ninja programmers" because there's nothing they can't do. (Just like ninjas)
Faruz
A: 

Use of design patterns and documentation

In web development, what's the use of these things? I've never felt any need for them.

+3  A: 

Member variables should never be declared private (in java)

If you declare something private, you prevent any future developer from deriving from your class and extending the functionality. Essentially, by writing "private" you are implying that you know more now about how your class can be used than any future developer might ever know. Whenever you write "private", you ought to write "protected" instead.

Classes should never be declared final (in java)

Similarly, if you declare a class as final (which prevents it from being extended -- prevents it from being used as a base class for inheritance), you are implying that you know more than any future programmer might know about what is the right and proper way to use your class. This is never a good idea. You don't know everything. Someone might come up with a perfectly suitable way to extend your class that you didn't think of.

Java Beans are a terrible idea.

The java bean convention -- declaring all members as private and then writing get() and set() methods for every member -- forces programmers to write boilerplate, error-prone, tedious, and lengthy code, where no code is needed. Just make the member variables public! Trust in your ability to change it later, if you need to change the implementation (hint: 99% of the time, you never will).
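To put the boilerplate complaint in concrete terms, compare a hypothetical two-field class written both ways (the names are invented for illustration):

// The bean convention: eight lines of ceremony for two values.
class PointBean {
    private int x;
    private int y;
    public int getX() { return x; }
    public void setX(int x) { this.x = x; }
    public int getY() { return y; }
    public void setY(int y) { this.y = y; }
}

// The same data with public fields: shorter, and nothing to get wrong.
class Point {
    public int x;
    public int y;
}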

Protected member variables are so very wrong! It breaks encapsulation and leads to SERIOUS problems. Only methods should ever be declared as protected.
Seventh Element
+7  A: 

Assembly is the best first programming language.

+1 for that... probably too hard for most people to grasp... nothing like weeding out the weak ones. ;)
ceretullis
+2  A: 

Code as Design: Three Essays by Jack W. Reeves

The source code of any software is its most accurate design document. Everything else (specs, docs, and sometimes comments) is either incorrect, outdated or misleading.

Guaranteed to get you fired pretty much everywhere.

fbonnet
+85  A: 

Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating an ORM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.

Jonathan C Dickinson
You are mistaking "lazy" for "clever". A clever programmer will actually have to work less, which may make him/her look "lazy".
Seventh Element
@Diego, tnx, changed it to make it more appropriate.
Jonathan C Dickinson
@Diego: I disagree! The Term "lazy" as applied to programmers is something I've heard and used many times before. (I think I first read it in a article by Larry Wall) It is a badge of honor!
scraimer
Laziness is the fulcrum of all human advancement. If we were not lazy we would still be hunting boars with spears for supper.
AZ
I like to say, "I'm not lazy; I'm efficient."
Tracy Probst
I agree with what you're trying to say, but I disagree with your definition of lazy. A lazy programmer does not look ahead; they will copy-paste a block of code between 4 different functions if it's the easiest thing to do at the time.
DisgruntledGoat
lazy/clever programmer... Programmers have to be clever to be reasonable programmers, so that's a given. A lazy programmer picks the shortest/easiest path to the solution of a problem. And this is not about copy/pasting the same code snippet 400 times, but rather finding a way to avoid copying the same code 400 times. That way the code can be easily changed in one place! The lazy programmer likes to only change the code in one place ;) The lazy programmer also knows that the code is likely to be changed several times. And the lazy programmer just hates finding the 400 snippets twice.
Zuu
Though I agree with your explanation, lazy isn't really the best word to describe this. Lazy - resistant to work or exertion. I know a lazy programmer who is too lazy to create a bat file to automate a simple task that I see him type out all the time. If he would just spend a little time to make a few bat files it would increase his productivity. It turns out he is a good developer; however, he could be even better.
gradbot
For the most part, I agree. However in HTML coding this is not the case. Lazy HTML coders use tables for layouts, and lazy back end coders cut and paste instead of using includes. Having just slogged through someone else's code, I am very much aware of this phenomenon. *shudder*
Elizabeth Buckwalter
It's hard to tell whether programmers are the hardest-working lazy people on the planet, or the laziest hard-working people on the planet.
baultista
-1 . I'm VERY lazy + I never wrote tools to automate things because I never saw any value in them. Developing tools is a one time huge additional amount of work that no true lazy person will be able to commit to.
Blub
+1 for Seventh Element/Zuu. Lazy programmers = much code. Smart programmers = less + better code.
Exa
+2  A: 

Tcl/Tk is the best GUI language/toolkit combo ever

It may lack specific widgets and be less good-looking than the new kids on the block, but its model is elegant and so easy to use that one can build working GUIs faster by typing commands interactively than by using a visual interface builder. Its expressive power is unbeatable: other solutions (Gtk, Java, .NET, MFC...) typically require ten to one hundred LOC to get the same result as a Tcl/Tk one-liner. All without even sacrificing readability or stability.

pack [label .l -text "Hello world!"] [button .b -text "Quit" -command exit]
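For comparison, a rough Swing equivalent of that one-liner might look like this (a hand-written sketch of my own, using the classic pre-lambda listener boilerplate, so details may vary):

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;

public class Hello {
    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.add(new JLabel("Hello world!"), BorderLayout.NORTH);
        JButton quit = new JButton("Quit");
        quit.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) { System.exit(0); }
        });
        frame.add(quit, BorderLayout.SOUTH);
        frame.pack();
        frame.setVisible(true);
    }
}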
fbonnet
Oh my god is that controversial! :P
Slace
I'd say WPF gives Tcl/Tk a run for it's money.
Cameron MacFarland
I suggest having a look at pygtk and/or Groovy SwingBuilder. Both concepts are more powerful than the strange Tcl syntax and achieve the same result with a modern widget set and little code. And if you want to see something really cool, google for "enthought traits".
Aaron Digulla
Good one. Ugh. Tcl/Tk. because uglier is gooder.
Warren P
WPF should have been called WTF. Because hand-tuning XML is my idea of a RAD good time.
Warren P
+5  A: 

Design patterns are a waste of time when it comes to software design and development.

Don't get me wrong, design patterns are useful but mainly as a communication vector. They can express complex ideas very concisely: factory, singleton, iterator...

But they shouldn't serve as a development method. Too often developers architect their code using a flurry of design pattern-based classes where a more concise design would be better, both in terms of readability and performance. All that with the illusion that individual classes could be reused outside their domain. If a class is not designed for reuse or isn't part of the interface, then it's an implementation detail.

Design patterns should be used to put names on organizational features, not to dictate the way code must be written.

(It was supposed to be controversial, remember?)

fbonnet
A: 

Managers know everything

It's been my experience that managers usually didn't get there by knowing code. No matter what you tell them, it's too long, not right, or too expensive.

And another that follows on from the first:

There's never time to do it right but there's always time to do it again

A good engineer friend once said that in anger to describe a situation where management halved his estimates, got a half-assed version out of him then gave him twice as much time to rework it because it failed. It's a fairly regular thing in the commercial software world.

And one that came to mind today while trying to configure a router with only a web interface:

Web interfaces are for suckers

The CLI on the previous version of the firmware was oh so nice. This version has a web interface, which attempts to hide all of the complexity of networking from clueless IT droids, and can't even get VLANs correct.

Adam Hawes
agree with the second statement.
Blub
I think the downvoter failed to detect the sarcasm. I 100% agree with the statements - sarcastically... :P
sims
+8  A: 

A good developer needs to know more than just how to code

superwiren
It's not that I don't agree but I like to see more explanation or examples.
tuinstoel
Is this controversial?
Stefan Steinegger
+7  A: 

Write your spec when you are finished coding. (if at all)

In many projects I have been involved in, a great deal of effort was spent at the outset writing a "spec" in Microsoft Word. This process culminated in a "sign off" meeting when the big shots bought in on the project, and after that meeting nobody ever looked at the document again. These documents are a complete waste of time and don't reflect how software is actually designed. This is not to say there are no other valuable artifacts of application design. They are usually contained on index cards, snapshots of whiteboards, cocktail napkins and other similar media that provide a kind of timeline for the app design. These are usually the real specs of the app. If you are going to write a Word document (and I am not particularly saying you should), do it at the end of the project. At least it will accurately represent what has been done in the code and might help someone down the road, like the QA team or the next version's developers.

Andrew Cowenhoven
"nobody ever looked at this document again" Not true. When I started at a new job I was given "the spec" folder, and told to read it as my first task. Then I started asking "where's this feature" the answer was "I didn't know that was in the spec." The spec folder was given to all new employees.
Cameron MacFarland
Yes, that happened to me too - several times. I stand corrected.
Andrew Cowenhoven
I think this is *highly* situational. Do you need a spec for an internal project that you'll write in 3 days? Probably not. Are you willing to bet a multi-million dollar project on the client understanding everything you say to the letter? I would hope not.
Jason Baker
+30  A: 

It's a good idea to keep optimisation in mind when developing code.

Whenever I say this, people always reply: "premature optimisation is the root of all evil".

But I'm not saying optimise before you debug. I'm not even saying optimise ever, but when you're designing code, bear in mind the possibility that this might become a bottleneck, and write it so that it will be possible to refactor it for speed, without tearing the API apart.
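As an example of designing for later optimisation (the interface and names here are hypothetical): keep the potentially-hot code behind a small interface, so a faster implementation can be swapped in without tearing the API apart.

// Callers depend only on this interface, never on how lookups happen.
interface Lookup {
    String find(String key);
}

// First pass: simple, obviously correct, possibly slow.
class LinearLookup implements Lookup {
    private final String[][] pairs;
    LinearLookup(String[][] pairs) { this.pairs = pairs; }
    public String find(String key) {
        for (String[] p : pairs) {
            if (p[0].equals(key)) return p[1];
        }
        return null;
    }
}

// If profiling later shows this is the bottleneck, a HashMap-backed
// implementation can replace it without any caller changing.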

Hugo

Rocketmagnet
That sounds very much like my way of thinking: optimise the architecture/design, not the implementation.
Jon Skeet
+2  A: 

What strikes me as amusing about this question is that I've just read the first page of answers, and so far, I haven't found a single controversial opinion.

Perhaps that says more about the way stackoverflow generates consensus than anything else. Maybe I should have started at the bottom. :-)

Dominic Cronin
There's definitely a mixture of controversial and not. Look for ones with lots of comments :)
Jon Skeet
haha, you know what, I'll mark you down so ppl will read this when looking through these. I agree with you 100%
acidzombie24
lots of controversy on page 2, though :)
Juliet
+9  A: 

VB 6 could be used for good as well as evil. It was a Rapid Application Development environment in a time of over complicated coding.

I have hated VB vehemently in the past, and still mock VB.NET (probably in jest) as a Fisher Price language due to my dislike of classical VB, but in its day, nothing could beat it for getting the job done.

johnc
+1  A: 

Haven't tested it yet for controversy, but there may be potential:

The best line of code is the one you never wrote.

flq
The best lines of code are the ones you don't need to write
Pyrolistical
+2  A: 

Dependency Management Software Does More Harm Than Good

I've worked on Java projects that included upwards of a hundred different libraries. In most cases, each library has its own dependencies, and those dependent libraries have their own dependencies too.

Software like Maven or Ivy supposedly "manage" this problem by automatically fetching the correct version of each library and then recursively fetching all of its dependencies.

Problem solved, right?

Wrong.

Downloading libraries is the easy part of dependency management. The hard part is creating a mental model of the software, and how it interacts with all those libraries.

My unpopular opinion is this:

If you can't verbally explain, off the top of your head, the basic interactions between all the libraries in your project, you should eliminate dependencies until you can.

Along the same lines, if it takes you longer than ten seconds to list all of the libraries (and their methods) invoked either directly or indirectly from one of your functions, then you are doing a poor job of managing dependencies.

You should be able to easily answer the question "which parts of my application actually depend on library XYZ?"

The current crop of dependency management tools do more harm than good, because they make it easy to create impossibly-complicated dependency graphs, and they provide virtually no functionality for reducing dependencies or identifying problems.

I've seen developers include 10 or 20 MB worth of libraries, introducing thousands of dependent classes into the project, just to eliminate a few dozen lines of simple custom code.

Using libraries and frameworks can be good. But there's always a cost, and tools which obscure that cost are inherently problematic.

Moreover, it's sometimes (note: certainly not always) better to reinvent the wheel by writing a few small classes that implement exactly what you need than to introduce a dependency on a large general-purpose library.
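To be fair, Maven can at least print the graph it resolved, which is a reasonable first step toward building that mental model (whether a hundred-library graph fits in anyone's head is another question):

mvn dependency:tree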

benjismith
I disagree. I think DM is a good thing, except Maven did it wrong. So much code depends on other stuff, but we still after so many year haven't figured out how to manage it all. That's one thing we need to fix for SW dev to move forward.
Pyrolistical
+2  A: 

There are some (very few) legitimate uses for goto (particularly in C, as a stand-in for exception handling).

DWright
And bytecodes. Bytecodes NEED gotos. :D
luiscubal
+3  A: 

Inversion of control does not eliminate dependencies, but it sure does a great job of hiding them.

benjismith
+58  A: 

Objects Should Never Be In An Invalid State

Unfortunately, so many of the ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
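A minimal sketch of the always-valid style being advocated, reusing the MyClass example from the snippets above (the specific validation rule is my assumption):

public final class MyClass {
    private final int id;

    public MyClass(int id) {
        // Enforce the invariant at the only entry point.
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive");
        }
        this.id = id;
    }

    public int getId() { return id; }

    // No setId(): the object can never leave a valid state.
}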

benjismith
TOTALLY agree! And I get very frustrated when I see concepts like this become so popular. +1
John MacIntyre
Invalid States lead to exceptions in my experience.
Cameron MacFarland
@Cameron, are you saying that you should be able to initialize with a default constructor, then set each property, checking for an invalid state in each setter and throwing an exception? If so, how can you possibly handle a situation where 2 properties need to be in sync to be valid?
John MacIntyre
That's why I hate ORM frameworks, despite the fact I need them all the time.
Eduardo León
I feel your pain Eduardo. I can't stand ORM frameworks, but sometimes they're the least-worst way to solve a particular problem. But yeah, I hate them too.
benjismith
-1: good idea, but not at all controversial.
Juliet
I dunno. If was uncontroversial, then all of the major frameworks for Java (notably, Spring and Hibernate) wouldn't require me to break the rule in order to use their code.
benjismith
@John: If two properties should be in sync, they are obviously related and should be edited together in a method: SetBothProperties( a, b )
Lennaert
Sadly, serialization requires the existence of zero-arg constructors.
tuinstoel
RAII - resource acquisition is initialization. FTW
George
Sometimes it's sufficient to have protected zero arg constructors. That might help a little.
hstoerr
This sort of reminds me of structs in Windows API programming. I could never figure out which fields I needed to set in order to have a valid instance of a struct like STARTUPINFO for example. Very frustrating.
dacris
I had never heard anyone state this explicitly before. It is brilliantly simple--I like it.
Stargazer712
+4  A: 

Sometimes it's appropriate to swallow an exception.

For UI bells and whistles, prompting the user with an error message is an interruption, and there is usually nothing for them to do anyway. In this case, I just log it, and deal with it when it shows up in the logs.
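A sketch of that pattern using java.util.logging (the class and method names are invented for illustration):

import java.util.logging.Level;
import java.util.logging.Logger;

class Tooltips {
    private static final Logger LOG = Logger.getLogger(Tooltips.class.getName());

    void showPreview() {
        try {
            renderFancyPreview(); // purely cosmetic; failure is not fatal
        } catch (RuntimeException e) {
            // Swallow it: the user can't act on this, so log and move on.
            LOG.log(Level.WARNING, "preview rendering failed", e);
        }
    }

    private void renderFancyPreview() { /* ... */ }
}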

John MacIntyre
I always took the 'rule' as don't do the following, rather than "don't raise to the user": try {evil();} catch(Exception e){//swallow}
Stu Thompson
+33  A: 

Don't write code, remove code!

As a smart teacher once told me: "Don't write code, Writing code is bad, Removing code is good. and if you have to write code - write small code..."

Gal Goldman
A: 

Microsoft Sucks
I don't think I need to say more.

Unkwntech
On the contrary: if I told you you suck, you'd probably want to know why. Right?
Seventh Element
The answer was more of a joke than a real answer, so if you told me I sucked then started laughing I probably wouldn't care.
Unkwntech
Not controversial
Christopher W. Allen-Poole
I don't think it sucks
Jader Dias
argh! you dethrone my most controversial opinion.
acidzombie24
+7  A: 

That best practices are a hazard because they ask us to substitute slogans for thinking.

Flinkman
http://en.wikipedia.org/wiki/Thought-terminating_clich%C3%A9
Demur Rumed
The Blind following the blind.
WolfmanDragon
+9  A: 

Code Generation is bad

I hate languages that require you to make use of code generation (or copy&paste) for simple things, like JavaBeans with all their Getters and Setters.

C#'s AutoProperties are a step in the right direction, but for nice DTOs with Fields, Properties and Constructor parameters you still need a lot of redundancy.

Lemmy
Code Generation is bad... so do you hate compilers also? (Hint: code generation is a broad subject, don't be deceived by crappy languages/frameworks).
MaD70
+3  A: 

Good Performance VS Elegant Design

They are not mutually exclusive but I can't stand over-designed class structures/frameworks that completely have no clue about performance. I don't need to have a string of new This(new That(new Whatever())); to create an object that will tell me it's 5 AM in the morning oh by the way, it's 217 days until Obama's birthday, and the weekend is 2 days away. I only wanted to know if the gym was open.

Having balance between the two is crucial. The code needs to get nasty when you need to push the processor to do something intensive, such as reading terabytes of data. Save the elegance for the places that consume 10% of the resources, which is probably more than 90% of the code.

Jeremy Edwards
Ironically, reading a lot of data is often *not* CPU intensive, because CPUs are so much faster than disks :)
Jon Skeet
I agree. I wasn't super keen on making that distinction when writing my response, but there are also many times when just the memory overhead will cause you to start swapping.
Jeremy Edwards
A: 

Controversial to self, because some things are better be left unsaid, so you won't be painted by others as too egotist. However, here it is:

If it is to be, it begins with me

Hao
A: 

Programming is so easy a five year old can do it.

Programming in and of itself is not hard, it's common sense. You are just telling a computer what to do. You're not a genius, please get over yourself.

mweiss
I'm not a genius and I don't need to get over myself. In fact, not a day goes by that I don't question myself and wonder if just maybe I am a moron. And that's because I'm trying to tell a computer what it should do, and me realising that I'm not explaining it well enough.
Seventh Element
Please submit your five-year-old's resume to my HR personnel. ;)
Eddie Parker
Explain memory management to a 5-year old.
Eduardo León
Explain x = 4 * 7 to a 5-year old.
Cameron MacFarland
this is a pretty controversial opinion - so why the downvote? i'm confused
Simon_Weaver
Maybe if you program like a five year old. I think a programmer requires a certain amount of maturity and understanding to help them understand it all a little more clearly; like why Bubble Sort is not scalable and what an Octree is and what it is useful for...
Robert Massaioli
I started when I was 4, so +1.
Coding With Style
Programming *can* be done by a 5-year-old. *Good* programming takes experience, self-discipline, and self-criticism, not traits found in your average 5-year-old (or many professionals, either).
DevSolar
This isn't controversial, it's both stupid and factually incorrect. You are only telling a computer "what" to do if you are using a purely functional language. You'll find that writing functional code takes a bit more than "common sense" and more mental capacity than that of a five year old. If you are using an imperative language such as C then you are not just telling the computer what to do, you have to explicitly state "how" to do "what" you want.
Tim
i started programming when i was 1. i used a 1-bit stream to tell my mother to change my pampers. it was {guess}.
Behrooz
+8  A: 

If you need to read the manual, the software isn't good enough.

Plain and simple :-)

simonjpascoe
I agree that there is a lot of software that could do without a manual if it had been designed with a greater emphasis on usability. But even when you can figure out stuff without a manual, having a manual might let you figure out stuff quicker!
Seventh Element
I kinda agree with this. I've seen quite a few applications that were badly designed, but sometimes the bad design is at the customer's own request (this is how our previous application worked and we don't want to change too much even if it sucks). In these circumstances, whether something is good enough or not is not decided by the development team but by the customer. You may disagree but the customer is always right ;-)
Seventh Element
+4  A: 

Microsoft Windows is the best platform for software development.

Reasoning: Microsoft spoils its developers with excellent and cheap development tools, the platform and its APIs are well documented, the platform is evolving at a rapid rate which creates a lot of opportunities for developers, the OS has a large user base which is important for obvious commercial reasons, there is a big community of Windows developers, and I haven't yet been fired for choosing Microsoft.

Seventh Element
There are plenty of free well documented stuff for Linux, and the message boards are filled with activity. I've done both and I always have just as much hassle setting up a development environment on either OS.
Bernard
I think the keyword here is "spoils". Little else in my life (not even the bullies at school) has caused me so much pain and suffering as anything which originated from M$.
Aaron Digulla
Microsoft Windows is the best platform for developing Desktop Applications. That isn't controversial. It is the worst platform for developing anything low level - such as filesystems, or kernel code. It is also worse in general for webapps.
nosatalian
The only platform that 'spoils' its dev with "excellent and cheap development tools" is Apple with Xcode. Sure - VisualStudio Express is free. But VS isn't. Linux tools are just as free as the Mac OS X ones, but harder to setup merely because you don't just copy Xcode to your Applications folder and start going.
warren
I never had the same experience with windows. I switched to Linux and am much happier with it.
Shawn B
+9  A: 

Most developers don't have a clue

Yup .. there you go. I've said it. I find that from all the developers that I personally know .. just a handful are actually good. Just a handful understand that code should be tested ... that the Object Oriented approach to developing is actually there to help you. It frustrates me to no end that there are people who get the title of developer while in fact all they can do is copy and paste a bit of source code and then execute it.

Anyway ... I'm glad initiatives like stackoverflow are being started. It's good for developers to wonder. Is there a better way? Am I doing it correctly? Perhaps I could use this technique to speed things up, etc ...

But nope ... the majority of developers just learn a language that they are required by their job and stick with it until they themselves become old and grumpy developers that have no clue what's going on. All they'll get is a big paycheck since they are simply older than you.

Ah well ... life is unjust in the IT community and I'll be taking steps to ignore such people in the future. Hooray!

SpoBo
Sometimes I wonder if employers are to blame for not acknowledging, demanding, and rewarding good talent.
Bernard
+7  A: 

Social skills matter more than technical skills

Agreeable but average programmers with good social skills will have a more successful career than outstanding programmers who are disagreeable people.

Seventh Element
+1 I couldn't agree more. Building software is a social activity more than a technical one.
JuanZe
+3  A: 

Software Development is a VERY small subset of Computer Science.

People sometimes seem to think the two are synonymous, but in reality there are so many aspects to computer science that the average developer rarely (if ever) gets exposed to. Depending on one's career goals, I think there are a lot of CS graduates out there who would probably have been better off with some sort of Software Engineering education.

I value education highly, have a BS in Computer science and am pursuing a MS in it part time, but I think that many people who obtain these degrees treat the degree as a means to an end and benefit very little. I know plenty of people who took the same Systems Software course I took, wrote the same assembler I wrote, and to this day see no value in what they did.

Tina Orooji
++ I wish it worked both ways. I think software engineers could learn a lot from CS, and I *know* CS could learn a lot from SE, such as what *actually matters* to the real world.
Mike Dunlavey
+29  A: 

Source files are SO 20th century.

Within the body of a function/method, it makes sense to represent procedural logic as linear text. Even when the logic is not strictly linear, we have good programming constructs (loops, if statements, etc) that allow us to cleanly represent non-linear operations using linear text.

But there is no reason that I should be required to divide my classes among distinct files or sort my functions/methods/fields/properties/etc in a particular order within those files. Why can't we just throw all those things within a big database file and let the IDE take care of sorting everything dynamically? If I want to sort my members by name then I'll click the member header on the members table. If I want to sort them by accessibility then I'll click the accessibility header. If I want to view my classes as an inheritance tree, then I'll click the button to do that.

Perhaps classes and members could be viewed spatially, as if they were some sort of entities within a virtual world. If the programmer desired, the IDE could automatically position classes & members that use each other near each other so that they're easy to find. Imagine being able to zoom in and out of this virtual world. Zoom all the way out and you can see namespace galaxies with little class planets in them. Zoom in to a namespace and you can see class planets with method continents and islands and inner classes as orbiting moons. Zoom in to a method, and you see... the source code for that method.

Basically, my point is that in modern languages it doesn't matter what file(s) you put your classes in or in what order you define a class's members, so why are we still forced to use these archaic practices? Remember when Gmail came out and Google said "search, don't sort"? Well, why can't the same philosophy be applied to programming languages?

Walt D
Stored proc code like T-SQL or PL/SQL is not stored in files.
tuinstoel
The main problem is that a picture is worth 1000 vague words while you can be very specific in text. But I agree that we really need a "birds eye development" mode where you can hack together a rough outline and let the IDE fill 99% of the gaps with defaults.
Aaron Digulla
I believe smalltalk does this. Yet strangely it's still not a widely used language.
Jeremy Wall
+1 for really cool idea. -1 not terribly controversial. I like the idea of perhaps seeing method declarations in a 3d space with calls to other methods shown using lines / color / something. Perhaps it would be a mess, perhaps it would make an overall code overview easier to grasp? Dunno how much of this smalltalk does as suggested above.
Arj
YES! I really wish programmers get over the cult of linear plaintext soon. Modern IDEs do take steps in the right direction, but it's not enough - they are still just annotating and working on the plaintext, bending it already almost to the breaking point. Instead of hackarounds, we should be shifting the paradigm into expressing application design in forms that are much more suitable for it!
Ilari Kajaste
YES! I miss Visual Age for Java every day I get into Eclipse. VAJ had no source files, just some kind of binary repository :S
JuanZe
Boo! I currently work on a project that is doing all of its development in one of your magic IDEs. The problem with abstracting details is that they have to be defined somewhere. The more obscure (according to your IDE designer), the harder the detail is to find. And if the detail happens to be causing a bug or compiler error, you get to hunt through the IDE for where that detail is. It may be possible to have one of those magic IDEs some day, but not without the Mozart of the HCI field and not without enormous vendor lock-in.
thebretness
The stuff you're talking about is called Model Driven Development. I don't know what the right alternative is, but I suspect it is a blending of scripts, OO, and low-level code in one program, using the right language for each job.
thebretness
IDEs are overrated
hasen j
+6  A: 

You can't measure productivity by counting lines of code.

Everyone knows this, but for some reason the practice still persists!

Noel Walters
Do you realise the topic of the thread is "controversy"? How is your statement controversial?
Seventh Element
it depends who you're talking to. Metrics obsessed managers at my last job found it a very controversial point of view.
Noel Walters
+1  A: 

I don't believe that any question related to optimization should be flooded with a chant of the misquoted "premature optimization is the root of all evil", because code that is optimized into obfuscation is what makes coding fun.

Demur Rumed
+2  A: 

Never implement anything as a singleton.

You can decide not to construct more than one instance, but always ensure your implementation can handle more.

I have yet to find any scenario where using a singleton is actually the right thing to do.

I got into some very heated discussions over this in the last few years, but in the end I was always right.
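A sketch of the alternative (all names here are hypothetical): keep the class ordinary, and let the application's wiring decide to build exactly one.

// An ordinary class: no static instance, no private-constructor tricks.
class Cache {
    private final int maxEntries;
    Cache(int maxEntries) { this.maxEntries = maxEntries; }
}

class Application {
    // The app chooses to build exactly one and pass it around...
    private final Cache sharedCache = new Cache(10000);

    // ...but tests (or a future requirement) can freely build more.
    Cache newIsolatedCache() { return new Cache(100); }
}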

Andreas
I can give you one: to abstract away, transparently, support for different JVM versions, using reflection to load a support instance for J5, gracefully defaulting to one for J2 if the JVM is < J5 - e.g. getting the time in nanoseconds.
Software Monkey
+7  A: 

Premature optimization is NOT the root of all evil! Lack of proper planning is the root of all evil.

Remember the old naval saw

Proper Planning Prevents P*ss Poor Performance!

WolfmanDragon
+1 Amen! I couldn't agree more.
torial
Naval, not navel. :)
John
Thanks John for the catch.
WolfmanDragon
+2  A: 

Useful and clean high-level abstractions are significantly more important than performance

one example:

Too often I watch peers spending hours writing overcomplicated sprocs, or massive LINQ queries which return unintuitive anonymous types, for the sake of "performance".

They could achieve almost the same performance but with considerably cleaner, intuitive code.

andy
A: 

"Don't call virtual methods from constructors". This is only sometimes a PITA, but is only so because in C# I cannot decide at which point in a constructor to call my base class's constructor. Why not? The .NET framework allows it, so what good reason is there for C# to not allow it?

Damn!
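For anyone wondering why the rule exists at all, here is the hazard reproduced in Java, where the same thing happens (example mine): the override runs before the subclass's fields have been assigned.

class Base {
    Base() {
        describe(); // virtual call from a constructor
    }
    void describe() { System.out.println("base"); }
}

class Derived extends Base {
    private final String name;
    Derived() {
        super();               // Base() calls describe() here...
        this.name = "derived"; // ...before this assignment runs
    }
    @Override
    void describe() {
        System.out.println("name = " + name);
    }
}

class Demo {
    public static void main(String[] args) {
        new Derived(); // prints "name = null", not "name = derived"
    }
}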

Peter Morris
+4  A: 

"Everything should be made as simple as possible, but not simpler." - Einstein.

Jeff O
Well that is controversial, quoting Einstein :)
tuinstoel
+1  A: 

Here's mine:

"You don't need (textual) syntax to express objects and their behavior."

I subscribe to the ideas of Jonathan Edwards and his Subtext project - http://alarmingdevelopment.org/

Bjarke Ebert
Jonathan Edwards? Okay, I'm getting a class: "Person". Anyone here need a "Person" class? This Person class passed quite abruptly, something to do with the chest. The Person class also has a Validate() method; it wants me to validate it. You should always validate the classes around you every day!
Peter Morris
+25  A: 

The users aren't idiots -- you are.

So many times I've heard developers say "so-and-so is an idiot" and my response is typically "he may be an idiot but you allowed him to be one."

Austin Salonen
I say: If someone does something stupid, I'm missing an important fact.
Aaron Digulla
Even though i'm guilty of this sometimes, I have to agree.
Neil Aitken
+23  A: 

To produce great software, you need domain specialists as much as good developers.

Daniel Daranas
This is as controversial as a cup of coffee.
Andrew from NZSG
@Andrew from NZSG Like many of the sentences posed here, it has been "controversial" during my past work experience, because more often than not software projects have been developed without keeping that in mind. If something happens most of the time and I disagree with it, I qualify my own opinion as somewhat "controversial", even though it is obvious that I am right.
Daniel Daranas
@Andrew: I once phoned a company about a Java developer job ad, long time ago. They asked me, "Do you know Java?" Yes. "Could you write a book-keeping application?" Err, by myself? No. With a financial advisor next to me, yes. "I see. Thank you for your interest." WTF?
Amadan
+4  A: 

"Programmers are born, not made."

Elroy
+4  A: 

I believe in the Zen of Python

Andrew Szeto
+3  A: 

That, erm, people should comment their code? It seems to be pretty controversial around here...

The code only tells me what it actually does; not what it was supposed to do

The time I see a function calculating the point value of an Australian Bond Future is the time I want to see some comments that indicate what the coder thought the calculation should be!

oxbow_lakes
+1  A: 

People complain about removing 'goto' from the language. I happen to think that any sort of conditional jump is highly overrated, and that 'if', 'while', 'switch' and the general purpose 'for' loop are as well; all should be used with extreme caution.

Every time you make a comparison and conditional jump, a tiny bit of complexity is added, and this complexity adds up quickly once the call stack gets a couple hundred items deep.

My first choice is to avoid the conditional, but if it isn't practical my next preference is to keep the conditional complexity in constructors or factory methods.

Clearly this isn't practical for many projects and algorithms (like control flow loops), but it is something I enjoy pushing on.
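One hedged sketch of "keep the conditional complexity in constructors or factory methods" (the domain and names are mine): branch once, inside the factory, so call sites never branch at all.

interface Shipping {
    double cost(double weightKg);
}

class EuShipping implements Shipping {
    public double cost(double w) { return 5.0 + 0.5 * w; }
}

class RestOfWorldShipping implements Shipping {
    public double cost(double w) { return 8.0 + 0.9 * w; }
}

class Shippings {
    // The only conditional lives here; every call site just uses
    // whichever object it was handed.
    static Shipping forRegion(String region) {
        return "EU".equals(region) ? new EuShipping() : new RestOfWorldShipping();
    }
}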

-Rick

Rick
+11  A: 

Estimates are for me, not for you

Estimates are a useful tool for me, as development line manager, to plan what my team is working on.

They are not a promise of a feature's delivery on a specific date, and they are not a stick for driving the team to work harder.

IMHO if you force developers to commit to estimates you get the safest possible figure.

For instance -

I think a feature will probably take me around 5 days. There's a small chance of an issue that would make it take 30 days.

If the estimates are just for planning then we'll all work to 5 days, and account for the small chance of an issue should it arise.

However - if meeting that estimate is required as a promise of delivery what estimate do you think gets given?

If a developer's bonus or job security depends on meeting an estimate do you think they give their most accurate guess or the one they're most certain they will meet?

This opinion of mine is controversial with other management, and has been interpreted as me trying to worm my way out of having proper targets, or me trying to cover up poor performance. It's a tough sell every time, but one that I've gotten used to making.

Keith
+1 "do you want the estimate for average case or worst case?" "average case" "then don't treat that estimate as a hard limit" duh!
OJW
+12  A: 

Most Programmers are Useless at Programming

(You did say 'controversial')

I was sat in my office at home pondering some programming problem and I ended up looking at my copy of 'Complete Spectrum ROM Disassembly' on my bookshelf and thinking:

"How many programmers today could write the code used in the Spectrum's ROM?"

The Spectrum, for those unfamiliar with it, had a Basic programming language that could do simple 2D graphics (lines, curves), file IO of a sort and floating point calculations including transcendental functions, all in 16K of Z80 code (a < 5MHz 8-bit processor that had no FPU or integer multiply). Most graduates today would have trouble writing a 'Hello World' program that was that small.

I think the problem is that the absolute number of programmers that could do that has hardly changed but as a percentage it is quickly approaching zero. Which means that the quality of code being written is decreasing as more sub-par programmers enter the field.

Where I'm currently working, there are seven programmers including myself. Of these, I'm the only one who keeps up-to-date by reading blogs, books, this site, etc and doing programming 'for fun' at home (my wife is constantly amazed by this). There's one other programmer who is keen to write well structured code (interestingly, he did a lot of work using Delphi) and to refactor poor code. The rest are, well, not great. Thinking about it, you could describe them as 'brute force' programmers - they will force inappropriate solutions until they work after a fashion (e.g. using C# arrays with repeated Array.Resize to dynamically add items instead of using a List).

Now, I don't know if the place I'm currently at is typical, although from my previous positions I would say it is. With the benefit of hindsight, I can see common patterns that certainly didn't help any of the projects (lack of peer review of code for one).

So, 5 out of 7 programmers are rubbish.

Skizz

There are fewer programmers with the skillset to tackle a problem that no longer matters. Now we have higher levels of abstraction that allow the big picture to come together in more loosely coupled, highly OO ways. Its not that I'm not smart enough to write it, its that I can write something better
Steve
BIOSes and hardware drivers probably feature a lot of assembler. Many embedded systems are assembler-only (or primitive C compilers if you're lucky). Even with high level OO, how many coders could write the equivalent of a Spectrum Basic interpreter?
Skizz
+2  A: 

Automatic Updates Lead to Poorer Quality Software that is Less Secure

The Idea

A system to keep users' software up to date with the latest bug fixes and security patches.

The Reality

Products have to be shipped by fixed deadlines, often at the expense of QA. Software is then released with many bugs and security holes in order to meet the deadline in the knowledge that the 'Automatic Update' can be used to fix all the problems later.

Now, the piece of software that really made me think of this is VS2K5. At first, it was great, but as the updates were installed the software is slowly getting worse. The biggest offence was the loss of macros - I had spent a long time creating a set of useful VBA macros to automate some of the code I write - but apparently there was a security hole and instead of fixing it the macro system was disabled. Bang goes a really useful feature: recording keystrokes and repeated replaying of them.

Now, if I were really paranoid, I could see Automatic Updates as a way to get people to upgrade their software by slowly installing code that breaks the system more often. As the system becomes more unreliable, users are tempted to pay out for the next version with the promise of better reliability and so on.

Skizz

So security updates make software less secure?
Greg Dean
No, software is released early, before being fully tested, because it can be updated later - there is less emphasis on creating bug-free code and more on getting it released. This gives a 'window of opportunity' to malicious code.
Skizz
It's not as if MS had a shining reputation for secure, bug-free software before automatic updates came along.
Nate C-K
I'd actually say the reverse is true, overall security and stability have improved in more recent MS software.
Nate C-K
+34  A: 

It's fine if you don't know. But you're fired if you can't even google it.

Internet is a tool. It's not making you stupider if you're learning from it.

Tordek
+1  A: 

There is a difference between a programmer and a developer. An example: a programmer writes pagination logic, a developer integrates pagination on a page.

+20  A: 

The customer is not always right.

In most cases that I deal with, the customer is the product owner, aka "the business". All too often, developers just code and do not try to provide a vested stake in the product. There is too much of a misconception that the IT Department is a "company within a company", which is a load of utter garbage.

I feel my role is that of helping the business express their ideas - with the mutual understanding that I take an interest in understanding the business so that I can provide the best experience possible. And that route implies that there will be times that the product owner asks for something that he/she feels is the next revolution in computing leaving someone to either agree with that fact, or explain the more likely reason of why no one does something a certain way. It is mutually beneficial, because the product owner understands the thought that goes into the product, and the development team understands that they do more than sling code.

This has actually started to lead us down the path of increased productivity. How? Since the communication has improved due to disagreements on both sides of the table, it is more likely that we come together earlier in the process and come to a mutually beneficial solution to the product definition.

joseph.ferris
this is one of those answers I would like to give +100 to
DanSingerman
sigh.. so true! can I buy you a beer?
AlexanderN
+5  A: 

I can live without closures.

Looks like nowadays everyone and their mother want closures to be present in a language because it is the greatest invention since sliced bread. And I think it is just another hype.

serg
I thought along the same lines before I used LINQ, at which point I became a complete convert.
Jon Skeet
I agreed before I used them with multithreading in C#. Access to the previous thread's local variables is enormously useful and greatly simplifies syntax.
Steve
+4  A: 

It IS possible to secure your application.

Every time someone asks a question about how to either prevent users from pirating their app, or secure it from hackers, the answer is that it's impossible. Nonsense. If you truly believe that, then leave your doors unlocked (or just take them off the house!). And don't bother going to the doctor, either. You're mortal - trying to cure a sickness is just postponing the inevitable.

Just because someone might be able to pirate your app or hack your system doesn't mean you shouldn't try to reduce the number of people who will do so. What you're really doing is making it require more work to break in than the intruder/pirate is willing to do.

Just like a deadbolt and ADT on your house will keep the burglars out, reasonable anti-piracy and security measures will keep hackers and pirates out of your way. Of course, the more tempting it would be for them to break in, the more security you need.

Jon B
It is not possible to make an application 100% secure because, in the end, applications are just a collection of bits on a storage device that can be copied and modified. Encryption is not copy protection. It's a trade off between the inevitable pirate and time to develop the defenses.
Skizz
@Skizz: My point is that the impossibility of 100% security is not a reason to give up on "ample" security. You can make your app not worth pirating/hacking just like you can make your house not worth breaking into.
Jon B
A: 

Logger configs are a waste of time. Why have them if it means learning a new syntax, especially one that fails silently? Don't get me wrong, I love good logging. I love logger inheritance and adding formatters to handlers to loggers. But why do it in a config file?

Do you want to make changes to logging code without recompiling? Why? If you put your logging code in a separate class, file, whatever, what difference will it make?

Do you want to distribute a configurable log with your product to clients? Doesn't this just give too much information anyway?

The most frustrating thing about it is that popular utilities written in a popular language tend to write good APIs in the format that language specifies. Write a Java logging utility and I know you've generated the javadocs, which I know how to navigate. Write a domain specific language for your logger config and what do we have? Maybe there's documentation, but where the heck is it? You decide on a way to organize it, and I'm just not interested in following your line of thought.
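For contrast, configuring java.util.logging in plain code looks like this (a sketch; the logger name, handler and level choices are mine). It is compiler-checked and documented in ordinary javadocs, which is exactly the point:

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

class LogSetup {
    static Logger configure() {
        Logger log = Logger.getLogger("com.example.app");
        ConsoleHandler handler = new ConsoleHandler();
        handler.setFormatter(new SimpleFormatter());
        handler.setLevel(Level.FINE);
        log.addHandler(handler);
        log.setLevel(Level.FINE);
        // No config file, no new syntax, and nothing fails silently.
        return log;
    }
}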

David Berger
"Do you want to make changes to logging code without recompiling?Why?"All the time. I have a deployed server that has no reason to log the finest detail when it's serving production traffic, but I have to be able to turn logging on when something goes wrong. Perhaps you just don't work on the type of applications for which this is necessary, but it's not a superfluous feature.
Kai
Fair enough. That's actually a scenario that I have some experience with...but the difference is that the compile time in the cases I deal with is < 2 min. I know I have to restart the server if I change the logging config...recompiling doesn't seem like such a big deal to me in light of that.
David Berger
+4  A: 

Keep your business logic out of the DB. Or at a minimum, keep it very lean. Let the DB do what it's intended to do. Let code do what code is intended to do. Period.

If you're a one man show (basically, arrogant & egotistical, not listening to the wisdom of others just because you're in control), do as you wish. I don't believe you're that way since you're asking to begin with. But I've met a few when it comes to this subject and felt the need to specify.

If you work with DBAs but do your own DB work, keep clearly defined partitions between your business objects, the gateway between them and the DB, and the DB itself.

If you work with DBAs and aren't allowed to do your own DB work (either by policy or because they're prima donnas), you're very close to being a fool, placing your reliance on them to get anything done by putting code-dependent business logic in your DB entities (sprocs, functions, etc.).

If you're a DBA, make developers keep their DB entities clean & lean.

Boydski
I'm keeping my fingers crossed I don't have to work with you, or ever maintain your leavings. Being a one man show should be an incentive to do as well as possible--because without other developers to cross-check your work you are already predisposed to writing strange and queer code.
STW
As for the database: if your database is just a bucket that holds anything, then I agree that business logic has no place there (SQLite is a great DB for these systems) -- however, if you are holding business data in the database, then it is ultimately the DB's responsibility to ensure that its contents are valid. This is never more true than in cases where a database is consumed or maintained by multiple clients.
STW
Boydski
And sorry, but I disagree with your last statement. It's not up to the DB to validate data beyond the relational-database theory of data retention. It "can", but ultimately it's up to those placing the data there. Most enterprise orgs don't allow their devs the DBA hat. The DBAs make sure things are run properly according to standards and know nothing of the business behind the data.
Boydski
In those cases, business logic should predominantly be kept where it can be controlled by those who know the business logic: in front of the DAL and out of the database.
Boydski
@Boydski: If the integrity of data is spread across each application that accesses the data, I wish good luck to your clients/employer. A DB **Architect** necessarily needs to know intimately about the business, a DB **Administrator** not.
MaD70
Agreed. However, most enterprises would be lucky to get an architect vs. an admin. Why? Because the enterprise just won't loosen the purse strings enough to pay those folks what they're worth to keep them around long enough to have a vested interest and thereby become intimate with the business. So they end up with DB admins who aren't as interested in the business as they are in RDBMS principles.
Boydski
It's not unusual in an Oracle shop to have large parts of the application inside the DB. PL/SQL is actually a good language to express business logic.
ammoQ
I'm actually an Oracle n00b (slightly more than a year now). And I'm finding out that PL/SQL is much different than SQL Server in a lot of ways. So my paradigm is slowly shifting concerning your comment. At least where Oracle is concerned. However, even in the shop I'm at now, there's minimal BL and it all resides in packages. I'd be very curious to see how performance is affected by tens of millions of transactions per day.
Boydski
+2  A: 

MS Access* is a Real Development Tool and it can be used without shame by professional programmers

Just because a particular platform is a magnet for hacks and secretaries who think they are programmers shouldn't besmirch the platform itself. Every platform has its benefits and drawbacks.

Programmers who bemoan certain platforms or tools or belittle them as "toys" are more likely to be far less knowledgeable about their craft than their ego has convinced them they are. It is a definite sign of overconfidence for me to hear a programmer bash any environment that they have not personally used extensively enough to know well.

* Insert just about any maligned tool (VB, PHP, etc.) here.

JohnFx
I agree by proxy... a former colleague manipulated and massaged an Access-backed production system into a highly efficient system that was perfectly suited for its needs. Although with the availability of other desktop-based DB platforms such as SQL Compact (aka SQL Compact Edition, aka SQL Mobile), Access is becoming more of a developer's occasional assistant than his tool. It's kind of like a toothpick--developers can use it, and maybe even use it professionally (give me back my CD!)...
STW
I do have to disagree about PHP, at least in the pre/early ASP.NET days. PHP was a very valid competitor to classic ASP, and it wasn't until ASP.NET came along and IIS 6 was released that PHP began to lose ground. LAMP blew away IIS/ASP in my opinion, and judging by the dominance of Apache servers running the web, I would say the internet would more or less agree.
STW
I got one agree and one disagree comment. I should at least get an upvote for that since the OP wanted controversial opinions. =)
JohnFx
Disagree. I decided not to use it when I was 11, because autonumber s..ks.
Behrooz
Ok, I agree, but they really should standardize the SQL. It's crap having to work with Access-specific queries.
JL
@JL: Access SQL is a superset of ANSI SQL as far as I know. So you don't HAVE to use Access specific SQL.
JohnFx
+17  A: 

All source code and comments should be written in English

Writing source code and/or comments in languages other than English makes it less reusable and more difficult to debug if you don't understand the language they are written in.

Same goes for SQL tables, views, and columns, especially when abbreviations are used. If they aren't abbreviated, I might be able to translate the table/column name on-line, but if they're abbreviated all I can do is SELECT and try to decipher the results.

Scott
If English is the main language of wherever you work, I guess. Otherwise, that's just stupid. This suggestion seems pointless imo.
Coding With Style
Especially when you code ABAP in SAP systems, it's always funny to read some German comments that nobody outside German-speaking regions will ever understand. (I'm a native German speaker, so it's doubly funny.)
capfu
All comments in English is great - if you speak English, and the maintainers will as well. I am a native English speaker, but occasionally plop other languages in just because I can. If I were coding for an app that would be used and eventually maintained in, say, France, I'd expect the comments to be in French.
warren
Using multiple languages in code makes it harder to read as you have to switch between the two languages in your head. English only (with native terms if needed in parenthesis).
Thorbjørn Ravn Andersen
That's not controversial, it's simply idiotic when you know that a piece of code will never leave a non-English-speaking country. I know perfectly well that my English sucks and I don't want to inflict it on my fellow countrymen. Of course, if I'm quoting documentation in English I don't translate it.
MaD70
This only makes sense for open source applications where you expect (or hope) to get a number of people from all over the place to help. Otherwise just use whatever language suits you best.
hasen j
You guys may not intend for your code to leave your country, but none of us can see the future. Our ERP system is written half in Dutch and half in English because a Dutch company bought an American company and rolled two different products into one. How can I be expected to know what gbkmut means?
Scott
+3  A: 

Enable multiple checkout. If we improve the discipline of the developers enough, we will get much more efficiency from this setting via source control's auto-merge.

sesame
If you have enough discipline you don't need source control. But nobody can have that much discipline.
GoatRider
The main ability of source control is to return to any earlier state of the design. A medium or large software project today cannot run without it. Enabling the multiple checkout option is just my preferred little setting.
sesame
+12  A: 

A programming task is only fun while it's impossible - that is, up until the point where you've convinced yourself you'll be able to solve it successfully.

This, I suppose, is why so many of my projects end up halfway finished in a folder called "to_be_continued".

Banang
+7  A: 

My controversial opinion: OO Programming is vastly overrated [and treated like a silver bullet], when it is really just another tool in the toolbox, nothing more!

torial
+7  A: 

Developers overuse databases

All too often, developers store data in a DBMS that should be in code or in file(s). I've seen a one-column-one-row table that stored the 'system password' (separate from the user table.) I've seen constants stored in databases. I've seen databases that would make a grown coder cry.

There is some sort of mystical awe that the offending coders have of the DBMS--the database can do anything, but they don't know how it works. DBAs practice a black art. It also allows responsibility transference: "The database is too slow," "The database did it" and other excuses are common.

Left unchecked, these coders go on to develop databases-within-databases, systems-within-systems. (There is a name for this anti-pattern, but I forget what it is.)
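
To make one of those examples concrete: a value that never changes at runtime arguably belongs in code, not in a one-row table. A hypothetical sketch:

// Versioned with the source, checked at compile time, no query round-trip.
public final class Limits {
    private Limits() {}
    public static final int MAX_LOGIN_ATTEMPTS = 5;      // made-up business constant
    public static final String SYSTEM_NAME = "billing";  // ditto
}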

Stu Thompson
Guessing you are talking about EAV (Entity Attribute Value) database design, which has been the bane of my life for about a year now :)
spooner
+7  A: 

Most programming job interview questions are pointless - especially those thought up by programmers.

It is a common case, at least in my and my friends' experience, that a puffed-up programmer asks you some tricky WTF he spent weeks googling for. The funny thing is, you get home and google it within a minute. It's like they often try to beat you up with their sophisticated weapons instead of checking whether you'd be a well-rounded, pragmatic team player to work with.

Similar stupidity IMO is when you're being asked for highly accessible fundamentals, like: "Oh wait, let me see if you can pseudo-code that insert_name_here-algorithm on a sheet of paper (sic!)". Do I really need to remember it while applying for a high-level programming job? Should I efficiently solve problems or puzzles?

ohnoes
+1, fully agree. It's also usually the case that during the interview they check to see if you are the rocket scientist they require, asking you all sorts of rough questions. Then when you get the job, you realize what they were actually after was a coding monkey who shouldn't get too involved in business decisions. I know this is not always the case, but usually the work you end up doing is very easy compared to the interview process, where you would think they were looking for someone to develop organic rocket fuel.
JL
You say it like it is bro'
Seventh Element
+5  A: 

BAD IDEs make the programming language weak

Good programming IDEs really make working with certain languages easier and better to oversee. I have been a bit spoiled in my professional career; the companies I worked for always had the latest Visual Studio ready to use.

For about 8 months, I have been doing a lot of Cocoa next to my work, and the Xcode editor makes working with that language just way too difficult. Overloads are difficult to find, and the overall way of handling open files makes your screen really messy, really fast. It's really a shame, because Cocoa is a cool and powerful language to work with.

Of course die-hard Xcode fans will now vote down my post, but there are so many IDEs that are really a lot better.

People making a switch to IT, who just shouldn't

This is a copy/paste from a blog post of mine, made last year.


The experiences I have are mainly about the dutch market, but they also might apply to any other market.

We (as I group all Software Engineers together) are currently in a market that might look very good for us. Companies are desperately trying to get Software Engineers (from now on: SE), no matter the price. If you switch jobs now, you can demand almost anything you want. In the Netherlands there is now a trend of even giving two lease cars with a job, just to get you to work for them. How weird is that? How am I gonna drive 2 cars at the same time??

Of course this sounds very good for us, but this also creates a very unhealthy situation..

For example: if you are currently working for a company which is growing fast and you are trying to attract more co-workers to finally get some serious software development off the ground, there is no one to be found without offering sky-high salaries. Trying to find quality co-workers is very hard. A lot of people are attracted to our kind of work because of the good salaries, but this also means that a lot of people without the right passion are entering our market.

Passion, yes, I think that is the right word. When you have passion for your job, your job won’t stop at 05:00 PM. You will keep refreshing all of your development RSS feeds all night. You will search the internet for the latest technologies that might be interesting to use at work. And you will start about a dozen new ‘promising’ projects a month, just to see if you can master that latest technology you read about a couple of weeks ago (and find a useful way of actually using it).

Without that passion, the market might look very nice (because of the cars, money and of course the hot girls we attract), but I don’t think the work will stay interesting for very long, compared to being, let’s say, a fireman or fighter pilot.

It might sound like I am trying to protect my own job here, and partly that is true. But I am also trying to protect myself against the people I don’t want to work with. I want to have heated discussions about stuff I read about. I want to be able to spar with people that have the same ‘passion’ for the job as I have. I want colleagues that are working with me for the right reasons.

Where are those people I am looking for!!

Wim Haanstra
Cocoa isn't a language - it's an API http://en.wikipedia.org/wiki/Cocoa_(API) in Objective-C
warren
I use one IDE for every language.... gvim.
Chisum
+1  A: 

There are only 2 kinds of people who use C (/C++): Those who don't know any other language, and those who are too lazy to learn a new one.

I worked with a guy who was doing C/C++ for 15 years and flat out refused to learn anything else. He considered anything other than C/C++ to be a child's toy, which included any managed framework like .NET or Java and any web-related technology. Therefore the only thing he could program was Win32 desktop applications. And he made it very clear he's not going to learn anything new and will be doing C/C++ until his retirement.
WebMatrix
And those who do real, interesting work like kernel development..
nosatalian
And those who feel C++ is still the best way despite knowing Java and C#.
luiscubal
And those who work with embedded systems.
Jeanne Pindar
+3  A: 

Schooling ruins creativity *

*"Ruins" means "potentially ruins"

Granted, schooling is needed! Everyone needs to learn stuff before they can use it - however, all those great ideas you had about how to do a certain strategy for a specific business-field can easily be thrown into that deep brain-void of ours if we aren't careful.

As you learn new things and acquire new skills, you are also boxing your mindset in around those new things and skills, since they apparently are "the way to do it". Being humans, we tend to listen to authorities - be it a teacher, a consultant, a co-worker or even a site/forum you like. We should ALWAYS be aware of that "flaw" in how our minds work. Listen to what other people say, but don't take what they say for granted. Always keep a critical point of view on every new piece of information you receive.

Instead of thinking "Wow, that's smart. I will use that from now on", we should think "Wow, that's smart. Now, how can I use that in my personal toolbox of skills and ideas".

cwap
+5  A: 

Commenting is bad

Whenever code needs comments to explain what it is doing, the code is too complicated. I try to always write code that is self-explanatory enough to not need very many comments.

Zifre
I was going to vote this down, but then I realized these are SUPPOSED to be controversial, and voted it up.
GoatRider
I don't think good code replaces comments any more than comments replace good code. You have to do both. Plus, these days there's a half decent chance that your comments might well be generating the documentation (and IntelliSense) so you'd better get used to adding those comments!
Tim Long
+11  A: 

I don't know if it's really controversial, but how about this: method and function names are the best kind of commentary your code can have; if you find yourself writing a comment, turn the piece of code you're commenting into a function/method.

Doing this has the pleasant side-effect of forcing you to decompose your program well, avoids having comments that can quickly become out of sync with reality, gives you something you can grep the codebase for, and leaves your code with a fresh lemon odour.
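
A tiny made-up illustration of the habit:

class PortCheck {
    // Before: if (port >= 1024 && port <= 65535) { ... } // usable port?
    // After: the comment becomes a name you can call, test, and grep for.
    static boolean isUsablePort(int port) {
        return port >= 1024 && port <= 65535;
    }
}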

Keith Gaughan
This can be taken too far. Often there is a subtle business case for a particular method or implementation strategy that you cannot convey without several lines of comments.
Tom Leys
Quite true, but it's a rule of thumb rather than a hard rule. Indicating subtleties is, after all, what comments are best used for.
Keith Gaughan
+5  A: 

HTML 5 + JavaScript will be the most used UI programming platform of the future. Flash, Silverlight, Java Applets, etc. are all going to die a silent death.

HashName
I don't think so, but I sure hope so!
Zifre
Even Opera's CEO thinks so: http://news.zdnet.co.uk/software/0,1000000121,39655473,00.htm
HashName
I think that the death of flash will be noisy. :-)
Warren P
I wouldn't shed a tear for Silverlight; ActiveX is another one that needs to die!
JL
+12  A: 

Programming is in its infancy.

Even though programming languages and methodologies have been evolving very quickly for years now, we still have a long way to go. The signs are clear:

  1. Language Documentation is spread haphazardly across the internet (stackoverflow is helping here).

  2. Languages cannot evolve syntactically without breaking prior versions.

  3. Debugging is still often done with printf.

  4. Language libraries or other forms of large scale code reuse are still pretty rare.

Clearly all of these are improving, but it would be nice if we could all agree that this is the beginning and not the end. =)

Evan Moran
I have upvoted it although I believe this is completely uncontroversial to anyone who knows a *minimum* about programming methodology and history. We've got a long road ahead, hence the many insulting jokes about programmers’ abilities compared to architects, airplane pilots etc.
Konrad Rudolph
Actually there are many who would say the opposite: everything interesting to do with programming languages was done in the '60s with Lisp. We are just waiting for people to figure this out - witness the growing popularity of closures in Python/Java, etc. So this _is_ controversial.
nosatalian
printf debugging is actually mentioned on a higher-rated comment in this thread as being a good idea
OJW
+5  A: 

Nobody Cares About Your Code

If you don't work on a government security clearance project and you're not in finance, odds are nobody cares what you're working on outside of your company/customer base. No one's sniffing packets or trying to hack into your machine to read your source code. This doesn't mean we should be flippant about security, because there are certainly a number of people who just want to wreak general havoc and destroy your hard work, or access stored information your company may have such as credit card data or identity data in bulk. However, I think people are overly concerned about other people getting access to your source code and taking your ideas.

brokenbeatnik
Hmmm, so basically you've combined "don't take yourself so seriously, nobody else does" with "it's not the implementation that is valuable but the idea".
STW
...and I forget "why lock the door, if someone wants to break in it's one more thing to have to replace"
STW
I disagree with your assessment. To follow your analogy, it's more like thinking someone wants to break into your house to steal some timbers out of or take pictures of your collection of model ships that you painstakingly built because the finished ships might be valuable on the open market. If they bother to break in, they'd much rather just take your cash or TV. My third sentence clearly states that I think security is still important, just for different reasons.
brokenbeatnik
+3  A: 

It takes less time to produce well-documented code than poorly-documented code

When I say well-documented I mean with comments that communicate your intention clearly at every step. Yes, typing comments takes some time. And yes, your coworkers should all be smart enough to figure out what you intended just by reading your descriptive function and variable names and spelunking their way through all your executable statements. But it takes more of their time to do it than if you had just explained your intentions, and clear documentation is especially helpful when the logic of the code turns out to be wrong. Not that your code would ever be wrong...

I firmly believe that if you time it from when you start a project to when you ship a defect-free product, writing well-documented code takes less time. For one thing, having to explain clearly what you're doing forces you to think it through clearly, and if you can't write a clear, concise explanation of what your code is accomplishing then it's probably not designed well. And for another purely selfish reason, well-documented and well-structured code is far easier to dump onto someone else to maintain - thus freeing the original author to go create the next big thing. I rarely if ever have to stop what I'm doing to explain how my code was meant to work because it's blatantly obvious to anyone who can read English (even if they can't read C/C++/C# etc.). And one more reason is, frankly, my memory just isn't that good! I can't recall what I had for breakfast yesterday, much less what I was thinking when I wrote code a month or a year ago. Perhaps your memory is far better than mine, but because I document my intentions I can quickly pick up wherever I left off and make changes without having to first figure out what I was thinking when I wrote it.

That's why I document well - not because I feel some noble calling to produce pretty code fit for display, and not because I'm a purist, but simply because end-to-end it lets me ship quality software in less time.
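
To give a flavor of the intention-communicating comments meant here, a small made-up example:

class BatchRules {
    static final int MIN_BATCH = 10; // hypothetical vendor minimum

    // Orders below the minimum are padded rather than rejected, per a
    // (made-up) agreement with the fulfillment vendor. The code says what
    // happens; the comment records why.
    static int normalize(int quantity) {
        return Math.max(quantity, MIN_BATCH);
    }
}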

JayMcClellan
It does indeed take less time, but it also takes more skill - as is the case with all languages, whether spoken or written. Words do not inherently have meaning; rather, they have meanings that individuals associate with them. If I were to comment that "cows are retromingent", the huge majority of people would not know what I meant, whereas saying "when cows pee it goes backwards" would give a better understanding. I comment in less time by typing faster :-D
STW
..and my apologies for that comment, but that term is one of the few things I learned in college so I have to jump at the opportunity to use it. Time for me to masticate.
STW
+2  A: 

...That the "clarification of ideas" should not be the sole responsibility of the developer...and yes xkcd made me use that specific phrase...

Too often we are handed projects that are specified in pseudo-meta-sorta-kinda-specific "code", if you want to call it that. There are often product managers who draw up the initial requirements for a project and perform next to 0% of basic logic validation.

I'm not saying that the technical approach shouldn't be drawn up by the architect, or that the specific implementation shouldn't be the responsibility of the developer, but rather that it should be the product manager's job to ensure that their requirements are logically feasible.

Personally I've been involved in too many "simple" projects that encounter a little scope creep here and there and then come across a "small" change or feature addition which contradicts previous requirements--whether implicitly or explicitly. In these cases it is all too easy for the person requesting the borderline-impossible change to become enraged that developers can't make their dream a reality.

STW
+18  A: 

A Good Programmer Hates Coding

Similar to "A Good Programmer is a Lazy Programmer" and "Less Code is Better." But by following this philosophy, I have managed to write applications which might otherwise use several times as much code (and take several times as much development time). In short: think before you code. Most of the parts of my own programs which end up causing problems later were parts that I actually enjoyed coding, and thus had too much code, and thus were poorly written. Just like this paragraph.

A Good Programmer is a Designer

I've found that programming uses the same concepts as design (as in, the same design concepts used in art). I'm not sure most other programmers find the same thing to be true; maybe it is a right brain/left brain thing. Too many programs out there are ugly, from their code to their command line user interface to their graphical user interface, and it is clear that the designers of these programs were not, in fact, designers.

Although correlation may not, in this case, imply causation, I've noticed that as I've become better at design, I've become better at coding. The same process of making things fit and feel right can and should be used in both places. If code doesn't feel right, it will cause problems because either a) it is not right, or b) you'll assume it works in a way that "feels right" later, and it will then again be not right.

Art and code are not on opposite ends of the spectrum; code can be used in art, and can itself be a form of art.

Disclaimer: Not all of my code is pretty or "right," unfortunately.

Definitely agree! Making beautiful applications requires beautiful code.
Matt
Only just seen this: agreed 100%. Ugly code is far more likely to be buggy. An appreciation of elegance and beauty is essential to good coding.
Keith Williams
+7  A: 

Tools, Methodology, Patterns, Frameworks, etc. are no substitute for a properly trained programmer

I'm sick and tired of dealing with people (mostly managers) who think that the latest tool, methodology, pattern or framework is a silver bullet that will eliminate the need for hiring experienced developers to write their software. Although, as a consultant who makes a living rescuing at-risk projects, I shouldn't complain.

Jeff Leonard
I will second "Thou Shalt Not Complain". Those who manage based on idealistic expedience and feel-good tools always find themselves in trouble like this. Unfortunately, I have noticed that it doesn't matter how many times you deliver the reality that you need good people: the bottom-line bean counters always try to find the cheap/easy way out. In the end they always have to pony up the money - either to get it done correctly the first time, or to have it fixed properly by someone who charges a premium, sometimes far in excess of the cost of doing it right the first time.
Axxmasterr
+1  A: 

"else" is harmful.

Blank Xavier
What do you propose as an alternative?
Matt Grande
A series of if()s, each fully enumerating the situation where it applies. An else, of course, only states its condition implicitly; the reader has to maintain state in his own head, and there's a pretty low limit before people are overwhelmed and start getting it wrong. Another way is to have if()s which set a state variable, and then switch on that state.
Blank Xavier
Another issue here is reading code from top to bottom. Multiple elses, especially with chunks of code of any size in them, require the reader to scoot up and down the code, matching elses to ifs. I find it immeasurably better to have a purely linear flow of code.
Blank Xavier
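
A minimal sketch (hypothetical example) of that linear, else-free style, where each branch states its condition and exits in one pass:

static String classify(int n) {
    if (n < 0) { return "negative"; }
    if (n == 0) { return "zero"; }
    return "positive"; // only one case left; no else to match up
}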
Consider `if(a) a = false; else print("x");` and `if(a) a = false; if(!a) print("x");` - they are not the same thing. As for issues with understanding code, I believe proper indentation solves most of the problems.
luiscubal
+1.. interesting point.. At university I was taught to -always- write an else statement explicitly to improve testability (an omitted else clause should be considered as executing a null statement in code path analysis), but I agree that for readability it can sometimes indeed be better to refactor.
Wouter van Nifterick
@Wouter van Nifterick: I have to criticize that point of view too, however. If the if statement has a return statement, then you can just write code outside the if stuff.
luiscubal
-1, else is valid for controlling program flow - excuse the pun, but how ELSE are you going to do it?
JL
Please see the second comment. With regard to your point about program flow, all you've stated is what else is to be used for. That does not indicate in any way whether or not it is harmful. GOTO is also intended for controlling program flow. It has its place (infinite loops), but beyond that, overuse is harmful.
Blank Xavier
+1  A: 

That the Law of Demeter, considered in context of aggregation and composition, is an anti-pattern.

chaos
How can this be controversial when most of us didn't understand what you meant. :-)
Warren P
+1  A: 

I am of the opinion that there are too many people making programming decisions that shouldn't be worried about implementation.

Andrew Sledge
+13  A: 

If you want to write good software then step away from your computer

Go and hang out with the end users and the people who want and need the software. Only from them will you understand what your software needs to accomplish and how it needs to do that.

  • Ask them what they love & hate about the existing processes.
  • Ask them about the future of their processes, where it is headed.
  • Hang out and see what they use now and figure out their usage patterns. You need to meet and match their usage expectations. See what else they use a lot, particularly if they like it and can use it efficiently. Match that.

The end user doesn't give a rat's how elegant your code is or what language it's in. If it works for them and they like using it, you win. If it doesn't make their lives easier and better - they hate it, you lose.

Walk a mile in their shoes - then go write your code.

CAD bloke
Great answer - I always try to do this... but sometimes you got to protect users from their own ideas. Because e.g. in business software (financial) I always encounter some users with the tendency to wish for "creative bookkeeping". Hehe.
capfu
This is why I love being a domain expert. For my whole career I've worked alongside people who use the type of software I write.
Jeanne Pindar
@Jeanne: Ditto - my major project is based on what I do for a living. I do a lot of talking to myself.
CAD bloke
+3  A: 

Never let best practices or pattern obsession enslave you.

These should be guidelines, not laws set in stone.

And I really like the patterns, and the GoF book more or less says it that way too: stuff to browse through, providing a common jargon. Not a ball-and-chain gospel.

Marco van de Voort
+10  A: 

Coding is an Art

Some people think coding is an art, and others think coding is a science.

The "science" faction argues that as the target is to obtain the optimal code for a situation, then coding is the science of studying how to obtain this optimal.

The "art" faction argues there are many ways to obtain the optimal code for a situation, the process is full of subjectivity, and that to choose wisely based on your own skills and experience is an art.

Jonathan
Electronics designers will always tell you that designing electronic circuits is 'an imprecise science'. I think the opposite is true of constructing computer programs - it is an exact art. I think this partly because I don't know where my programming ability comes from. I sit at the keyboard and "it just happens". I'm not following any rules or processes when I write code, therefore it is an art. But whatever I write has to be exactly right, or it will not work. Hence, it is an exact art.
Tim Long
A: 

If it's not native, it's not really programming

By definition, a program is an entity that is run by the computer. It talks directly to the CPU and the OS. Code that does not talk directly to the CPU and the OS, but is instead run by some other program that does talk directly to the CPU and the OS, is not a program; it's a script.

This was just simple common sense, completely non-controversial, back before Java came out. Suddenly there was a scripting language with a large enough feature set to accomplish tasks that had previously been exclusively the domain of programs. In response, Microsoft developed the .NET framework and some scripting languages to run on it, and managed to muddy the waters further by slowly reducing support for true programming among their development tools in favor of .NET scripting.

Even though it can accomplish a lot of things that you previously had to write programs for, managed code of any variety is still scripting, not programming, and "programs" written in it do and always will share the performance characteristics of scripts: they run more slowly and use up far more RAM than a real (native) program would take to accomplish the same task.

People calling it programming are doing everyone a disservice by dumbing down the definition. It leads to lower quality across the board. If you try and make programming so easy that any idiot can do it, what you end up with are a whole lot of idiots who think they can program.

Mason Wheeler
This sounds like argumentative nonsense to me. Suppose I compile a program which satisfies your definition... but then run it in VMWare or something like that. Does that make it a "script" because it's running virtually? Of course not. Likewise if you're dismissing Java as "not programming" would your view change if at any point anyone brought out a "Java CPU" (if such things don't exist already)?Yes, there are plenty of arguments for not trying to "dumb down" programming too much - but the way you're putting it takes that *much* too far.
Jon Skeet
With all due respect for you and your obvious intelligence, I have to disagree. A VM is just an abstraction of the hardware. The program is still capable of running directly on the hardware and talking to it. By contrast, if someone built a Java CPU, you still wouldn't be able to write an OS or device drivers for it in Java. (No pointers, etc.)
Mason Wheeler
So it would have to be able to do *more* than just Java - but it would still be able to execute Java code natively. Would all the "non-programmers" in the world who are currently writing Java suddenly become programmers in your view? Sorry, I still can't see this as a sensible or useful delineation at all.
Jon Skeet
You might also want to try convincing the Wikipedians, who certainly include scripts as programs, even leaving aside the question of whether Java is a script or not: http://en.wikipedia.org/wiki/Computer_program
Jon Skeet
I seem to remember that UCSD Pascal compiled to p-code, which was then interpreted, but Pascal has certainly always been considered a programming language and not a scripting language. The college I was at also had something called a Pascal Microengine, which could execute p-code natively. So the distinction is somewhat arbitrary and defies definition.
Tim Long
"Execute p-code natively" is an oxy-moron.
kmarsh
Ever heard of gcj?
luiscubal
Gee, a Delphi programmer ridiculing code that runs on a framework! What a surprise! Self-deluded, elitist crap.
Ash
Delphi has a framework too. It's called the VCL. The difference is that it's native code, and it tends to add a few hundred kilobytes to your application, as opposed to .NET, which adds a few hundred MEGABYTES of dependencies.
Mason Wheeler
also, what about the Burroughs machines that ran COBOL natively as their assembly language?
warren
www.ajile.com. Hardware CPU runs java natively, realtime with direct access to the hardware.
Tim Williscroft
+3  A: 

Sometimes it's okay to use regexes to extract something from HTML. Seriously, wrangle with an obtuse parser, or use a quick regex like /<a href="([^"]+)">/? It's not perfect, but your software will be up and running much quicker, and you can probably use yet another regex to verify that the match that was extracted is something that actually looks like a URL. Sure, it's hacky, and probably fails on several edge-cases, but it's good enough for most usage.
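
For instance, a quick-and-dirty link extractor along exactly those lines might look like this in Java - knowingly imperfect, but up and running in minutes:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class LinkScraper {
    // Hacky on purpose: misses single-quoted hrefs, extra attributes, etc.
    private static final Pattern HREF = Pattern.compile("<a href=\"([^\"]+)\">");

    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<String>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1)); // the captured URL-ish text
        }
        return links;
    }
}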

Based on the massive volume of "How use regex get HTML?" questions that get posted here almost daily, and the fact that every answer is "Use an HTML parser", this should be controversial enough.

Chris Lutz
+3  A: 

Cleanup and refactoring are very important in (team) development

A lot of work in team development has to do with management. A bug tracker is only useful if someone takes the time to close and structure things and lower the number of tickets. If you are using source code management, somebody needs to clean up and restructure the repository quite often. If you are programming, there should be people who care about refactoring the lazily produced stuff of others. This kind of upkeep is part of almost every aspect of software development.

Everybody agrees to the necessity of this kind of management. And it is always the first thing that is skipped!

Norbert Hartl
+1 - but this isn't controversial, you even noted that everybody agrees :)
SnOrfus
+5  A: 

It's not the tools, it's you

Whenever developers try to do something new - UML diagrams, charts of any sort, project management - they first look for the perfect tool to solve the problem. After endless searching without finding the right tool, their motivation starves. All that is left then is complaining about the lack of usable software: the plan to get organized died in the absence of a piece of software.

Well, organization is only ever up to you. If you are used to organizing, you can do it with or without the aid of software (and most do without). If you are not used to organizing, nobody can help you.

So "not having the right software" is just the simplest excuse for not being organized at all.

Norbert Hartl
not really "controversial" - but useful :)
warren
I think this is true in spite of people agreeing with it (figure that out). I make a pest of myself telling people that to do performance tuning you do not need a tool, in fact you may do better without one.
Mike Dunlavey
+3  A: 
  1. Good architecture is grown, not designed.

  2. Managers should make sure their team members always work below their state of the art, whatever that level is. When people work within their comfort zone they produce higher quality code.

zvolkov
But if you never try to do something different, you'll never expand your comfort zone. I found getting out of my "comfort zone" to be quite enjoyable (though certainly not productive; it *is* needed, sometimes).
luiscubal
Upvoted for first part, downvoted for second.
iandisme
+2  A: 

switch-case is not object oriented programming

I often see a lot of switch-case or awful big if-else constructs. This is merely a sign of not putting state where it belongs, and of not using the real and efficient switch-case construct that is already there: method lookup via the vtable.
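
A minimal sketch of the alternative (a hypothetical shapes example): each class carries its own behavior, and method dispatch does the branching:

// Instead of: switch (shape.kind) { case CIRCLE: ...; case SQUARE: ...; }
interface Shape {
    double area(); // the vtable picks the right "case" for us
}

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}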

Norbert Hartl
+2  A: 

To be really controversial:

You know nothing!

or in other words:

I know that I know nothing.

(This could be paraphrased in many ways, but I think you get it.)

When starting with computers/developing, IMHO there are three stages everyone has to walk through:

The newbie: knows nothing (this is fact)

The intermediate: thinks he knows something/very much(/all) (this is conceit)

The professional: knows that he knows nothing (because as a programmer, most of the time you have to work on things you have never done before). This is no bad thing: I love familiarizing myself with new things all the time.

I think as a programmer you have to know how to learn - or better: To learn to learn (because remember: You know nothing! ;)).

Inno
Strange logic. I agree: be humble and learn. But to say you know nothing would just be silly.
JL
+8  A: 

Sometimes you have to denormalize your databases.

An opinion that doesn't go over well with most programmers, but sometimes you have to sacrifice things like normalization for performance.

Artem Russakovskii
Hardware is cheap - logic costs a fortune.
SnOrfus
Corollary: most of the time, performance suffers with less than 5NF.
just somebody
+4  A: 

Getting paid to program is generally one of the worst uses of a man's time.

For one thing, you're in competition with the Elbonians, who work for a quarter a day. You need to convince your employer that you offer something the Elbonians never can, and that your something is worth a livable salary. As the Elbonians get more and more overseas business, the real advantage wears thin, and management knows it.

For another thing, you're spending time solving someone else's problems. That's time you could spend advancing your own interests, or working on problems that actually interest you. And if you think you're saving the world by working on the problems of other men, then why don't you just get the Elbonians to do it for you?

Last, the great innovations in software (VisiCalc, Napster, Pascal, etc.) were not created by cubicle farms. They were created by one or two people without advance pay. You can't forcibly recreate that. It's just magic that sometimes happens when a competent programmer has a really good idea.

There is enough software. There are enough software developers. You don't have to be one for hire. Save your talents, your time, your hair, your marriage. Let someone else sell his soul to the keyboard. If you want to program, fine. But don't do it for the money.

Ian
Strange logic there. Hmmmm.
Dana Holt
> "Last, the great innovations in software (visicalc, Napster, Pascal, etc)" - so many examples to the contrary that I won't even start. Bell labs to name just one location. But if I read between the lines well then I agree with you: you need a new job.
SnOrfus
+1. controversial but interesting view (at least the 2 commenters above don't seem to agree). Ian makes some good points if you ask me.
Wouter van Nifterick
+5  A: 

Reflection has no place in production code

Reflection breaks static analysis including refactoring tools and static type checking. Reflection also breaks the normal assumptions developers have about code. For example: adding a method to a class (that doesn't shadow some other method in the class) should never have any effect, but when reflection is being used, some other piece of code may "discover" the new method and decide to call it. Actually determining if such code exists is intractable.

I do think it's fine to use reflection in tests and in code generators.

Yes, this does mean that I try to avoid frameworks that use reflection. (It's too bad that Java lacks proper compile-time metaprogramming support.)
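
To illustrate the kind of invisible dependency meant here, a small java.lang.reflect sketch - there is no compile-time reference to the method, so refactoring tools cannot see it:

import java.lang.reflect.Method;

class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        // Rename toUpperCase and the compiler stays silent; this fails only at runtime.
        Method m = String.class.getMethod("toUpperCase");
        System.out.println(m.invoke("quiet dependency"));
    }
}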

Laurence Gonsalves
Wouldn't this negate the possibility of developing an application that supports 3rd party plugins?
SnOrfus
You're right, I should have been more clear. When I said "reflection" I meant java.lang.reflect. For plug-ins you just need Class.forName() and Class.newInstance(). I still consider the latter a "bad smell" (it's overused) but if you're implementing a system with third-party plugins then that's the way to do it.
Laurence Gonsalves
+14  A: 

A Developer should never test their own software

Development and testing are two diametrically opposed disciplines. Development is all about construction, and testing is all about demolition. Effective testing requires a specific mindset and approach where you are trying to uncover developer mistakes, find holes in their assumptions, and flaws in their logic. Most people, myself included, are simply unable to place themselves and their own code under such scrutiny and still remain objective.

Bruce McLeod
Do you include unit testing in that? Do you not see any value in unit testing? If so, I don't agree. I would agree that a developer shouldn't be the *only* tester of their software (where possible, of course).
Jon Skeet
Jon, I am talking from the point of view that yes, they SHOULD do unit testing, but no, they should NOT be the only tester of their code. As you rightly point out, if they are the only one then they don't have much choice. This question did ask for your most controversial opinion, so I think that mine is right up there. The other key point is that the "we don't need no stinking testers, 'cause the devs or anyone can just do it" attitude is completely wrong as well.
Bruce McLeod
I suggest you reword the rule to "should never be RESPONSIBLE for testing their own software", as your current wording could imply you were not allowed to test your programs at all.
Thorbjørn Ravn Andersen
+3  A: 

Many developers have an underdeveloped sense of where to put things, resulting in messy source code organization at the file, class, and method level. Further, a sizable percentage of such developers are essentially tone-deaf to issues of code organization. Attempts to teach, cajole, threaten, or shame them into keeping their code clean are futile.

On any sufficiently successful project, there's usually a developer who does have a good sense of organization very quietly wielding a broom to the code base to keep entropy at bay.

Dave W. Smith
+3  A: 

My controversial opinion is probably that John Carmack (id Software, Quake, etc.) is not a very good programmer.

Don't get me wrong, he's a very smart programmer in my opinion, but after I noticed the line "#define private public" in the Quake source code I couldn't help thinking he's a guy who gets the job done no matter what - but by my definition not a good programmer :) This opinion has gotten me into a lot of heated discussions though ;)

Led
If true then I'd be inclined to agree. That looks pretty bad.
spender
I don't know many programs that are this performance-optimized, dealing with graphics and sound and everything, (somewhat) platform-independent, and still **as stable** as Doom and Quake and everything else produced by id Software. Really. I wish all software were made like this. Even the usability is great.
Stefan Steinegger
+3  A: 

Software is not an engineering discipline.

We never should have let the computers escape from the math department.

james woodyatt
+8  A: 

Girls can't code.

samuil
Not controversial enough. This one is old enough by now that if you mention it people will just look at you funny and move along.
Coding With Style
+2  A: 

Design patterns are bad.

Actually, design patterns aren't.

You can write bad code, and bury it under a pile of patterns. Use singletons as global variables, and states as gotos. Whatever.

A design pattern is a standard solution for a particular problem, but requires you to understand the problem first. If you don't, design patterns become a part of the problem for the next developer.

G B
+7  A: 

The simplest approach is the best approach

Programmers like to solve assumed or inferred requirements that add levels of complexity to a solution.

"I assume this block of code is going to be a performance bottleneck, therefore I will add all this extra code to mitigate this problem."

"I assume the user is going to want to do X, therefore I will add this really cool additional feature."

"If I make my code solve for this unneeded scenario it will be a good opportunity to use this new technology I've been interested in trying out."

In reality, the simplest solution that meets the requirements is best. This also gives you the most flexibility in taking your solution in a new direction if and when new requirements or problems come up.

Brad C
Yeah, the best way to compare implementations is by their line count. People won't reuse your code unless it's less than one page long.
AareP
Occam's Razor :)
Sad0w1nL1ght
++ I don't think this is controversial in one sense - everybody agrees with it. But in another sense it is controversial - because few people follow it.
Mike Dunlavey
A: 

Apparently mine is that Haskell has variables. This is both "trivial" (according to at least eight SO users) (though nobody can seem to agree on which trivial answer is correct), and a bad question even to ask (according to at least five downvoters and four who voted to close it). Oh, and I (and computing scientists and mathematicians) am wrong, though nobody can provide me a detailed explanation of why.

Curt Sampson
Even though I respect math, I'd have to disagree. Those aren't variables. Those are constants. Variables should be... well... variable. I believed Haskell had no variables because "x = x + 1" isn't possible. You use functions; you don't *really* change the value of x. HOWEVER, that post mentioned IORef, so maybe Haskell *does* have variables...
luiscubal
Well, go put an answer up on the question to which I linked showing why, in the definition "double x = x * 2", "x" is a constant.
Curt Sampson
"double x = x * 2" makes no sense in no language. Not even C.
luiscubal
It's an equation, saying that the left and right sides are equivalent (i.e., "double 3" means the same thing as "3 * 2"), and not only does it make perfect sense in mathematics, but it's perfectly valid Haskell code.
Curt Sampson
So Haskell is single-assignment within the bounds of a particular scope, and you can only "change" the value of x by reintroducing a new inner scope, which is what "double x = x * 2" really does, right? It doesn't change the value of x at all; it just overloads the identifier x with a new (temporary) value at a particular scope.
Warren P
+1  A: 

You'll never use enough languages, simply because every language is the best fit for only a tiny class of problems, and it's far too difficult to mix languages.

Pet examples: Java should be used only when the spec is very well thought out (because of lots of interdependencies meaning refactoring hell) and when working with concrete concepts. Perl should only be used for text processing. C should only be used when speed trumps everything, including flexibility and security. Key-value pairs should be used for one-dimensional data, CSV for two-dimensional data, XML for hierarchical data, and a DB for anything more complex.

l0b0
+1  A: 

I believe that "Let's Rewrite The Past And Try To Fix That Bug Pretending Nothing Ever Worked" is a valuable debugging mantra in desperate situations:

http://stackoverflow.com/questions/978904/do-you-use-the-orwellian-past-rewriting-debugging-philosophy-closed

JCCyC
+2  A: 

Software reuse is the most important way to optimize software development

Somehow, software reuse seemed to be in vogue for some time, but lost its charm when many companies found out that just writing PowerPoint presentations with reuse slogans doesn't actually help. They reasoned that software reuse is just not "good enough" and can't live up to their dreams. So it seems it is not in vogue any more -- it was replaced by plenty of project management newcomers (Agile, for example).

The fact is that any really good developer performs some kind of software reuse by himself. I would say any developer not doing software reuse is a bad developer!

I have experienced myself how much performance and stability software reuse can bring to development. But of course, a set of PowerPoints and half-hearted confessions from management does not suffice to realize its full potential in a company.

I have linked a very old article of mine about software reuse (see the title). It was originally written in German and translated afterwards -- so please excuse the writing where it is not that good.

Juergen
One of the problems with software-reuse is that it requires advanced reading and adapting skills, which aren't easy. Also, using libraries as dependencies can be a nightmare if those libraries aren't stable.
luiscubal
Yes, advanced reading skills are difficult for most programmers ;-) Your second point is a good one. Reuse does not come without a price tag, of course - unlike what some companies think, that a mere directive is enough to make reuse happen. That is also why many were disappointed by reuse. Something for nothing does not work in IT either!
Juergen
+3  A: 

"Using stored procs is easy to maintain and means less deployment" vs. "Using an ORM is the OO way, thus it is good"

I've heard this a lot in many of my projects; whenever these statements appear, it is always tough to get the argument settled.

Vadi
I haven't noticed much OO about most uses of ORM - just three layers of (not much) abstraction to maintain.
cartoonfox
+24  A: 

Emacs is better

Milan Ramaiya
Actually, either vi or vim is better.
David Thornley
Only for those who have stuff in their .emacs file (which they understand).
Thorbjørn Ravn Andersen
as a vim user, I have to +1 this one.
just somebody
+3  A: 

I don't care how powerful a programming language is if its syntax is not intuitive, or if I can't set it aside for some period of time and come back to it without too much effort spent refreshing on the details. I would rather the language itself be intuitive than have it be cryptic but powerful for creating DSLs. A computer language is a user interface for ME, and I want it designed for intuitive ease of use like any other user interface.

Anon
A: 

Develop terminators to hunt John. John Skeet.

The Alpha Nerd
It's Jon. You've failed to even identify the correct target.
IDisposable
+1 for the funny.
ceretullis
+3  A: 

Understanding "what" to do is at least as important as knowing "how" to do it, and almost always it's much more important than knowing the 'best' way to solve a problem. Domain-specific knowledge is often crucial to write good software.

Oops - I read the question earlier, and then all the responses, and my answer seemed to fit. I just read the initial question again, and I'm not sure this really answers it. Delete it if not, and sorry for the noise.
+2  A: 

It is OK to use short variable names

But not for indices in nested loops.
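
For instance (a made-up sketch), named indices remove the i/j guessing game:

class Grid {
    static void clear(int[][] grid) {
        // row and col cost a few extra characters and make each bound obvious.
        for (int row = 0; row < grid.length; row++) {
            for (int col = 0; col < grid[row].length; col++) {
                grid[row][col] = 0;
            }
        }
    }
}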

quant_dev
Not for indices in nested loops? Why? It's easy to distinguish them when the definition is near the usage. Well, I can only think of i and j as a bad choice, because they look so similar.
frunsi
Because it is easy to forget which variable belongs to which loop.
quant_dev
+3  A: 

Defects and Enhancement Requests are the Same

Unless you are developing software on a fixed-price contract, there should be no difference when prioritizing your backlog between "bugs" and "enhancements" and "new feature" requests. OK - maybe that's not controversial, but I have worked on enterprise IT projects where the edict was that "all open bugs must be fixed in the next release", even if that left no developer time for the most desirable new features. So a problem which was encountered by 1% of the users, 1% of the time, took precedence over a new feature that might be immediately useful to 90% of the users. I like to take my entire project backlog, put estimates around each item and take it to the user community for prioritization - with items not classified as "defect", "enhancement", etc.

Ed Schembor
+3  A: 

Software development is an art.

Dave
+3  A: 

in almost all cases, comments are evil: http://gooddeveloper.wordpress.com/

Ray Tayek
You should be commenting on the why, not the what or how.
reinierpost
+10  A: 

Recursion is fun.

Yes, I know it can be an ineffectual use of stack space, and all that jazz. But sometimes a recursive algorithm is just so nice and clean compared to its iterative counterpart. I always get a bit gleeful when I can sneak a recursive function in somewhere.
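
For example, walking a directory tree recursively reads almost like the definition of a tree (a small sketch):

import java.io.File;

class Walker {
    // The recursion mirrors the shape of the data itself.
    static void printTree(File node, String indent) {
        System.out.println(indent + node.getName());
        File[] children = node.listFiles();
        if (children == null) return; // not a directory (or not readable)
        for (File child : children) {
            printTree(child, indent + "  ");
        }
    }
}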

Stu Thompson
"Ineffectual use of stack space" -- only in crappy languages. See http://en.wikipedia.org/wiki/Tail_recursion
Juliet
That's what's great about being a programmer - cheap thrills :-) At least Electrical Engineers get to sniff rosin smoke.
Mike Dunlavey
@Juliet: Only crap languages? So all languages that don't have tail recursion are crap? Spare me.
Stu Thompson
+6  A: 

Exceptions should only be used in truly exceptional cases

It seems like the use of exceptions has run rampant on the projects I've worked on recently.

Here's an example:

We have filters that intercept web requests. The filter calls a screener, and the screener's job is to check to see if the request has certain input parameters and validate the parameters. You set the fields to check for, and the abstract class makes sure the parameters are not blank, then calls a screen() method implemented by your particular class to do more extended validation:

public boolean processScreener(HttpServletRequest req, HttpServletResponse resp, FilterConfig filterConfig) throws Exception {
    // Make sure the required fields are present before running extended validation.
    if (!checkFieldExistence(req)) {
        return false;
    }
    return screen(req, resp, filterConfig);
}

That checkFieldExistence(req) method never returns false. It returns true if none of the fields are missing, and throws an exception if a field is missing.

I know that this is bad design, but part of the problem is that some architects here believe that you need to throw an exception every time you hit something unexpected.

Also, I am aware that the signature of checkFieldExistence(req) does declare that it throws an Exception; it's just that almost all of our methods do - so it didn't occur to me that the method might throw an exception instead of returning false. Only when I dug through the code did I notice it.

LGriffel
And don't forget the overhead involved when throwing an exception as well. Throw/catch might be fairly harmless performance-wise for a single operation, but start looping over it and... ho-boy. I speak from experience.
Tullo
+67  A: 

Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.

Dan Diplo
Is this controversial? I guess no! -1!
Philippe Grondier
the latter part is highly controversial
Egg
I use the excuse: " I am charging my time to Professional Development" on the grounds that I am learning something useful as a developer. Boss agrees.
amischiefr
except I'm not getting "paid" to do anything right now.
hasen j
I'm not getting paid to do anything now. Just like hasen j.
Behrooz
I agree, but in my defense I've hit a wall and need a breather before tackling the problem again.
baultista
My friend likes to use the excuse: "I'm Compiling"
Dave
+1  A: 

Remove classes. A number of classes (and methods of classes) in the .NET Framework handle exceptions implicitly. It's difficult to work with a dumb person.

adatapost
+1  A: 

Don't use keywords for basic types if the language has the actual type exposed. In C#, this would refer to bool (Boolean), int (Int32), float (Single), long (Int64). 'int', 'bool', etc are not actual parts of the language, but rather just 'shortcuts' or 'aliases' for the actual type. Don't use something that doesn't exist! And in my opinion, Int16, Int32, Int64, Boolean, etc. make a heck of a lot more sense than 'short', 'long', 'int'.

David Anderson
`int`, `bool` etc most certainly *are* part of the C# language. They're right there in the specification! They may not be part of the underlying platform, but they're definitely part of the C# language.
Jon Skeet
I think platform is what I meant. *looks around* Thanks for the clarification!
David Anderson
+15  A: 

I generally hold pretty controversial, strong and loud opinions, so here's just a couple of them:

"Because we're a Microsoft outfit/partner/specialist" is never a valid argument.

The company I'm working in now identifies itself, first and foremost, as a Microsoft specialist. So the aforementioned argument gets thrown around quite a bit, and I've yet to see a context where it's valid.

I can't see why it's a reason to promote Microsoft's technology and products in every applicable corner, overriding customer and employee satisfaction, and general pragmatics.

This just a cornerstone of my deep hatred towards politics in software business.

MOSS (Microsoft Office Sharepoint Server) is a piece of shit.

Kinda echoes the first opinion, but I honestly think MOSS, as it is, should be shot out of the market. It costs gazillions to license and set up, pukes on web standards and makes developers generally pretty unhappy. I have yet to see a MOSS project that has an overall positive outcome.

Yet time after time, a customer approaches us and asks for a MOSS solution.

theiterator
MOSS = Microsoft Office SharePoint Server ?
tuinstoel
As someone who occasionally has to program for Sharepoint, I will state that you second opinion is not controversial at all.
chris
MOSS is crap!!!!!
Faruz
I agree 250% with everything. Keep speaking your mind. Lots of people see things this way!
JL
+3  A: 

Linq2Sql is not that bad

I've come across a lot of posts trashing Linq2Sql. I know it's not perfect, but what is?

Personally, I think it has its drawbacks, but overall it can be great for prototyping, or for developing small to medium apps. When I consider how much time it has saved me from writing boring DAL code, I can't complain, especially considering the alternatives we had not so long ago.

Dkong
but it's unreliable
Jader Dias
+12  A: 

Greater-than operators (>, >=) should be deprecated

I tried coding with a preference for less-than over greater-than for a while and it stuck! I don't want to go back, and indeed I feel that everyone should do it my way in this case.

Consider common mathematical 'range' notation: 0 <= i < 10

That's easy to approximate in code now and you get used to seeing the idiom where the variable is repeated in the middle joined by &&:

if (0 <= i && i < 10)
    return true;
else
    return false;

Once you get used to that pattern, you'll never look at silliness like

if ( ! (i < 0 || i >= 10))
    return true;

the same way again.

Long sequences of relations become a bit easier to work with because the operands tend towards nondecreasing order.

Furthermore, a preference for operator< is enshrined in the C++ standards. In some cases equivalence is defined in terms of it! (as !(a<b || b<a))

Marsh Ray
Ick, no. If I want code to throw an exception when a string is over a certain length (for example) I'd *far* rather use `if (text.Length > 30) { throw new ... }` than `if (!(text.Length <= 30)) { throw new ... }`.
Jon Skeet
`if (30 < text.Length) throw ....` is another option. Actually, I prefer `(!(text.Length <= 30))` because it nicely matches `assert(text.Length <= 30)`. Think about when multiple conditions get compounded. Keeping the error checking logic in that 'positive assertion' sense helps reduce logic bugs. I know it looks a little strange the first time. It's controversial and I don't push it on others. But try it with an open mind and you might grow to like it better. Or you might not. :-)
Marsh Ray
to be pedantic, `if(text.Length > 30)` is equivalent to `if(30 <= text.Length)` because the comparison goes from *exclusive* to *inclusive*
warren
s/is equivalent/is not equivalent/ is I think what you meant. In any case, I never said those two were or were not equivalent.
Marsh Ray
Why not just return your if-condition?
GMan
I would if that was really what was needed. Perhaps my example was a bit too trivial. Imagine something more interesting and useful in the if/else bodies.
Marsh Ray
It's language dependent, but in C++ `3 > getAirplane()` throws a compiler error, but `getAirplane() < 3` might not depending on which constructors are defined for your Airplane class.
thebretness
+2  A: 

Functional programming is NOT more intuitive or easier to learn than imperative programming.

There are many good things about functional programming, but I often hear functional programmers say it's easier to understand functional programming than imperative programming for people with no programming experience. From what I've seen it's the opposite: people find trivial problems hard to solve because they don't get how to manage and reuse their temporary results when they end up in a world without state.

Laserallan
Controversial? Functional programming sucks. That's controversial. However, "functional programming is hard". That's a tautology.
Warren P
Part of why it's hard is because we're damaged (pre-wired) for our iterative, procedural programming. It may be that functional programming is actually easier to absorb for a neophyte than procedural programming is. Are there any studies out there on this?
Warren P
+11  A: 

Microsoft is not as bad as many say they are.

Aftershock
Microsoft is not as bad?.. OK, going to shoot my feet...
unkiwii
+25  A: 

1-based arrays should always be used instead of 0-based arrays. 0-based arrays are unnatural, unnecessary, and error prone.

When I count apples or employees or widgets I start at one, not zero. I teach my kids the same thing. There is no such thing as a 0th apple or 0th employee or 0th widget. Using 1 as the base for an array is much more intuitive and less error-prone. Forget about plus-one-minus-one-hell (as we used to call it). 0-based arrays are an unnatural construct invented by computer science - they do not reflect reality, and computer programs should reflect reality as much as possible.

Jack Straw
Actually, 0-based arrays are based in the reality of pointer addressing, which stems from how memory is laid out.
Paul Nathan
Can you tell me which is the first minute of the hour, please? I always forget...
Jon Skeet
@Paul: Agreed! And it's completely abstract and has nothing to do with counting. @Jon: The first minute is one; when we get to one we have counted off the first minute. Just like your first birthday celebrates your first year of life. There is no 0th anything.
Jack Straw
+1 @Jack, this is the perfect sort of controversial programming opinion. As much as my inner programmer hates to admit it, you've actually got a point. It even enticed Jon Skeet to enter the controversy.
Ash
I completely disagree with this opinion, so I'm upvoting it.
Theran
It's offset vs. index, fencepost vs. fence-segment. Posts work well for open-end ranges and segments work well for closed-end ranges.
Samuel Danielson
Jon skeet sleeps with a pillow under his gun
Egg
0-based arrays are (at least for me) very natural, and indeed, natural numbers begin with 0. +1 to this, is veeeeery controversial.
unkiwii
Who says you have to use element 0 if it's not appropriate for the domain? Are you *that* hard up for memory that you can't waste one element?
Jeanne Pindar
@Jeanne: If you're not using the 0th element, effectively that's one-based :).
Jack Straw
I interpreted your post as saying compilers should default to using one-based arrays.
Jeanne Pindar
+1 I often have trouble in real life situations because I'm so used to start counting at 0 o.o.
Helper Method
+2  A: 

Developing on .NET is not programming. It's just stitching together other people's code.

Having come from a coding background where you were required to know the hardware, and where this is still a vital requirement in my industry, I view high level languages as simply assembling someone else's work. Nothing essentially wrong with this, but is it 'programming'?

MS has made a mint from doing the hard work and presenting 'developers' with symbolic instruction syntax. I seem to now know more and more developers who appear constrained by the existence or non-existence of a class to perform a job.

My opinion comes from the notion that to be a programmer you should be able to program at the lowest level your platform allows. So if you're programming .NET then you need to be able to stick your head under the hood and work out the solution, rather than rely on someone else creating a class for you. That's simply lazy and does not qualify as 'development' in my book.

Gerard
That's right baby, REAL programmers use 1's and 0's!!!
Cameron MacFarland
Does a down-vote mean this opinion is not controversial?
Gerard
Stated, but not reasoned. -1
Jay
Added some reason to the opinion.
Gerard
You may understand assembly but do you get how the hardware works? How electrons flow into different gates, how circuits are manufactured? It's all about choosing what you want to accomplish and the level of abstraction you need to achieve it.
Eric
This also applies to java
hiena
I wouldn't say that stating that some developers who program using .NET (maybe even a lot, *maybe* even the majority) are just stitching is necessarily controversial. Heck, I'd probably agree with you. Extending that to *everyone* though, as you have done! Now, that's controversial! There are a lot of very smart engineers who program using .NET. Also, I'd disagree that you need to be able to program to the lowest level of the platform. You need to know enough to understand the factors that affect your app.
Phil
This is just ridiculous. Let me counter it: low-level programming is not programming. It is just stitching CPU instructions together.
reinierpost
Case in point - Microsoft's top developers prefer old-school coding methods - http://shar.es/aE0Qj
Gerard
+2  A: 

I'd rather be truly skilled/experienced in an older technology that allows me to solve real world problems effectively, as opposed to new "fashionable" technologies that are still going through the adolescent stage.

Ash
+1  A: 

When many new technologies appear on the scene I only learn enough about them to decide if I need them right now.

If not, I put them aside until the rough edges are knocked off by "early adopters" and then check back again every few months / years.

Ash
In what sense is this a controversial opinion?
Ikke
@Ikke, Why? Surely this makes me an out of touch dinosaur, scared of change and clinging to out-dated and obsolete technologies? (I've lost count of how many projects I've worked on use new technologies because "they're cool" and will solve all our problems.)
Ash
+3  A: 

I'm always right.

Or call it design by discussion. But if I propose something, you'd had better be able to demonstrate why I'm wrong, and propose an alternative that you can defend.

Of course, this only works if I'm reasonable. Luckily for you, I am. :)

chris
It's a good attitude to have. I've learned to trust my instincts rather than defer to someone else's experience. We can do it your way, but first you have to convince me that it's a good idea.
Dan Dyer
Why does this remind me of my boss? ;)
unkiwii
+3  A: 

Usability problems are never the user's fault.

I cannot count how often a problem turned up when some user did something that everybody in the team considered "just a stupid thing to do". Phrases like "why would somebody do that?" or "why doesn't he just do XYZ" usually come up.

Even though many are weary of hearing me say this: if a real-life user tried to do something that either did not work, caused something to go wrong or resulted in unexpected behaviour, then it can be anybody's fault, but not the user's!

Please note that I do not mean people who intentionally misuse the software. I am referring to the presumable target group of the software.

galaktor
+1  A: 

Agile sucks.

tsilb
A: 

That WordPress IS a CMS (technically, therefore indeed).

http://stackoverflow.com/questions/105648/wordpress-is-it-a-cms

madcolor
+1  A: 

Jon Bentley's 'Programming Pearls' is no longer a useful tome.

http://tinyurl.com/nom56r

Jim G.
Interesting opinion. I guess on the details I agree, but in terms of overall attitude, I think we can learn from it. I think we programmers tend to run in channels, and Jon has an attitude of inventiveness and questioning accepted "wisdom". (Not to mention **fun**.)
Mike Dunlavey
Being extremely familiar with C syntax carries over to many languages. And anyone who thinks this is aimed at graduate students is off their rocker - I only know *one* person who read it in grad school. Almost ***everyone*** I know who has read it and/or owns a copy did so either before graduating college, or because they jumped to development from another field, or because they just wanted to.
warren
Why is it not useful?
kirk.burleson
+3  A: 

Delphi is fun

Yes, I know it's outdated, but Delphi was and is a very fun tool to develop with.

Bab Yogoo
We still code in Delphi for our business. Lots of Delphi pros out there :)
Tom
Delphi isn't just fun, it's still the best way to build Windows applications. If you want to say something controversial, the outdated bit will get you the votes. Outdated? Yeah. Unicode. Cross platform coming soon. 64 bit coming soon. More developers at Embarcadero building and improving than at any other time in Delphi's history. Yeah. Outdated. BLEAH!
Warren P
+1  A: 

I think Java should have supported system-specific features via thin native library wrappers.

Phrased another way, I think Sun's determination to require that Java only support portable features was a big mistake from almost everyone's perspective.

A zillion years later, SWT came along and solved the basic problem of writing a portable native-widget UI, but by then Microsoft was forced to fork Java into C# and lots of C++ had been written that could otherwise have been done in civilized Java. Now the world runs on a blend of C#, VB, Java, C++, Ruby, Python and Perl. All the Java programs still look and act weird, except for the SWT ones.

If Java had come out with thin wrappers around native libraries, people could have written the SWT-equivalent entirely in Java, and we could have, as things evolved, made portable apparently-native apps in Java. I'm totally for portable applications, but it would have been better if that portability were achieved in an open market of middleware UI (and other feature) libraries, and not through simply reducing the user's menu to junk or faking the UI with Swing.

I suppose Sun thought that ISVs would suffer with Java's limitations and then all the world's new PC apps would magically run on Suns. Nice try. They ended up not getting the apps AND not having the language take off until we could use it for logic-only server back-end code.

If things had been done differently maybe the local application wouldn't be, well, dead.

DigitalRoss
+3  A: 

Lower level languages are inappropriate for most problems.

Imagist
+3  A: 

Programmers should never touch Word (or PowerPoint)

Unless you are developing a word or a document processing tool, you should not touch a Word processor that emits only binary blobs, and for that matter:

Generated XML files are binary blobs

Programmers should write plain text documents. The documents a programmer writes need to convey intention only, not formatting. They must be producible with the programming tool-chain: editor, version-control, search utilities, build system and the like. When you already have and know how to use that tool-chain, every other document production tool is a horrible waste of time and effort.

When there is a need to produce a document for non-programmers, a lightweight markup language should be used such as reStructuredText (if you are writing a plain text file, you are probably writing your own lightweight markup anyway), and generate HTML, PDF, S5, etc. from it.

Chen Levy
A: 

You must know C to be able to call yourself a programmer!

navigator
Completely disagree. C isn't the be-all-and-end-all of programming. There were many languages before it, and there are many languages after it that will suit different situations better than C will. Also, programming is about the analytical problem solving, and not just writing code in a particular language.
Jasarien
Like Jasarien, I completely disagree. C is just another language, not THE language.
unkiwii
Actually, C is pretty much THE language for some tasks, although certainly not for all. There is a lot of documentation and tutorials online - especially on low-level stuff - which are way harder to understand without C knowledge.
luiscubal
More people use C than any other language and it's used on more projects than any other language.
Rob
Agreed. I wonder, would you say you can call yourself a programmer if you know D and not C? (D, like C, doesn't hide anything from you.)
acidzombie24
Depends on what you want to make. High level Windows GUI applications should not be made in C, low level ICU programming, C is required.
Petah
+4  A: 

Garbage collection is overrated

Many people consider the introduction of garbage collection in Java one of the biggest improvements compared to C++. I consider the introduction to be very minor at best; well written C++ code does all the memory management at the proper places (with techniques like RAII), so there is no need for a garbage collector.
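
To illustrate, here is a minimal C++ sketch of RAII (a hypothetical `File` wrapper, not a complete implementation): the resource is released by the destructor at a well-defined point, even when an exception is thrown - no collector required.

#include <cstdio>
#include <stdexcept>

// RAII: acquire the resource in the constructor, release it in the destructor.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f_); }
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
    File(const File&);            // non-copyable (pre-C++11 idiom)
    File& operator=(const File&);
};

void readConfig() {
    File f("config.txt");
    // ... use f.get() ...
}   // f's destructor runs here, even if the code above throws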

Anders Rune Jensen
The advocates of garbage collection have an unhealthy obsession with one particular resource when RAII covers all of them.
Integer Poet
+1  A: 

C must die.

Voluntarily programming in C when another language (say, D) is available should be punishable for neglect.

reinierpost
Certainly is controversial.
Ikke
Disagree. If C is the language you are more comfortable in, and is suitable for the task, then C is the language that would make most sense for you to develop in. If you're already proficient in C, then why waste the time learning D (as you put it) if you could complete the task to an acceptable standard using C?
Jasarien
The answer is real easy: you and other people will forever have to clean up the things D helps you prevent in your C code, unless you belong to the top 0.5% of C programmers who never make such mistakes in the first place (it may be 0.05%, I'm not sure). There are certainly tools for C which help prevent such mistakes as well. The trouble is you can't count on other people having used them.
reinierpost
hahaha, agree. Even tho i love C(++)
acidzombie24
+1  A: 

QA can be done well, over the long haul, without exploring all forms of testing

Lots of places seem to have an "approach", how "we do it". This seems to implicitly exclude other approaches.

This is a serious problem over the long term, because the primary function of QA is to file bugs -and- get them fixed.

You cannot do this well if you are not finding as many bugs as possible. When you exclude methodologies, for example, by being too black-box dependent, you start to ignore entire classes of discoverable coding errors. That means, by implication, you are making entire classes of coding errors unfixable, except when someone else stumbles on it.

The underlying problem often seems to be management + staff. Managers with this problem seem to have narrow thinking about the computer science and/or the value proposition of their team. They tend to create teams that reflect their approach, and a whitelist of testing methods.

I am not saying you can or should do everything all the time. Let's face it, some test methods are simply going to be a waste of time for a given product. And some methodologies are more useful at certain levels of product maturity. But what I think is missing is the ability of testing organizations to challenge themselves to learn new things, and apply that to their overall performance.

Here's a hypothetical conversation that would sum it up:

Me: You tested that startup script for 10 years, and you managed to learn NOTHING about shell scripts and how they work?!

Tester: Yes.

Me: Permissions?

Tester: The installer does that

Me: Platform, release-specific dependencies?

Tester: We file bugs for that

Me: Error handling?

Tester: When errors happen, customer support sends us some info.

Me: Okay...(starts thinking about writing post in stackoverflow...)

benc
+3  A: 

Detailed designs are a waste of time, and if an engineer needs them in order to do a decent job, then it's not worth employing them!

OK, so a couple of ideas are thrown together here:

1) the old idea of waterfall development where you supposedly did all your design up front, resulting in some glorified extremely detailed class diagrams, sequence diagrams etc. etc., was a complete waste of time. As I once said to a colleague, I'll be done with design once the code is finished. Which I think is what agile is partly a recognition of - that the code is the design, and that any decent developer is continually refactoring. This of course, makes the idea that your class diagrams are out of date laughable - they always will be.

2) management often thinks that you can usefully take a poor engineer and use them as a 'code monkey' - in other words they're not particularly talented, but heck - can't you use them to write some code. Well.. no! If you have to spend so much time writing detailed specs that you're basically specifying the code, then it will be quicker to write it yourself. You're not saving any time. If a developer isn't smart enough to use their own imagination and judgement they're not worth employing. (Note, I'm not talking about junior engineers who are able to learn. Plenty of 'senior engineers' fall into this category.)

Phil
++ I liken spec-writing to driving a car at night in a fog. You can only see so far ahead, and turning up the brightness of the lights does not help. The supply of information is simply limited. It's worth getting as much as you can, but what you really have to be able to do is adapt when more information becomes available as you proceed.
Mike Dunlavey
... I was once handed a design like that. The design doc was about 2 inches thick in paper and projected to take 18 man-months to develop. I talked them into writing a code-generator. The *final source* was 1/2 inch thick, was done in 4 man-months, and had blazing performance.
Mike Dunlavey
... That's why I believe in prototyping, rapid or not. When I'm developing some new product, I like to be able to do at least 3 throw-away versions, because that's how I can see deeper into the fog. Good post!
Mike Dunlavey
Thanks Mike, and agree with what you're saying - it's impractical to expect to be able to get all the design right up front - you've got to 'try something', then rework it as you discover more about the requirements and both how best to implement it, and often also how the technologies you're using are best used.
Phil
+2  A: 

When Creating Unit tests for a Data Access Layer, data should be retrieved directly from the DB, not from mock objects.

Consider the following:

IList<Customer> GetCustomers()
{
    List<Customer> res = new List<Customer>();

    DbCommand cmd = // initialize command
    IDataReader r = cmd.ExecuteReader();

    while (r.Read())
    {
        Customer c = ReadFieldsIntoCustomer(r);
        res.Add(c);
    }

    return res;
}

In a unit test for GetCustomers, should the call to cmd.ExecuteReader() actually access the DB or should its behavior be mocked?

I reckon that you shouldn't mock the actual call to the DB if the following holds true:

  1. A test server and the schema exist.
  2. The schema is stable (meaning you are not expecting major changes to it).
  3. The DAL has no smart logic: queries are constructed trivially (config/stored procs) and the deserialization logic is simple.

From my experience the great benefit of this approach is that you get to interact with the DB early, experiencing the 'feel', not just the 'look'. It saves you lots of headaches afterwards and is the best way to familiarize oneself with the schema.

Many might argue that as soon as the execution flow crosses process boundaries, it ceases to be a unit test. I agree it has its drawbacks, especially when the DB is unavailable and then you cannot run UT.

However, I believe that this should be a valid thing to do in many cases.

Vitaliy
+6  A: 

Notepad is a perfectly fine text editor. (And sometimes WordPad for non-Windows line breaks.)

  • Edit config files
  • View log files
  • Development

I know people who actually believe this! They will however use an IDE for development, but continue to use Notepad for everything else!

TJ
That's fair enough, notepad is good at what it does, and what it does is plain text editing. However, when you're editing config files, you want something that can handle indents a little better, maybe some syntax highlighting. With log files, a regex search is invaluable.
Jasarien
yep and thats why I use EditPlus www.editplus.com great editor!!
Dal
That's why i only use textpad! www.textpad.com awesome for old skoolers!
crosenblum
+5  A: 

All project managers should be required to have coding tasks

In teams that I have worked on where the project manager was actually a programmer who understood the technical issues of the code well enough to accomplish coding tasks, the decisions that were made avoided the communication disconnect that often happens in teams where the project manager is not involved in the code.

Edward Tanguay
you: "boss, the code you just checked in is sub-par. please get it up to the standard, or I'll have to back it out." him: "about that raise you wanted..."
just somebody
+1, otherwise you end up doing their job for them.
JL
+2  A: 

Development projects are bound to fail unless the team of programmers is given, as a whole, complete empowerment to make all decisions related to the technology being used.

Kiffin
Been there, done that. Have the t-shirt.
Jasarien
+20  A: 
Mike Dunlavey
I can add one more type of reaction: "This is a great technique, but why not use one of the tools that automates it?"
Crashworks
@Crash: Happy Halloween! You're right, that is another reaction I get, and of course the answer is: "You could if they exist". I don't want much: 1) take *and retain* stackshots, 2) rank statements (not functions) by inclusive time (i.e. % of samples containing them), 3) let you pick representative stackshots and study them.
Mike Dunlavey
... I built one ages ago, to run under DOS. It didn't do (3) but it had a "butterfly view" between statements (not functions). The real value was that it would focus my attention on costly call sites, and then I would take manual samples until one of those showed up under the debugger, and then I could really look to see what was going on, because just knowing the location was not enough.
Mike Dunlavey
... as a recent example, this C# app takes it time about starting up. Half a dozen stackshots show about half the time is spent looking up strings in a resource and converting them to string objects, so they can be internationalized. What the stack sample by itself doesn't show is how often the string is something you would never want to internationalize, which in this case is most of the time. Just finding a slow function, or looking at numbers after a run, would never reveal that.
Mike Dunlavey
@Crash: Actually there's a tool called RotateRight/Zoom that is close to doing it how I think is right. It takes and retains stackshots. You can manually control when it samples. It has a butterfly view that can work at the statement level. It gives you total time as a percent, which is the fraction of samples containing the line.
Mike Dunlavey
People with a low boredom threshold might press `Ctrl+C` after one second, which may not be a representative sample of the program as a whole.
Andrew Grimm
@Andrew-Grimm: The problem, when removed, will save some %. Pick a %. 20%, 50%, 90%, 10%? Whatever it is, that is (at least) the probability that each `^C` will see it. One way is to take 20 samples - on average 20 * x%/100 of them will show it. Another way is to just take samples until something appears more than once. It's a big one, guaranteed.
Mike Dunlavey
... **one** sample is not enough **unless** you know there is a big (high percentage) problem. In the limit, if you know there is an infinite loop, it only takes one sample to see it. In general, you don't know, so take multiple samples.
Mike Dunlavey
If all you're interested in is "is there enough space in this room" then you definitely need to know how big the elephants are. Measuring and capturing go well together - you don't need to commit yourself to only using one technique.
Jon Skeet
@Jon: That's just a metaphor I'm using to try to get the idea across that if something's taking too long, stackshots can find the problem with precision of location, but not necessarily precision of time measurement. I've seen one profiler that does this (Zoom), but I haven't seen them all. Mainly I'm zealot-ing for an orthogonal way of thinking about performance tuning - to expect big speedup factors, which are typically mid-stack lines of code doing stuff you didn't realize.
Mike Dunlavey
@Jon ... and there's a central phenomenon that I never hear discussed on SO (magnification), and it's the route to big speedups. If there's a series of problems accounting for 50%, 25%, 12.5%, 6.25% of time, each time you fix the biggest one, the rest get twice as big (thus easier to find). If any one of these along the way is not something your profiler can pinpoint, you're stuck, not getting the full speedup.
Mike Dunlavey
@Mike: Absolutely. Most profilers I've used *have* shown figures as "percentage of time spent in method" mind you - with raw figures as well, but they tend not to be as useful. But yes, it's certainly possible to find big speed-ups. I recently found some in Noda Time :)
Jon Skeet
Mike Dunlavey
+4  A: 

There is no difference between software developer, coder, programmer, architect ...

I've been in the industry for more than 10 years and still find it absolutely idiotic to try to distinguish between these "roles". You write code? You're a developer. You are spending all day drawing fancy UML diagrams? You're a ... well ... I have no idea what you are, you're probably just trying to impress somebody. (Yes, I know UML.)

+1  A: 
  • Soon we are going to program in a world without databases.

  • AOP and dependency injection are the GOTO of the 21st century.

  • Building software is a social activity, not a technical one.

  • Joel has a blog.

JuanZe
A: 

To quote the late E. W. Dijkstra:

Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.

Computer Science is no more about computers than astronomy is about telescopes.

I don't understand how one can claim to be a proper programmer without being able to solve pretty simple maths problems such as this one. A CRUD monkey - perhaps, but not a programmer.

Andrew from NZSG
A: 

A real programmer loves open-source like a soulmate and loves Microsoft as a dirty but satisfying prostitute

Andrew from NZSG
Haha, very funny. Good one :)
JL
A real programmer? C'mon
Blub
+3  A: 

"Programmers must do programming on the side, or they're never as good as those who do."

As kpollock said, imagine saying that for doctors, or soldiers...

The main thing isn't so much as whether they code, but whether they think about it. Computing Science is an intellectual exercise, you don't necessarily need to code to think about problems that makes you better as a programmer.

It's not like Einstein gets to play with particles and waves when he's off his research.

Calyth
That's right. I often think about programming problems while in bed, lying on my **side**.
Mike Dunlavey
@Mike I've kept up thinking about assembly language on my side in bed. But thanks for pointing out the typo ;)
Calyth
+3  A: 

Programmers should avoid method hiding through inheritance at all costs.

In my experience, virtually every place I have ever seen inherited method hiding used, it has caused problems. Method hiding results in objects behaving differently when accessed through a base type reference vs. a derived type reference - this is generally a Bad Thing. While many programmers are not formally aware of it, most intuitively expect that objects will adhere to the Liskov Substitution Principle. When objects violate this expectation, many of the assumptions inherent to object-oriented systems can begin to fray. The most egregious cases I've seen are when the hidden method alters the state of the object instance. In these cases, the behavior of the object can change in subtle ways that are difficult to debug and diagnose.

Ok, so there may be some infrequent cases where method hiding is actually useful and beneficial - like emulating return type covariance of methods in languages that don't support it. But the vast majority of time, when developers use method hiding it is either out of ignorance (or accident) or as a way to hack around some problem that probably deserves better design treatment. In general, the beneficial cases I've seen of method hiding (not to say there aren't others) is when a side-effect free method that returns some information is hidden by one that computes something more applicable to the calling context.

Languages like C# have improved things a bit by requiring the new keyword on methods that hide a base class method - at least helping avoid involuntary use of method hiding. But I find that many people still confuse the meaning of new with that of override - particularly since in simple scenarios their behavior can appear identical. It would be nice if tools like FxCop actually had built-in rules for identifying potentially bad usage of method hiding.

By the way, method hiding through inheritance should not be confused with other kinds of hiding - such as through nesting - which I believe is a valid and useful construct with fewer potential problems.
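
To make the failure mode concrete, here is a minimal C++ sketch (the same effect as a C# method declared with `new`): the observed behavior depends on the static type of the reference, which is exactly the substitution violation described above.

#include <iostream>

struct Base {
    void describe() { std::cout << "Base\n"; }    // non-virtual
};

struct Derived : Base {
    void describe() { std::cout << "Derived\n"; } // hides Base::describe
};

int main() {
    Derived d;
    Base& b = d;
    d.describe(); // prints "Derived"
    b.describe(); // prints "Base" - same object, different behavior
    return 0;
}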

LBushkin
+7  A: 

Controversial eh? I reckon it's the fact that C++ streams use << and >>. I hate it. They are shift operators. Overloading them in this way is plain bad practice. It makes me want to kill whoever came up with that and thought it was a good idea. GRRR.
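
A small sketch of one way the double meaning bites: `<<` keeps its built-in precedence and left-to-right grouping, so an intended shift quietly becomes two stream insertions unless you parenthesize.

#include <iostream>

int main() {
    int flags = 1;
    std::cout << "flags: " << flags << 2 << "\n";   // prints "flags: 12" - no shift performed
    std::cout << "flags: " << (flags << 2) << "\n"; // prints "flags: 4" - parentheses required
    return 0;
}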

Goz
+3  A: 

Anonymous functions suck.

I'm teaching myself jQuery and, while it's an elegant and immensely useful technology, most people seem to treat it as some kind of competition in maximizing the use of anonymous functions.

Function and procedure naming (along with variable naming) is the greatest expressive ability we have in programming. Passing functions around as data is a great technique, but making them anonymous and therefore non-self-documenting is a mistake. It's a lost chance for expressing the meaning of the code.

Larry Lustig
While I haven't used jQuery, I have to disagree with the general principle. The ability to express (say) a projection or a filter *right where you're using it* rather than having to introduce a separate function is one of the nicest features in C# 2 and 3. (Nicer in 3 than 2, as lambda expressions are neater than anonymous methods.)
Jon Skeet
@Jon: Well, I guess that officially makes my opinion controversial. I still disagree. While it's nice to be able to express functionality that way, for all but the most trivial cases it fundamentally detracts from the readability of the code. If you could name the function in place that would help with the issue of expressing your purpose, but it still wouldn't eliminate the problem of actually reading functions nested in the parameter list of other functions which, in turn, are often nested inside another functions parameter list.
Larry Lustig
You can name inline functions in JavaScript, if you want to. Just include a name between "function" and the arguments: var s = function square(a) { return a * a; };
Mark Bessey
+4  A: 

If it isn't worth testing, it isn't worth building

Chirantan
+3  A: 

Never change what is not broken.

Varma
What if it works, but is unmaintainable, ugly, difficult to understand and likely to break if something else changes?
simon
That is the exact reason why I posted this as "controversial".
Varma
"Refactor Mercilessly". XP Manifesto. But only if you have comprehensive unit tests in place...
Nick Wiggill
+2  A: 

I'd say that my most controversial opinion on programming is that I honestly believe you shouldn't worry so much about throw-away code and rewriting code. Too many times people feel that if you write something down, then changing it means you did something wrong. But the way my brain works is to get something very simple working, and update the code slowly, while ensuring that the code and the test continue to function together. It may end up actually creating classes, methods, additional parameters, etc. that I full well know will go away in a few hours. But I do it because I want to take only small steps toward my goal. In the end, I don't think I spend any more time using this technique than the programmers that stare at the screen trying to figure out the best design up front before writing a line of code.

The benefit I get is that I'm not having to constantly deal with software that no longer works because I happen to break it somehow and am trying to figure out what stopped working and why.

zumalifeguard
+11  A: 

Copy/Paste IS the root of all evil.

OscarRyz
++ If by that you mean "cookie cutter code", I say Amen.
Mike Dunlavey
++ Hey - that's my line...
Galghamon
+3  A: 

If you have ever let anyone from rentacoder.com touch your project, both it and your business are completely devoid of worth.

Azeem.Butt
Not controversial, this is a statement of fact.
mynameiscoffey
+13  A: 

Object Oriented Programming is overused

Sometimes the best answer is the simple answer.

Chisum
For most competent worldly-wise OO devs, classes are only broken out from a root class once it becomes apparent that complexity is becoming hard to manage. Oddly (or not so oddly), it is often at that very point that it becomes apparent just _what_ needs to be broken out. And until you do break out from a root class, you _are_ programming procedurally (at least within the context of that class). Premature proliferation of classes during the development process is something that OO greenhorns do.
Nick Wiggill
+1  A: 

If you haven't read a man page, you're not a real programmer.

Quartz
+3  A: 

I have two:

Design patterns are sometimes a way for bad programmers to write bad code - "when you have a hammer, all the world looks like a nail" mentality. If there is something I hate to hear, it is two developers creating a design by patterns: "We should use command with facade ...".

There is no such thing as "premature optimization". You should profile and optimize your code before you get to the point where it becomes too painful to do so.

Dror Helper
Premature optimization does indeed exist and is very much a problem. With very few exceptions, your goal is to satisfy a function as per business requirements. Make it work, make it right, then make it faster. Optimizing without understanding the whole application profile is like throwing money out of a window. Let me know where you work, because I'll be downstairs with a net to catch some of it. ;-)
joseph.ferris
You're right - but only some of the time... I've seen the "premature optimization" card used way too many times to excuse bad, very-hard-to-improve application flow. If you can write it better the first time, why not do so?
Dror Helper
I think the best rule is to always make things as simple as possible. It is much easier to optimize simple code than to simplify optimized code.
thesmart
+5  A: 

"Comments are Lies"

Comments don't run and are easily neglected. It's better to express the intention with clear, refactored code illustrated by unit tests. (Unit tests written TDD of course...)

We don't write comments because they're verbose and obscure what's really going on in the code. If you feel the need to comment - find out what's not clear in the code and refactor/write clearer tests until there's no need for the comment...

... something I learned from Extreme Programming (assumes of course that you have established team norms for cleaning the code...)

cartoonfox
Code will only explain the "how" something is done and not the "why". It is really important to distinguish between the two. Decisions sometimes have to be made and the reason for that decision needs to live on. I find that it is important to find a middle ground. The "no comments" crowd are just as much cultists as the "comment everything" crowd.
joseph.ferris
You're right about this: "Code will only explain the "how" something is done" If I want to know what it does, I'll find the TDD-written test that's covering it. If there's a mystery about what it does and it's important enough, I'll insert a breakage (e.g. throw new RuntimeException("here it is") ) and run all the acceptance tests to see what scenarios need that code path to run.
cartoonfox
This is why I said comments are evil in my post http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/409825#409825 I am proud my answer is the most serious most downvoted answer :)
acidzombie24
If you want to know why something is running, just inject a bug e.g. throw new RuntimeException("HERE"); into it and run the functional tests. Read off the names of the failing system-level tests - that's why you need that piece of code.
cartoonfox
No, that's just more what. Good comments explain why the function works THE WAY it does, not why it exists, which is ultimately just a what.
Integer Poet
+8  A: 

Programming: It's a fun job.

I seem to see two generalized groups of developers: those who don't love it, but are competent and the money is good, and those who love it to a point that is kinda creepy. It seems to be their life.

I just think it's a well-paying job that is usually interesting and fun. There is all kinds of room to learn something new every minute of every day. I can't think of another job I would prefer. But it is still a job. Compromises will be made and the stuff you produce will not always be as good as it could be.

Given my druthers, I would be on a beach drinking beer or playing with my kids.

ElGringoGrande
+1  A: 

You only need 3 to 5 languages to do everything. C is a definite. Maybe assembly but you should know it and be able to use it. Maybe javascript and/or Java if you code for the web. A shell language like bash and one HLL, like Lisp, which might be useful. Anything else is a distraction.

Rob
A: 

Copy/Pasting is not an antipattern; in fact it helps with not making more bugs

My rule of thumb - type only what cannot be copy/pasted. If creating a similar method, class, or file - copy an existing one and change what's needed. (I am not talking about duplicating code that should have been put into a single method.)

I usually never even type variable names - either copy/pasting them or using IDE autocompletion. If I need some DAO method - I copy a similar one and change what's needed (even if 90% will be changed). It may look like extreme laziness or lack of knowledge to some, but I almost never have to deal with problems caused by misspelling something trivial, and those are usually tough to catch (if not detected at the compile level).

Whenever I step away from my copy-pasting rule and start typing stuff I always misspell something (it's just statistics, nobody can write perfect text off the bat) and then spend more time trying to figure out where.

serg
If you think getting code to compile is a big problem... (shakes head)
Integer Poet
+3  A: 

There is only one design pattern: encapsulation

For example:

  • Factory method: you've encapsulated object creation
  • Strategy: you've encapsulated different changeable algorithms
  • Iterator: you've encapsulated the way to sequentially access the elements in a collection.
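
For example, here is a minimal C++ sketch of Strategy read this way (the compression classes are hypothetical): the thing being encapsulated is the interchangeable algorithm.

#include <iostream>

// The changeable algorithm is encapsulated behind one interface.
struct CompressionStrategy {
    virtual ~CompressionStrategy() {}
    virtual void compress(const char* data) const = 0;
};

struct ZipCompression : CompressionStrategy {
    void compress(const char* data) const { std::cout << "zip: " << data << "\n"; }
};

struct RleCompression : CompressionStrategy {
    void compress(const char* data) const { std::cout << "rle: " << data << "\n"; }
};

// The caller is written once and never needs to know which algorithm it got.
void archive(const CompressionStrategy& strategy, const char* data) {
    strategy.compress(data);
}
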
flybywire
wrong. the only design pattern is "take out duplicate code and put it in an external function/method/object"
hasen j
+7  A: 

Modern C++ is a beautiful language.

There, I said it. A lot of people really hate C++, but honestly, I find modern C++ with STL/Boost style programming to be a very expressive, elegant, and incredibly productive language most of the time.

I think most people who hate C++ are basing that on bad experiences with OO. C++ doesn't do OO very well because polymorphism often depends on heap-allocated objects, and C++ doesn't have automatic garbage collection.

But C++ really shines when it comes to generic libraries and functional-programming techniques which make it possible to build incredibly large, highly-maintainable systems. A lot of people say C++ tries to do everything, but ends up doing nothing very well. I'd probably agree that it doesn't do OO as well as other languages, but it does generic programming and functional programming better than any other mainstream C-based language. (C++0x will only further underscore this truth.)

I also appreciate how C++ lets me get low-level if necessary, and provides full access to the operating system.

Plus RAII. Seriously. I really miss destructors when I program in other C-based languages. (And no, garbage collection does not make destructors useless.)
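
As a small taste of the generic/functional style I mean (a sketch, nothing more): the loop disappears into a named algorithm, and the intent sits on one line.

#include <numeric>
#include <vector>

// Sum of squares with no hand-written loop: accumulate carries the
// state, the lambda (C++0x) carries the intent.
double sumOfSquares(const std::vector<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0,
                           [](double acc, double x) { return acc + x * x; });
}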

Charles Salvia
I really dislike the C++ compilers. They have terrible error messages.
thesmart
"any mainstream C-based language" would include C# and Scala, both of which are now quite good for functional programming. You should look at them again if you haven't tried the latest versions yet.
finnw
+10  A: 

Making software configurable is a bad idea.

Configurable software allows the end-user (or admin etc) to choose too many options, which may not all have been tested together (or rather, if there are more than a very small number, I can guarantee will not have been tested).

So I think software which has its configuration hard-coded (but not necessarily shunning constants etc) to JUST WORK is a good idea. Run with sensible defaults, and DO NOT ALLOW THEM TO BE CHANGED.

A good example of this is the number of configuration options on Google Chrome - however, this is probably still too many :)

MarkR
Agreed. Make a design decision for the user and stick to it.
thesmart
A: 

It Works, It's compatible, It'll be released soon

Yoann. B
-1, not sure what you're getting at here. Add more detail.
JL
Come on, JL, do you work as a programmer? It's so obvious what he is saying.
sims
+5  A: 

Ternary operators absolutely suck. They are the epitome of lazy-ass programming.

user->isLoggedIn() ? user->update() : user->askLogin();

This is so easy to screw up. A little change in revision #2:

user->isLoggedIn() && user->isNotNew(time()) ? user->update() : user->askLogin();

Oh yeah, just one more "little change."

user->isLoggedIn() && user->isNotNew(time()) ? user->update() 
    : user->noCredentials() ? user->askSignup()
        : user->askLogin();

Oh crap, what about that OTHER case?

user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned() ? user->update() 
    : user->noCredentials() || !user->isBanned() ? user->askSignup()
        : user->askLogin();

NO NO NO NO. Just save us the code change. Stop being freaking lazy:

if (user->isLoggedIn()) {
    user->update();
} else {
    user->askLogin();
}

Because doing it right the first time will save us all from having to convert your crap ternaries AGAIN and AGAIN:

if (user->isLoggedIn() && user->isNotNew(time()) && !user->isBanned()) {
    user->update();
} else {
    if (user->noCredentials() || !user->isBanned()) {
        user->askSignup();
    } else {
        user->askLogin();
    }
}
thesmart
That'd be the issue of using the wrong paradigm for what you're trying to do. If you want to branch, use a goddamn `if`. If you want to print slightly different text (say "Mr." or "Mrs." in a greeting), use the conditional operator.
Alex Brault
Use them for assignment, and not for branching. It's a good replacement for `if(c) { x=a; } else { x=b; }`, which becomes `x=c?a:b;` - but not for anything else!
frunsi
Nope. I'm sorry. I agree completely with the OP in that the ternary operator sucks, because you are giving some nameless/faceless dev out there the opportunity to make code much harder to read. And that's on top of the fact that, as he says, it's a duplicated language feature anyway. It's okay to be impressed by this sort of stuff when you're in college. As a professional, you're part of a greater development machine that relies on readability.
Nick Wiggill
A: 

The C++ STL library is so general purpose that it is optimal for no one.

dicroce
+7  A: 

JavaScript is a "messy" language but god help me I love it.

Avi Y
I definitely have a love/hate relationship with JavaScript.
Neil N
+1, I know exactly what you mean. It can be fun to use. One thing I hate is the memory leaks.
JL
Aesthetically, it's a pile of dog-spew. Can't deny it gets the job done, though.
Nick Wiggill
+5  A: 

Open Source software costs more in the long run

For regular Line of Business companies, Open Source looks free but has hidden costs.

When you take into account inconsistency of quality, variable usability and UI/UX, difficulties of interoperability and standards, increased configuration, associated increased need for training and support, the Total Cost of Ownership for Open Source is much higher than commercial offerings.

Tech-savvy programmer-types take the liberation of Open Source and run with it; they 'get it' and can adopt it and customise it to suit their purposes. On the other hand, businesses that are primarily non-technical, but need software to run their offices, networks and websites, are running the risk of a world of pain for themselves and heavy costs in terms of lost time, productivity and (eventually) support fees and/or the cost of abandoning the experiment altogether.

Gordon Mackie JoanMiro
A lot of the cost saving from OSS comes from being able to fix bugs in 3rd party tools. It's not just about license fees.
finnw
You've undermined your claim to controversy here simply by pointing out that not every tool is best for every job. You need less reason and more dogma. Instead, tell us SQL Server is industrial-strength and MySQL is just a toy. Stack Overflow needs more page views and you are not helping.
Integer Poet
WTF?? Who mentioned SQL databases? Page views? This comment is baffling.
Gordon Mackie JoanMiro
+4  A: 

Size matters! Embellish your code so it looks bigger.

fastcodejava
Ha ha! So true, fastcodejava!
javaguy
+1  A: 

Apparently it is controversial that IDEs should check whether they can link the code they create before wasting time compiling.

But I'm of the opinion that I shouldn't compile a zillion lines of code only to realize that Windows has a lock on the file I'm trying to create because another programmer has some weird threading issue that requires him to Delay Unloading DLLs for 3 minutes after they aren't supposed to be used.

Peter Turner
You're asking for a language with knowledge of platforms and implementation details. They don't work that way.
Integer Poet
No, I'm asking for an IDE with knowledge of platforms and implementation details. But thanks for the controversy! I didn't realize this question was finally deleted.
Peter Turner
+10  A: 

Microsoft should stop supporting anything dealing with Visual Basic.

Baddie
I've been saying that since Visual Basic 1.0.
MetalMikester
Microsoft should stop. Period.
just somebody
Fully agree. Why have VB.net? Any VB.net developer can convert to C#. I know because I used to be a VB6 developer.
JL
Is that even controversial at all? :D
Nick Wiggill
+7  A: 

Use unit tests as a last resort to verify code.

If I want to verify that code is correct, I prefer the following techniques over unit testing:

  1. Type checking
  2. Assertions
  3. Trivially verifiable code

For everything else, there's unit tests.
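
As a rough illustration of points 2 and 3 (a sketch in C++): a precondition written as an assertion is checked on every debug run, and the body below it is simple enough to verify by reading.

#include <cassert>
#include <cstddef>
#include <vector>

// The assertion states the contract; the rest is trivially verifiable.
double average(const std::vector<double>& xs) {
    assert(!xs.empty() && "average() requires at least one element");
    double sum = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) sum += xs[i];
    return sum / xs.size();
}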

cdiggins
0. Re-read your code. Seems trivial, but often can be the best at finding errors.
Matt Hamsmith
Enthusiasts of unit tests too often position their arguments as defenses for weak typing and late binding as if a disciplined engineer chooses exactly one approach to reliability.
Integer Poet
I'm very ambivalent about unit tests. My personal opinion is that zealots who want 100% code coverage for their unit tests are wasting a lot of time and money. But they're not completely useless either, so I guess I agree with the statement.
Seventh Element
I've pretty much been forced to this conclusion by a very tight schedule. I agree that unit tests are not for everything. But having said that, the more critical a piece of code is, the wiser you'd be to write tests for it regardless.
Nick Wiggill
+4  A: 

Procedural programming is fun. OOP is boring.

Pedro Ladaria
pythonic programming is fun (procedural + functional)
hasen j
-1, totally disagree. There is massive satisfaction of finishing a class that cleans up your code so much and adds so much more power to your project.
Sam152
This is controversial ;)
Ikke
@Sam152, you should vote +1. You agree that it's a "controversial programming opinion?"
Pedro Ladaria
That is definitely controversial. I much rather program in C++ rather than C.
Noctis Skytower
+1. I went through a phase of "I hate OOP" a few years ago, although I'm mostly over it now.
finnw
+4  A: 

Java is the COBOL of our generation.

Everyone learns to code it. There's code for it running in big companies that will try to keep it running for decades. Everyone comes to despise it compared to all the other choices out there, but they're forced to use it anyway because it pays the bills.

Kelly French
COBOL is still the COBOL of our generation. Maybe Java will be the COBOL three generations from now... But then, so will C#.
Kobi
I would say PHP is the COBOL of our generation. It has an important property in common - it was designed to be coded by people who were not full-time coders. Unlike Java and C# which borrow heavily from C++.
finnw
+1  A: 

"XML and HTML are the "assembly language" of the web. Why still hack it?

It seems fairly obvious that very few developers these days learn/code in assembly language for reason that it's primitive and takes you far away from the problem you have to solve at high-level. So we invented high-level languages to encapsulates those level entities to boost our productivity thru the language elements that we can relate to more at higher level. Just like we can do more with a computer than just its constituent motherboard or CPU.

With the Web, it seems to me developers still are reading/writing and hacking HTML,CSS,XMl,schemas, etc.

I see these as the equivalent of "assembly language" of the Web or its substrates. Should we be done with it?. Sure, we need to hack it sometimes when things go wrong. But surely, that's an exception. I assert that we are replacing lower-level assembly language at machine level with its equivalent at Web-level.

bk
That's like saying, Python is the assembly of Django, don't use it!
hasen j
You want to invent a new language that is at a higher level than XHTML?
Noctis Skytower
A: 

The human brain is the master key to all locks.

There is nothing in this world that moves faster than your brain. Trust me, this is not philosophical but practical. As far as opinions are concerned, here are mine:

1) Never go outside the boundaries specified by the programming language. A simple example would be pointers in C and C++: don't misuse them, or you are likely to get the DAMN SEGMENTATION FAULT.

2) Always follow the coding standards. Yes, what you are reading is correct: coding standards do a lot for your program. After all, your program is written to be executed by a machine, but to be understood by some other brain :)

Sachin Chourasiya
+7  A: 

Assembler is not dead

In my job (copy protection systems), assembler programming is essential. I have worked with many HLL copy protection systems, and only assembler gives you the real power to use all the possibilities hidden in the code (like code mutation and low-level stuff).

Also, many code optimizations are possible only with assembler programming. Look at the sources of any video codec: the sources are written in assembler and optimized to use MMX/SSE/SSE2 opcodes. Many game engines use assembler-optimized routines, and even the Windows kernel has SSE-optimized routines:

NTDLL.RtlMoveMemory

.text:7C902CD8                 push    ebp
.text:7C902CD9                 mov     ebp, esp
.text:7C902CDB                 push    esi
.text:7C902CDC                 push    edi
.text:7C902CDD                 push    ebx
.text:7C902CDE                 mov     esi, [ebp+0Ch]
.text:7C902CE1                 mov     edi, [ebp+8]
.text:7C902CE4                 mov     ecx, [ebp+10h]
.text:7C902CE7                 mov     eax, [esi]
.text:7C902CE9                 cld
.text:7C902CEA                 mov     edx, ecx
.text:7C902CEC                 and     ecx, 3Fh
.text:7C902CEF                 shr     edx, 6
.text:7C902CF2                 jz      loc_7C902EF2
.text:7C902CF8                 dec     edx
.text:7C902CF9                 jz      loc_7C902E77
.text:7C902CFF                 prefetchnta byte ptr [esi-80h]
.text:7C902D03                 dec     edx
.text:7C902D04                 jz      loc_7C902E03
.text:7C902D0A                 prefetchnta byte ptr [esi-40h]
.text:7C902D0E                 dec     edx
.text:7C902D0F                 jz      short loc_7C902D8F
.text:7C902D11
.text:7C902D11 loc_7C902D11:                           ; CODE XREF: .text:7C902D8Dj
.text:7C902D11                 prefetchnta byte ptr [esi+100h]
.text:7C902D18                 mov     eax, [esi]
.text:7C902D1A                 mov     ebx, [esi+4]
.text:7C902D1D                 movnti  [edi], eax
.text:7C902D20                 movnti  [edi+4], ebx
.text:7C902D24                 mov     eax, [esi+8]
.text:7C902D27                 mov     ebx, [esi+0Ch]
.text:7C902D2A                 movnti  [edi+8], eax
.text:7C902D2E                 movnti  [edi+0Ch], ebx
.text:7C902D32                 mov     eax, [esi+10h]
.text:7C902D35                 mov     ebx, [esi+14h]
.text:7C902D38                 movnti  [edi+10h], eax

So the next time you hear that assembler is dead, think about the last movie you watched or the game you've played (and its copy protection, heh).

Bartosz Wójcik
Assembler is also essential for breaking copy protections :D
Pedro Ladaria
+8  A: 

Programming is neither art nor science. It is an engineering discipline.

It's not art: programming requires creativity for sure. That doesn't make it art. Code is designed and written to work properly, not to be emotionally moving. Except for whitespace, changing code for aesthetic reasons breaks your code. While code can be beautiful, art is not the primary purpose.

It's not science: science and technology are inseparable, but programming is in the technology category. Programming is not systematic study and observation; it is design and implementation.

It's an engineering discipline: programmers design and build things. Good programmers design for function. They understand the trade-offs of different implementation options and choose the one that suits the problem they are solving.


I'm sure there are those out there who would love to parse words, stretching the definitions of art and science to include programming or constraining engineering to mechanical machines or hardware only. Check the dictionary. Also "The Art of Computer Programming" is a different usage of art that means a skill or craft, as in "the art of conversation." The product of programming is not art.

Paul
+1  A: 

Neither Visual Basic nor C# trumps the other. They are pretty much the same, save for some syntax and formatting.

Brad
Now... They weren't always so feature similar. So you have to fight what many of us learned once upon a time.
Mufasa
+6  A: 

Not really programming, but I can't stand CSS-only layouts done just for the sake of it. They're counterproductive and frustrating, and they make maintenance a nightmare of floats and margins where changing the position of a single element can throw the entire page out of whack.

It's definitely not a popular opinion, but I'm done with my table layout in 20 minutes while the CSS gurus spend hours tweaking line-height, margins, padding and floats just to do something as basic as vertically centering a paragraph.

Rob
Whoever spends hours writing `margin: 0 auto;` is one hell of a bad css-designer... Still, tables are tables and tables store data. Not design.
ApoY2k
That is why there are 3 different ways to use styles. For re-usability, and scope of need.
awright18
+1  A: 

I think we should move away from C. It's too old! But the old dog is still barking loudly!!

Ganesh Gopalasubramanian
Move away towards what...?
ApoY2k
Towards Ruby :)
Dmytrii Nagirniak
It is probably still one of the best languages to write an operating system in assuming (1) you are starting from scratch, (2) you want it to be fast but do not have time to write it in assembly, and (3) want to work on maintaining and editing operating systems written in C.
Noctis Skytower
+2  A: 

Macros, Preprocessor instructions and Annotations are evil.

One syntax and language per file please!

// does not apply to Makefiles, or to editor macros that insert real code.

chris
Everyone agrees that the pre-processor is evil... except the people who would never be found on Stack Overflow. They love it.
Integer Poet
How about this: C is evil. And C++ is even more evil. However, C is a necessary evil, and C++ an unnecessary one.
Warren P
+5  A: 

Writing extensive specifications is futile.
It's pretty difficult to write correct programs, but compilers, debuggers, unit tests, testers etc. make it possible to detect and eliminate most errors. On the other hand, when you write specs with a level of detail comparable to a program's (i.e. pseudocode, UML), you are mostly on your own. Consider yourself lucky if you have a tool that helps you get the syntax right.

Extensive specifications are most likely bug-riddled.
The chance that the writer got it right on the first try is about the same as the chance that a similarly large program is bug-free without ever being tested. Peer reviews eliminate some bugs, just like code reviews do.

ammoQ
This is controversial only to the extent that you expect a specification to resemble the finished product. If instead the purpose is to make you think through the issues involved, then specifications work great. This is especially true if the finished product doesn't suck, doesn't resemble the spec, and you look back and realize you were able to change your mind effectively because you had gone through the exercise of writing the spec. Note: this only works if you have only smart people on your team.
Integer Poet
+16  A: 

Boolean variables should be used only for Boolean logic. In all other cases, use enumerations.


Boolean variables are used to store data that can only take on two possible values. The problems that arise from using them are frequently overlooked:

  • Programmers often cannot correctly identify when some piece of data should only have two possible values
  • The people who instruct programmers what to do, such as program managers or whoever writes the specs that programmers follow, often cannot correctly identify this either
  • Even when a piece of data is correctly identified as having only two possible states, that guarantee may not hold in the future.

In these cases, using Boolean variables leads to confusing code that can often be prevented by using enumerations.

Example

Say a programmer is writing software for a car dealership that sells only cars and trucks. The programmer develops a thorough model of the business requirements for his software. Knowing that the only types of vehicles sold are cars and trucks, he correctly identifies that he can use a boolean variable inside a Vehicle class to indicate whether the vehicle is a car or a truck.

class Vehicle {
 bool isTruck;
 ...
}

The software is written so when isTruck is true a vehicle is a truck, and when isTruck is false the vehicle is a car. This is a simple check performed many times throughout the code.

Everything works without trouble, until one day when the car dealership buys another dealership that sells motorcycles as well. The programmer has to update the software so that it works correctly considering the dealership's business has changed. It now needs to identify whether a vehicle is a car, truck, or motorcycle, three possible states.

How should the programmer implement this? isTruck is a boolean variable, so it can hold only two states. He could change it from a boolean to some other type that allows many states, but this would break existing logic and possibly not be backwards compatible. The simplest solution from the programmer's point of view is to add a new variable to represent whether the vehicle is a motorcycle.

class Vehicle {
 bool isTruck;
 bool isMotorcycle;
 ...
}

The code is changed so that when isTruck is true a vehicle is a truck, when isMotorcycle is true a vehicle is a motorcycle, and when they're both false a vehicle is a car.

Problems

There are two big problems with this solution:

  • The programmer wants to express the type of the vehicle, which is one idea, but the solution uses two variables to do so. Someone unfamiliar with the code will have a harder time understanding the semantics of these variables than if the programmer had used just one variable that specifies the type entirely.
  • Solving this motorcycle problem by adding a new boolean doesn't make it any easier for the programmer to deal with such situations that happen in the future. If the dealership starts selling buses, the programmer will have to repeat all these steps over again by adding yet another boolean.

It's not the developer's fault that the business requirements of his software changed, requiring him to revise existing code. But using boolean variables in the first place made his code less flexible and harder to modify to satisfy unknown future requirements (less "future-proof"). When he implemented the changes in the quickest way, the code became harder to read. Using a boolean variable was ultimately a premature optimization.

Solution

Using an enumeration in the first place would have prevented these problems.

enum EVehicleType { Truck, Car }

class Vehicle {
 EVehicleType type;
 ...
}

To accommodate motorcycles in this case, all the programmer has to do is add Motorcycle to EVehicleType, and add new logic to handle the motorcycle cases. No new variables need to be added. Existing logic shouldn't be disrupted. And someone who's unfamiliar with the code can easily understand how the type of the vehicle is stored.

Cliff Notes

Don't use a type that can only ever store two different states unless you're absolutely certain two states will always be enough. Use an enumeration if there are any possible conditions in which more than two states will be required in the future, even if a boolean would satisfy existing requirements.

Chris Stevens
I guess this is not very controversial.
Ikke
The argument isn't controversial per se, but try writing your code like that and see if your team objects. I'd bet 9/10 teams would try to argue you back to booleans.
David
Of course, OOP guys in the corner would mutter something along the lines of "class Truck extends/implements Vehicle, class Car extends/implements Vehicle..."
Ivan Vrtarić
I worked on a project that used a collection of booleans to try to distinguish among models of printer. It was ... execrable. Nobody would want to do that after having seen it in action. But here's some controversy for you: In languages which allow it, it's perfectly reasonable to use a bool for one of three values: true, false, and don't know.
Integer Poet
Thanks. Never thought about that. I guess I should give enums a better look.
Sylverdrag
Hmm. I could learn from this.
Nick Wiggill
+2  A: 

One class per file

Who cares? I much prefer entire programs contained in one file rather than a million different files.

Ravi
One namespace per file is better.
Behrooz
One file per computer! ANARCHY!!!
Tor Valamo
1 FILE PER CLOUD, PER PLANET!
JL
A: 

Size matters.    

surdipkrishna
+20  A: 

It's fine if you don't know. But you're fired if you can't even google it.

The Internet is a tool. It doesn't make you stupider if you're learning from it.

Omu
This should be a rule in any position that has anything to do with using a computer. Not just restricted to programmers.
awright18
+5  A: 

Don't be shy, throw an exception. Exceptions are a perfectly valid way to signal failure, and are much clearer than any return-code system. "Exceptional" has nothing to do with how often this can happen, and everything to do with what the class considers normal execution conditions. Throwing an exception when a division by zero occurs is just fine, regardless of how often the case can happen. If the problem is likely, guard your code so that the method doesn't get called with incorrect arguments.
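
A minimal C++ sketch of both halves of that advice (the function names are invented for illustration): the callee throws on a contract violation, while a caller that expects the case guards before calling:

#include <stdexcept>

// Signal failure with an exception rather than a sentinel return code.
double divide(double numerator, double denominator)
{
    if (denominator == 0.0)
        throw std::invalid_argument("divide: denominator is zero");
    return numerator / denominator;
}

// A caller that considers zero likely guards first, as suggested above.
double safe_ratio(double a, double b)
{
    return (b == 0.0) ? 0.0 : divide(a, b);
}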

Mathias
+8  A: 

Writing it yourself can be a valid option.

In my experience there seems to be too much enthusiasm when it comes to using 3rd-party code to solve a problem. The option of solving the problem yourself usually does not cross people's minds. Don't get me wrong, though: I am not advocating never, ever using libraries. What I am saying is: among the possible frameworks and modules you are considering, add the option of implementing the solution yourself.

But why would you code your own version?

  • Don't reinvent the wheel. But if you only need a piece of wood, do you really need a whole cart wheel? In other words, do you really need OpenCV just to flip an image along an axis? (See the sketch after this list.)
  • Compromise. You usually have to make compromises in your design in order to use a specific library. Is the number of changes you have to incorporate worth the functionality you will receive?
  • Learning. You have to learn to use these new frameworks and modules. How long will it take you? Is it worth your while? Will it take longer to learn than to implement?
  • Cost. Not everything is free - and that includes your time. Consider how much time the software you are about to use will save you, and whether it is worth its price. (Also remember that you have to invest time to learn it.)
  • You are a programmer, not ... a person who just clicks things together (sorry, couldn't think of anything witty).

The last point is debatable.
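
As for the first bullet, a minimal C++ sketch (the image type and function names are invented for illustration): flipping an image really can be a couple of standard-library calls:

#include <algorithm>
#include <vector>

// A toy image as rows of pixels: "a piece of wood, not the whole cart wheel".
using Image = std::vector<std::vector<unsigned char>>;

void flip_vertical(Image& img)    // mirror top-to-bottom: reverse row order
{
    std::reverse(img.begin(), img.end());
}

void flip_horizontal(Image& img)  // mirror left-to-right: reverse each row
{
    for (auto& row : img)
        std::reverse(row.begin(), row.end());
}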

Stefan Schmidt
+5  A: 

Using regexs to parse HTML is, in many cases, fine

Every time someone posts a question on Stack Overflow asking how to achieve some HTML manipulation with a regex, the first answer is "Regex is an insufficient tool for parsing HTML, so don't do it". If the questioner were trying to build a web browser, this would be a helpful answer. However, usually the questioner wants to do something like add a rel attribute to all the links to a certain domain, usually in a case where certain assumptions can be made about the style of the incoming markup, which is entirely reasonable to do with a regex.
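
For instance, a hedged C++ sketch using std::regex, with example.com standing in for the domain in question; it assumes well-formed anchors with double-quoted hrefs and no existing rel attribute:

#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string html =
        "<a href=\"http://example.com/a\">one</a>"
        "<a href=\"http://other.org/b\">two</a>";

    // Add rel="nofollow" only to links pointing at example.com.
    std::regex link(R"re((<a href="https?://example\.com[^"]*"))re");
    std::cout << std::regex_replace(html, link, "$1 rel=\"nofollow\"")
              << "\n";
}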

Nick Higgs
+2  A: 

Storing XML in a CLOB in a relational database is often a horrible cop-out. Not only is it hideous in terms of performance, it also shifts responsibility for correctly managing the structure of the data away from the database architect and onto the application programmer.

Tim
+11  A: 

Software is like toilet paper. The less you spend on it, the bigger of a pain in the ass it is.

That is to say, outsourcing is rarely a good idea.

I've always figured this to be true, but I never really knew the extent of it until recently. I have been "maintaining" (read: "fixing") some off-shored code recently, and it is a huge mess. It is easily costing our company more than we saved by not developing it in-house.

People outside your business will inherently know less about your business model, and therefore will not do as good a job programming any system that works within your business. Also, they know they won't have to support it, so there's no incentive to do anything other than half-ass it.

iandisme
Outsourcing is rarely anything other than paying Indians to learn how to program. They're not quite there yet, but when they are, we'll have paid for them to learn their skills.
Seventh Element
+3  A: 

Tabs vs Spaces

JaredCacurak
+8  A: 

Intranet frameworks like SharePoint make me think the whole corporate world is one giant ostrich with its head in the sand

I'm not only talking about MOSS here; I've worked with some other CORPORATE INTRANET products, and absolutely none of them are great, but SharePoint (MOSS) is by far the worst.

  • Most of these systems don't easily bridge the gap between intranet and Internet. So as a remote worker you're forced to VPN in, and external customers just don't have the luxury of getting hold of your internal information first-hand. Sure, this can be fixed - at a price $$$.
  • The search capabilities are always pathetic. A lot of the time, other departments simply don't know what information is out there.
  • Information fragments; people start boycotting workflows or reverting to email.
  • SharePoint development is the most painful form of development on the planet. Nothing sucks like SharePoint. I've seen a few developers contemplating quitting IT after working for over a year with MOSS.
  • No matter how much the developers hate MOSS, no matter how long the most basic of projects takes to roll out, no matter how novice the results look, and no matter how unsearchable and fragmented the content is:

EVERYONE STILL CONTINUES TO USE AND PURCHASE SHAREPOINT, AND MANAGERS STILL TRY VERY HARD TO PRETEND IT'S NOT SATAN'S SPAWN.

Microformats

Using CSS classes originally designed for visual layout to carry both visual and contextual data is a hack, loaded with ambiguity. I'm not saying the functionality should not exist, but fix the damn base language. HTML wasn't hacked to produce XML - instead, the XML language emerged. Now we have these eager script kiddies hacking HTML and CSS to do something they weren't designed to do. That's still fine, but I wish they would keep these things to themselves and not make a standard out of them. Just to sum up - butchery!

JL
Your programming opinion doesn't look very controversial to me. In fact I can't even see what your programming opinion is.
Windows programmer
I agree with your attacks on SharePoint. In my dealings with the beast, there is a lot of confusion about what it can and should do. I guess that comes from the office world, where people abuse Word, Excel, and Access to do ungodly things that should be handled by programmers creating real applications. The running joke around SharePoint's abilities at my work is that it can "wash your car", or "mow your lawn", or that it has infinite superpowers.
awright18
I agree that this is not controversial. As a MOSS dev I can only conclude that SP was written by Microsoft's best team of monkeys with down syndrome.
Repo Man
What is controversial is that MOSS is considered by most business users to be a perfect all-round intranet solution, but honestly it's a pile of dog crap under the hood.
JL
+1  A: 

Associative arrays / hash maps / hash tables (plus whatever it's called in your favourite language) are the best thing since sliced bread!

Sure, they provide fast lookup from key to value. But they also make it easy to construct structured data on the fly. In scripting languages it's often the only (or at least the most used) way to represent structured data.

IMHO they were a very important factor in the success of many scripting languages.

And even in C++, std::map and std::tr1::unordered_map helped me write code faster.
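
For instance, a minimal C++ sketch (the keys are invented) of building nested structure on the fly with std::map:

#include <iostream>
#include <map>
#include <string>

int main()
{
    // operator[] creates the nested maps as they are first mentioned,
    // much like building a structure on the fly in a scripting language.
    std::map<std::string, std::map<std::string, std::string>> config;
    config["server"]["host"] = "localhost";
    config["server"]["port"] = "8080";
    config["logging"]["level"] = "debug";

    std::cout << config["server"]["port"] << "\n";  // prints 8080
}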

frunsi
+1  A: 

C++ is the future killer language...

... of dynamic languages.

Nobody owns it. It has a growing set of features like compile-time (meta-)programming and type inference, callbacks without the overhead of function calls, and it doesn't enforce a single approach (it's multi-paradigm). POSIX and ECMAScript regular expressions. Multiple return values. You can have named arguments. Etc., etc.

Things move really slowly in programming. It took JavaScript 10 years to get off the ground (mostly because of performance), and most of the people who program in it still don't get it (classes in JS? C'mon!). I'd say C++ will really start shining 15-20 years from now. That seems to me like about the right amount of time for C++ (the language as well as the compiler vendors) and a critical mass of programmers who today write in dynamic languages to converge.

C++ needs to become more programmer-friendly (compiler errors generated from templates, and compile times in the presence of same), and programmers need to realize that static typing is a boon (that's already in progress; see the other answer here which asserts that good code written in a dynamically typed language is written as if the language were statically typed).
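
A small sketch of two of the features named above, compile-time programming and type inference, in standard C++11 (nothing here is from any particular project):

#include <map>
#include <string>

// Compile-time (meta-)programming: evaluated entirely by the compiler.
constexpr long long factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}
static_assert(factorial(5) == 120, "computed at compile time");

int main()
{
    // Type inference: 'auto' spares you the iterator and pair type names,
    // with no dynamic typing required.
    std::map<std::string, int> counts{{"a", 1}, {"b", 2}};
    for (const auto& entry : counts)
        static_cast<void>(entry);  // just iterating; the types are inferred
}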

just somebody
+6  A: 

Relational database systems will be the best thing since sliced bread...

... when we (hopefully) get them, that is. SQL databases suck so hard it's not funny.

What I find amusing (if sad) is certified DBAs who think an SQL database system is a relational one. That speaks volumes about the quality of said certification.

Confused? Read C. J. Date's books.

edit

Why is it called Relational and what does that word mean?

These days, a programmer (or a certified DBA, wink) with a strong (heck, any) mathematical background is an exception rather than the common case (I'm an instance of the common case as well). SQL with its tables, columns and rows, as well as the joke called Entity/Relationship Modelling, just adds insult to injury. No wonder the misconception that relational database systems are called that because of some relationships (foreign keys?) between entities (tables) is so pervasive.

In fact, Relational derives from the mathematical concept of relations, and as such is intimately related to set theory and functions (in the mathematical, not any programming, sense).

From http://en.wikipedia.org/wiki/Finitary_relation:

In mathematics (more specifically, in set theory and logic), a relation is a property that assigns truth values to combinations (k-tuples) of k individuals. Typically, the property describes a possible connection between the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to whether the property does or does not hold.

An example of a ternary relation (i.e., between three individuals) is: "X was-introduced-to Y by Z", where (X,Y,Z) is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

Wikipedia makes it perfectly clear: in an SQL DBMS, such a ternary relation would be a "table", not a "foreign key" (I'm taking the liberty of renaming the "columns" of the relation: X = who, Y = to, Z = by):

CREATE TABLE introduction (
  who INDIVIDUAL NOT NULL
, to INDIVIDUAL NOT NULL
, by INDIVIDUAL NOT NULL
, PRIMARY KEY (who, to, by)
);

Also, it would contain (among others, possibly), this "row":

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Beatrice Wood'
, 'Henri-Pierre Roché'
, 'Marcel Duchamp'
);

but not this one:

INSERT INTO introduction (
  who
, to
, by
) VALUES (
  'Karl Marx'
, 'Friedrich Engels'
, 'Queen Victoria'
);

Relational Database Dictionary:

relation (mathematics) Given sets s1, s2, ..., sn, not necessarily distinct, r is a relation on those sets if and only if it's a set of n-tuples each of which has its first element from s1, its second element from s2, and so on. (Equivalently, r is a subset of the Cartesian product s1 x s2 x ... x sn.)

Set si is the ith domain of r (i = 1, ..., n). Note: There are several important logical differences between relations in mathematics and their relational model counterparts. Here are some of them:

  • Mathematical relations have a left-to-right ordering to their attributes.
  • Actually, mathematical relations have, at best, only a very rudimentary concept of attributes anyway. Certainly their attributes aren't named, other than by their ordinal position.
  • As a consequence, mathematical relations don't really have either a heading or a type in the relational model sense.
  • Mathematical relations are usually either binary or, just occasionally, unary. By contrast, relations in the relational model are of degree n, where n can be any nonnegative integer.
  • Relational operators such as JOIN, EXTEND, and the rest were first defined in the context of the relational model specifically; the mathematical theory of relations includes few such operators.

And so on (the foregoing isn't meant to be an exhaustive list).

just somebody
Would you agree that today's RDBMSs *support* the relational model, however rarely the schema designers actually implement it?
Xepoch
what are today's RDBMSs?
just somebody
yes. SQL database systems are just that: SQL database systems, not relational database systems.
just somebody
Do you mean Object Databases when you say relational databases? That is, db4o et al.? Relational Database system in my opinion are systems where you model relations between entities, also known as Foreign Keys and Update/Delete Cascades. Sadly, most of the time these entities are flat 2-Dimensional tables in RDBMS...
Michael Stum
@Michael Stum: no, see expanded answer, and excuse me if it's not very coherent. It's well past midnight here and I'm almost done with second bottle of wine.
just somebody
+2  A: 

Development is 80% about the design and 20% about coding

I believe that developers should spend 80% of their time designing, at a fine level of detail, what they are going to build, and only 20% actually coding what they've designed. This will produce code with near-zero bugs and save a lot on the test-fix-retest cycle.

Getting to the metal (or the IDE) early is like premature optimization, which is known to be the root of all evil. Thoughtful upfront design (I'm not necessarily talking about an enormous design document; simple drawings on a whiteboard will work as well) will yield much better results than just coding and fixing.

Dima Malenko
+1  A: 

Simplicity Vs Optimality

I believe it's very difficult to write code that's both simple and optimal.

Salvin Francis
+1  A: 

80% of bugs are introduced in the design stage.
The other 80% are introduced in the coding stage.

(This opinion was inspired by reading Dima Malenko's answer. "Development is 80% about the design and 20% about coding", yes. "This will produce code with near zero bugs", no.)

Windows programmer
+1  A: 

Best practices aren't.

Xepoch
+4  A: 

Small code is always better - but then a complex ?: instead of if-else made me realize that sometimes larger code is more readable.
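
A hedged illustration in C++ (both functions are invented): the terse version is smaller, but arguably harder to read:

#include <string>

// Terse: one line, but the chained ?: takes a moment to parse.
std::string sign_terse(int x)
{
    return x < 0 ? "negative" : x == 0 ? "zero" : "positive";
}

// Verbose: more lines, but it reads top to bottom with no surprises.
std::string sign_verbose(int x)
{
    if (x < 0)
        return "negative";
    if (x == 0)
        return "zero";
    return "positive";
}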

Vinay Pandey
A: 

Python does everything that other programming languages do in half the dev time... and so does Google!!! Check out Unladen Swallow if you disagree.

Wait, this is a fact. Does it still qualify as an answer to this question?

orokusaki
Well, actually, Python still needs a bunch of C modules for some functionality.
Tor Valamo
Unladen Swallow is not ready for prime time except at certain places inside Google, and being "2 to 10 times" faster than interpreted Python doesn't come anywhere close to real native-code speeds for almost every workload out there that isn't web-centric. If "everything" means "the web crap I think of as programming", then yeah, Python can do that. And I love Python. But I also see that performance-just-as-fast-as-native thing as a crock. Oh, and don't forget about the global interpreter lock (GIL).
Warren P
+5  A: 

Lower camelCase is stupid and unsemantic

Using lower camelCase makes the name/identifier ("name" from this point on) look like a two-part thing. Upper CamelCase, however, gives a clear indication that all the words belong together.

Hungarian notation is different ... because the first part of the name is a type indicator, and so it has a separate meaning from the rest of the name.

Some might argue that lower camelCase should be used for functions/procedures, especially inside classes. This is popular in Java and object-oriented PHP. However, there is no reason to do that to indicate that they are class methods, because BY THE WAY THEY ARE ACCESSED it becomes more than clear that they are just that.

Some code examples:

// Java
myobj.objMethod();
// doesn't the dot and parens indicate that objMethod is a method of myobj?

# PHP
$myobj->objMethod();
# doesn't the arrow and parens indicate that objMethod is a method of $myobj?

Upper CamelCase is useful for class names, and other static names. All non-static content should be recognised by the way they are accessed, not by their name format(!)

Here's my homogeneous code example, where name behaviours are indicated by things other than their names... (also, I prefer underscores to separate words in names).

// Java
MyObj my_obj = new MyObj(); // Clearly a class, since it's upper CamelCase
my_obj.obj_method(); // Clearly a method, since it's executed
my_obj.obj_var // Clearly an attribute, since it's referenced

# PHP
$my_obj = new MyObj();
$my_obj->obj_method();
$my_obj->obj_var;
MyObj::MyStaticMethod();

# Python
MyObj = MyClass # copies the reference of the (already defined) class to a new name
my_obj = MyObj() # Clearly a class, being instantiated
my_obj.obj_method() # Clearly a method, since it's executed
my_obj.obj_var # Clearly an attribute, since it's referenced
my_obj.obj_method # Also an attribute, but holding the instance method
my_method = my_obj.obj_method # Instance method
my_method() # Same as my_obj.obj_method()
MyClassMethod = MyObj.obj_method # Attribute holding the method, taken from the class
MyClassMethod(my_obj) # Same as my_obj.obj_method()
MyClassMethod(MyObj) # Passing the class itself as 'self' - works in Python 3, TypeError in Python 2

So there goes, my completely obsubjective opinion on camelCase.

Tor Valamo
underscores suck. Except in the way I use them, which is to mark a method as "something that sucks and you definitely shouldn't use even though it's public".
Warren P
if you say so...
Tor Valamo
+4  A: 

Programmers need to talk to customers

Some programmers believe that they don't need to be the ones talking to customers. It's a sure way for your company to build something absolutely brilliant that nobody can work out the purpose of, or how it was intended to be used.

You can't expect product managers and business analysts to make all the decisions. In fact, programmers should be making 990 out of the 1000 (often small) decisions that go into creating a module or feature; otherwise the product would simply never ship! So make sure your decisions are informed. Understand your customers, work with them, watch them use your software.

If you're going to write the best code, you want people to use it. Take an interest in your user base and learn from the "dumb idiots" who are out there. Don't be afraid; they'll actually love you for it.

Vincent
+4  A: 

Zealous adherence to standards stands in the way of simplicity.

MVC is over-rated for websites. It's mostly just VC, sometimes M.

Justin Johnson
How about this: MVC is overrated, period.
Warren P