views:

90643

answers:

415

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.

+85  A: 

The world needs more GOTOs

GOTOs are religiously avoided, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.

Max
I agree. Not necessarily that we need more gotos, but that sometimes programmers go to ridiculous lengths to avoid them, such as creating bizarre constructs like do { ... break; ... } while (false); to simulate a goto while pretending not to use one.
Ferruccio
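The construct Ferruccio mentions, written out as a contrived sketch (the function and its checks are hypothetical): break acts as a forward-only goto to just past the "loop".

```cpp
// Contrived sketch of the do { ... break; ... } while (false) idiom:
// the "loop" body runs exactly once, and break jumps past the
// remaining steps, exactly as a forward goto to an end label would.
int classify(int value) {
    int status = 0;
    do {
        if (value < 0)  { status = 1; break; }  // early exit #1
        if (value > 99) { status = 2; break; }  // early exit #2
        // ... further checks would run only if nothing broke out ...
    } while (false);    // condition is always false: never loops
    return status;
}
```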
Especially when you're taught what GOTOs are for an entire semester and how to use them, then the next semester a new lecturer comes along chanting the death of the GOTO statement in a folly of unexplained and illogical rage.
Kezzer
I agree as well; one of my old lecturers would go mental if you ever thought about using them. But coding to avoid them may end up being worse than using them.
Mark Davidson
I've used GOTOs in switch statements to have logic jump all over the place, and had no problem with it (apart from the fact that I got FxCop to actually complain about the complexity of the method in question).
Dmitri Nesteruk
I have seen only one example of a good usage in the last 5 years, so make it 99.999 percent.
Paco
I've never had to use a goto for anything. Anytime when I actually thought goto might be a good idea, it was instead an indicator that things weren't flowing properly.
PhoenixRedeemer
No no no no no. So much production code is so wildly obfuscated and unclear already. You would be giving more tools to the monkeys.
Steve B.
I don't think I can come up with a single good use of GoTo in a .NET application... can you give an example of a good use of it?
BenAlabaster
Goto is very useful in native code. It lets you move all of your error handling to the end of your function and helps ensure that all necessary cleanup happens (freeing memory/resources, etc.). The pattern I like to see is to have exactly two labels in each function: Error and Cleanup.
Jesse Weigert
The explanation I've heard is that GOTOs make the stack non-deterministic. If you got to a line with a GOTO, there's no way of telling how you got there. Makes debugging much harder.
dj_segfault
As the years have gone by the need for GOTOs goes down and down as languages add constructs that remove the need for some uses. I'm down to about 1 GOTO per year now but there are times it's the right answer.
Loren Pechtel
Nice to see that this did indeed generate a great bit of controversy!
Max
I find goto's are not very readable. I despise them in SQL, so why would I use them anywhere else?
Jeremy
@Jeremy, Can you do goto in SQL? SQL is a declarative language. Which db vendor has SQL that knows a goto?
tuinstoel
@tuinstoel, MSSQL has supported it since at least 6.5. I use it a lot to begin, commit/rollback transactions in stored procedures.
Jeremy
@Jeremy, Don't you mean T-SQL instead of SQL?
tuinstoel
To my knowledge, in assembly/machine language all branching is a form of goto. What does your high-level language get compiled into? Nothing wrong with the occasional "low level style" shortcut if it is done properly.
Andy Webb
Continue = goto for loops; break = goto for blocks; switch = goto madness. Goto is obviously not a problem if used with some sense, then. If you are using an OO language and you use goto for Error and Cleanup, then you scare me. RAII and counterparts should be considered your friends.
Greg Domjan
+1 for controversy :). Oh, I know what GOTO's are, I started with BASIC like many of you. We need more GOTO's like we need DOS 8.3 filenames, plain ASCII encoding, FAT 16 filesystems, and 5 1/4 inch floppies.
postfuturist
Just found this: http://stackoverflow.com/questions/84556/whats-your-favorite-programmer-cartoon#301419
Cameron MacFarland
A good example of goto: http://stackoverflow.com/questions/416464/is-it-possible-to-exit-a-for-before-time-in-c-if-an-ending-condition-is-reache#416555
FryGuy
I used goto quite a bit in C programming - generally as a finally block. I have a file handle I need to close, memory I need to free etc, so at the point where I would return early, I just set a return code and goto the cleanup: label.
Hamish Downer
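The cleanup idiom these comments describe can be sketched in C-style code (a hypothetical function, not from any poster's real codebase): every failure path sets a return code and jumps to one label that releases whatever was acquired.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical sketch of the single-cleanup-label idiom: on any
// failure, jump to cleanup, which releases resources in reverse
// order of acquisition, so no exit path can leak.
int process_file(const char* path) {
    int rc = -1;                 // pessimistic default return code
    char* buf = nullptr;
    FILE* f = std::fopen(path, "rb");
    if (!f) goto cleanup;        // nothing else acquired yet

    buf = static_cast<char*>(std::malloc(4096));
    if (!buf) goto cleanup;      // f must still be closed below

    // ... real work on f and buf would go here ...
    rc = 0;                      // success

cleanup:
    std::free(buf);              // free(nullptr) is a harmless no-op
    if (f) std::fclose(f);
    return rc;
}
```

In C++ the RAII point made elsewhere in this thread applies: destructors make the label unnecessary, which is why this idiom survives mainly in plain C.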
Gotos are also commonly used to code up state machines. You can use an enumeration, a switch statement, and a loop to achieve the same effect. However, all that really does is mask the true structure of your control flow (and slow things down a bit).
T.E.D.
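T.E.D.'s state-machine point, as a toy sketch (hypothetical function: count the runs of 'a' in a string): each label is a state and each goto is a transition, making the control-flow graph explicit.

```cpp
#include <string>

// Hypothetical two-state machine written with gotos: we are either
// outside a run of 'a' characters or inside one; entering a run
// increments the count.
int count_a_runs(const std::string& s) {
    int runs = 0;
    std::size_t i = 0;

outside_run:
    if (i >= s.size()) return runs;
    if (s[i++] == 'a') { ++runs; goto inside_run; }
    goto outside_run;

inside_run:
    if (i >= s.size()) return runs;
    if (s[i++] == 'a') goto inside_run;
    goto outside_run;
}
```

As the comment notes, an enum plus a switch in a loop expresses the same machine; the goto version simply makes each transition a literal jump.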
Goto can be OK. My rule of thumb: if a good programmer, who doesn't often use goto, is prepared to defend it, then it's OK. And it probably is a once-a-year thing, if that. Dmitri, sounds like FxCop is right and you're wrong.
MarkJ
This thread considered harmful. Edsger Dijkstra is rolling in his grave. :)
Darcy Casselman
Agreed. I am struggling to translate numerical code from Fortran into F# because it lacks an efficient goto construct.
Jon Harrop
The problem with GOTOs is that they are like giving a little alcohol to a recovering alcoholic. Incredibly dangerous for programmers coming over from BASIC who are unstructured-happy.
Austin
People who think gotos are evil have never programmed in C, or if they have, they did it poorly. Gotos are the *best* way to do error handling in plain C, and repeating Dijkstra's quote dogmatically only demonstrates ignorance. Please read this before complaining about gotos: http://eli.thegreenplace.net/2009/04/27/using-goto-for-error-handling-in-c/
catphive
To add on to catphive's point about using gotos in C, here's a discussion about gotos by the Linux kernel developers, when one man jumps the gun on a goto and proceeds to recommend avoiding it at all costs: http://kerneltrap.org/node/553/2131
Coding With Style
Actually, the discussion of the use of goto in Linux made me change my mind about whether goto is indeed harmful in development. I've learned not to just trust what I've been taught :).
OnesimusUnbound
I needed gotos in C because it has no equivalent for Java's "continue loopname;"
luiscubal
I once got sent home from college for telling someone to use a GOTO :P
ing0
Events are the modern GOTO statement. You arrive from anywhere, anytime, with extra baggage of data that GOTOs never had.
Tom A
I've always learned not to use GOTOs because they create spaghetti code and are for the lazy (that if you do use them, something is wrong with your flow). However, JUMP statements, which are essentially GOTOs, are very useful in assembly.
Dennis
"They have a purpose and would greatly simplify production code in many places. That said, they aren't really necessary in 99% of the code you'll ever write." +2 if I could, sir, that could not have been written better.
Jake Petroules
Sorry, but I'm very, very glad to have not seen a GOTO statement since porting a QuickBasic program to C#. Give me a break statement any day.
wonea
+13  A: 

Stay away from Celko!!!!

http://www.dbdebunk.com/page/page/857309.htm

I think it makes a lot more sense to use surrogate primary keys than "natural" primary keys.



Otávio Décio
Yes! His ideas about hierarchical data structures are academically elegant and totally useless.
Charles Bretana
Well, I like Celko but I agree with you re: surrogate primary keys!
Mark Brittingham
Agree in part, surrogate keys are definitely more convenient when accessing data, but I try to identify a natural key as well and usually set it up as a constraint. So why not both?!
tekiegreg
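The "why not both" arrangement tekiegreg describes might look like this (a hypothetical table in generic SQL; note that how UNIQUE treats multiple NULLs varies by vendor):

```sql
CREATE TABLE Person (
    PersonId INTEGER NOT NULL PRIMARY KEY,  -- surrogate: stable, meaningless
    Ssn      CHAR(11) NULL,                 -- natural candidate: may not exist yet
    Name     VARCHAR(100) NOT NULL,
    CONSTRAINT UQ_Person_Ssn UNIQUE (Ssn)   -- still enforced where present
);
```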
I have no problem with natural keys being used for convenience, but primary keys should be immutable. I once had a system that used SSNs as PKs, and sometimes persons wouldn't have one (as children) and then they would. Try to change a PK; what a mess...
Otávio Décio
I can agree with the concept that once your autonumber keys get mismatched, there's no way to fix them. But the solution isn't "natural" keys; the solution is never to expose the keys to your users.
Kyralessa
I wish I could go back a few years on my current project and tell myself not to use a natural key. Now we're stuck with it and kludging around it. +1
Marcus Downing
@ocdecio: Fabian Pascal gives (in chapter 3 of his book, as cited in point 3 at the page that you link) as one of the criteria for choosing a key that of **stability** (it always exists and doesn't change). When a natural key does not possess such a property, then a surrogate key must be used, for evident reasons, to which you hint. So you actually agree with him, but think otherwise. Nothing controversial there: he was saying "don't be dogmatic, adapt general guidelines to circumstances, and, above all, **think**, use your brain instead of a dogmatic/cookbook/words-of-guru approach".
MaD70
One of the classic mistakes is to assume that just because a candidate natural key, such as SSN, is by definition unique, you will receive unique values. People may lie or make mistakes, and then you have a chance of collision when the "real person" comes along.
Andy Dent
+62  A: 

Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience, when I mention to another developer that they shouldn't be doing everything in the page-load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.

Toran Billups
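A minimal sketch of the principle (hypothetical report classes, not from the answer's codebase): instead of one page-load method that formats and delivers, each class gets exactly one reason to change.

```cpp
#include <string>

// Hypothetical SRP sketch: formatting and delivery are separate
// classes, so a layout change never touches delivery code and
// vice versa.
class ReportFormatter {          // changes only if the layout changes
public:
    std::string format(const std::string& body) const {
        return "== Report ==\n" + body + "\n";
    }
};

class ReportSender {             // changes only if delivery changes
public:
    bool send(const std::string& text) {
        last_sent = text;        // stand-in for real delivery
        return true;
    }
    std::string last_sent;       // exposed here only to keep the sketch testable
};
```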
How is that controversial?
Vinko Vrsalovic
Agree, but not very controversial?
Ed Guiness
it's controversial because the ugly mess that most people call MVC is mostly a 'do everything'
Javier
Really? I actually thought that MVC was the opposite to that.
Leonardo Herrera
Upvoted for lack of controversy!
spender
This answer seems to stir up a bit of controversy on its controversial-ness. ;P
strager
I Agree RE: MVC - really hard to limit method bloat on the controllers
Harry
Re MVC: if method bloat is the issue, then make more controllers. They shouldn't be bloated with methods; it doesn't feel right if that happens, like the controllers are trying to do more than they should.
Pop Catalin
If you don't think this is controversial, you probably don't know how far you can go with this. :-)
hstoerr
+45  A: 

If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent..

Gareth
+1 up vote, all though I think you've upset his fans ;)
Shane MacLaughlin
Yes, apparently this is a very controversial view
Gareth
BLASPHE---!! Um, I mean, yes, I quite concur.
Mike Hofer
It does appear that writing a book on C# doesn't also mean you know everything about VB ;)
ChrisA
I think you might want to bring yourself up to date on the Jon Skeet facts. Remember: "Can Jon Skeet ask a question he cannot answer? Yes. And he can answer it too." He is omnipotent!
Totophil
At first I thought you said John Skeet isn't impotent.
John D. Cook
@Totophil: Interesting comment when you consider: Jon Skeet asked this question (and he posted an answer...)
James Curran
@John D. Cook: Well, he isn't: http://moms4mom.com/users/111/jon-skeet
Brian Ortiz
+5  A: 

In my workplace, I've been trying to introduce more Agile/XP development habits. Continuous Design is the one I've felt most resistance on so far. Maybe I shouldn't have phrased it as "let's round up all of the architecture team and shoot them"... ;)

Giraffe
That's good. Along the same lines is casually insulting people in the name of "truth". That particular virus seems to have a reservoir in grad schools, like the one I attended.
Mike Dunlavey
+20  A: 

I work in ASP.NET / VB.NET a lot and find ViewState an absolute nightmare. It's enabled by default on the majority of fields and causes a large quantity of encoded data at the start of every web page. The bigger a page gets in terms of controls, the larger the ViewState data will become. Most people don't bat an eye at it, but it creates a large set of data which is usually irrelevant to the tasks being carried out on the page. You must manually disable this option on all ASP controls if they're not being used. It's either that or have custom controls for everything.

On some pages I work with, half of the page is made up of ViewState, which is a shame really as there's probably better ways of doing it.

That's just one small example I can think of in terms of language/technology opinions. It may be controversial.

By the way, you might want to edit voting on this thread, it could get quite heated by some ;)

Kezzer
Could you highlight your controversial opinion... is it "viewstate is bad" or something else?
Ed Guiness
Nope, it's "ViewState is enabled by default, when I really don't think it should be, but having it disabled by default requires custom controls"
Kezzer
I expect anyone who has worked on ASP.NET would agree with this. We have a page to search a third party system that has some LARGE drop down lists on it. The ViewState doubled the already 200Kb page size.
pipTheGeek
I don't think that experienced webforms developers will find this particularly controversial...most of us will agree with you!
Mark Brittingham
Yup, we encounter the page size doubling from time to time, and sometimes even more. The page renders slower, more bandwidth is used, and it's a nightmare to track down problems when you're viewing the rendered page source.
Kezzer
The interesting thing about this is that in the majority of cases ViewState is not needed at all!
etsuba
Don't throw so much crap on a page if Viewstate is really a problem. You probably have a design problem if you really have that much viewstate stuff on a page.
Paul Mendoza
Have you tried programming without ViewState? I can promise you that 5 minutes with JSP will make you *run* back to ViewState. Seriously, the ViewState is *NEVER* the problem, the problem is the developer using the ViewState!
Thomas Hansen
@Paul, I insanely agree! Don't throw so much crap in your page if you're having ViewState problems - go back to design!
Thomas Hansen
Try ASP.NET MVC, it's a joy to program with.
Dave
You do not have to turn ViewState off for each and every control. You can do it in the @Page directive.
xanadont
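Both switches xanadont and the answer mention use the same ASP.NET attribute; a sketch (page and control names here are hypothetical):

```aspx
<%@ Page Language="VB" EnableViewState="false" %>  <%-- page-wide switch --%>

<%-- or per control, when only some state is unneeded: --%>
<asp:DropDownList ID="Countries" runat="server" EnableViewState="false" />
```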
+749  A: 

The only "best practice" you should be using all the time is "Use Your Brain".

Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks etc onto things that don't warrant them. Just because something is new, or because someone respected has an opinion, doesn't mean it fits all :)

EDIT: Just to clarify - I don't think people should ignore best practices, valued opinions etc. Just that people shouldn't just blindly jump on something without thinking about WHY this "thing" is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?

Steven Robbins
This is exactly what I was going to write, so instead I'll just say amen!!
xando
+1, agreed completely... though I don't think this is a very controversial statement.
Kon
It doesn't sound controversial, but the amount of times I get a "WTF?" face from people when I question the use of a particular tech/method/whatever in a meeting is quite alarming :)
Steven Robbins
Yeah I gotta second that - it's "think for yourself", basically.
Dmitri Nesteruk
This is not controversial, it is true ;-).
Gamecat
Not only is it not controversial, but it's not true. I'm happy to use my brain, but there's a lot to be gained from looking at people smarter than you and saying - This smart person does this thing this way and I'd be wise to listen.
seanyboy
For example, every time I use someone else's library or implement a solution using a pattern, then I'm "jumping on a bandwagon." The most amazing thing about modern development is the fact that we can re-use the things other, smarter people have created.
seanyboy
I think you are missing the point entirely seanyboy.. the point is not to ignore any other opinions or technology, it's to evaluate them yourself and apply them where you feel they will be of value, rather than blindly implementing something because AN Other said it was the way to do it!
Steven Robbins
You obviously have to make judgement calls about the techniques and technologies you use, but this should not mean that you *never* use other peoples techniques or technologies.
seanyboy
@beepcake and @seanyboy - I think you are heatedly agreeing with each other :)
Ed Guiness
Indeed we are.. I never use the word never ;)
Steven Robbins
we probably are.
seanyboy
BANDWAGONS BEGONE! Software isn't powered by popularity. It's engineering. Every technique has pros and cons for any given purpose.
Mike Dunlavey
I think what Beepcake meant was that many people apply "best practices" unthinkingly, either because they didn't understand WHY they're good, or because they once got enthusiastically convinced by the reasoning and never stopped to think about whether it really applies universally.
Michael Borgwardt
Absolutely spot-on. +1 and sorry it can't be +10
Brent.Longborough
To really spice up the controversy, if your brand of "best practice" includes a slavish, single-minded devotion to any single programming language, platform, editor, IDE or technological trend, you are part of the problem.
dreftymac
Excellent write up!!
featureBlend
If it weren't against the rules I would create 10 more accounts to vote you up on this one. I see this all of the time and it's depressing.
nlaq
@Nelson LOL, I just wish I got rep for all these up votes :-)
Steven Robbins
ahem - unit testing... ahem - pair programming... ahem - scrum...
mson
Hell yes. 'It's "not right" to do something...' Why? 'Because it's bad practice.' But why? 'It's not the right way to do it' etc. etc. It's all right to characterise something as bad practice, but be able to back it up. Hey, good life strategy overall. :-)
kronoz
full agree. vote up! :)
ecleel
This is a good statement but it's not controversial...
TM
Some people use the newest version of a technology but only its oldest features.
In The Pink
This answer reminds me of the fable of the five monkeys: http://www.contactandcoil.com/Articles/StandardsfortheSakeofStan.html
Scott Whitlock
"...people shouldn't just blindly jump on something without thinking about WHY this 'thing' is so great, IS it applicable to what I'm doing, and WHAT benefits/drawbacks does it bring?" It is also important to apply this to your nightlife
JoeCool
"Best practices" is in fact a meaningless term most of the time, as it is primarily used as a debating cudgel to try to claim that certain practices are better, without any actual technical evidence that this is the case. 99% of the time if you ask someone where the "best practices" they're talking about are documented (or who exactly defined them to be "best practices"), they'll go red in the face and change the subject.
dirtside
The phrase I tend to use is, "Blindly following Best Practices is not a Best Practice."
Dave Markle
@Dave, great advice. I blindly follow it every day.
Daniel Daranas
Yee-haw! Cowboy coding for the budweiser-sipping win. When people disregard the opinions of others and "use their brain", that's when I sign in my resignation.
bzlm
This is so true now that Google Go has gone viral.
Barry Brown
Try to offer an improvement to best practice - but make sure you're not refuting it just because you're engaged in a battle with your own ego ('I know better than best practice because I'm a genius' / 'what do they know anyway' ... etc)
codeinthehole
seanboy, by arguing you just prove that it's a controversial opinion
Vitalik
While I agree with the generalization of David's post, I also find that those who argue against using best practices or other people's code more often than not do so because it goes beyond their comprehension or skill level. I often hear "we use the KISS principle here," and I often hear that from those against using best practices, patterns, or frameworks.
OutOFTouch
Congratulations, +600 score now!
Daniel Daranas
@seanyboy I call that using your brain also ;-)
Hannes de Jager
@bzlm: Notice that "using your brain" requires having one. I agree that some people shouldn't try to rely on something they don't have (or care to *turn on* before using).
slacker
To me it's not controversial at all. First comes brains/common sense and then design methods/patterns anything.
kudor gyozo
Couldn't agree with you more. To me, that is the difference between a good programmer and NOT a good one.
Adil Butt
+6  A: 

I firmly believe that unmanaged code isn't worth the trouble. The extra maintainability expenses associated with hunting down memory leaks which even the best programmers introduce occasionally far outweigh the performance to be gained from a language like C++. If Java, C#, etc. can't get the performance you need, buy more machines.

marcumka
I think you overestimate the amount of memory management that occurs in modern C++. C++ now uses the RAII idiom everywhere. Memory leaks aren't really much of a concern or an issue anymore.
Doug T.
If you can't track memory leaks, you're not fit to use high-powered tools.
Javier
Agree with Doug; with some simple rules of thumb, memory leaks are mostly eliminated.
Javier
... and performance is a much-misunderstood subject.
Mike Dunlavey
Sometimes raw performance matters.
David Thornley
Not to mention that not all programs run exclusively on a recent version of Windows.
David Thornley
I completely agree. Using a non-memory-managed language is like taking a shortcut through a minefield rather than going a slightly longer route on a comfortable and well paved road.
glenatron
And sometimes you need to take the shortcut, no matter what. I need all the performance I can get, in what I'm paid to do. This is not true for most people.
David Thornley
Should I buy more machines to all users of the software I write? There are millions of them, and all of them want their programs to run fast.
Nemanja Trifunovic
but ... but ...... I don't think that's controversial, is it?
hasen j
Hey how about not worrying about performance until it actually becomes an issue, and then when it does, profile, Profile PROFILE. It is at that point when it's legitimate to decide whether to take that shortcut through the minefield. It's a cavalier waste of money and time to decide before necessary
Breton
I firmly believe that we don't need airplanes, we can always use cars, right...? And if we need to cross the open sea, we could just use a boat, right...?
Thomas Hansen
Hi. My name is Larry. It's nice to meet all of you. :) I thought I was alone in this world, then I find all of you who think just like me... as you'll see in MY answer to this question. I'm a HUGE fan of C/C++, and feel that if you can't do C/C++ right, then don't do it at all. C# is NOT required.
LarryF
Pipe-dream reasoning. Earth calling marcumka
Seventh Element
**Right tool, right job.** Go try and code that kernel or NIC driver in C# and get back to us. Yes, there are plenty of folks who stick with the language they know, but your unqualified answer is overly broad. (And that from a Java developer!)
Stu Thompson
As if C# doesn't have memory leaks...
Matthew Flaschen
If we had really well written frameworks to run managed code on, then I'd say you have a good point. Sadly, the .NET framework gets more bloat heaped onto it with every release, and the truth is that C++ remains about the only way for a developer to write at a reasonably high level and be assured of [the ability to attain] good performance.
Mark
Memory leaks are not possible in C++ if you use the right techniques: use RAII/smart pointers instead of raw pointers/handles. In the worst case, use Valgrind.
blwy10
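The RAII alternative mentioned just above, sketched with standard smart pointers (a hypothetical function, shown as a counterpoint to the manual-cleanup style): every exit path releases both resources without any cleanup code.

```cpp
#include <cstdio>
#include <memory>

// Sketch: a unique_ptr with a custom deleter closes the FILE*, and
// make_unique frees the buffer, on every path out of the function.
// No goto, no leak, even on early returns.
int read_first_byte(const char* path) {
    std::unique_ptr<FILE, int (*)(FILE*)> f(std::fopen(path, "rb"), &std::fclose);
    if (!f) return -1;                          // nothing to clean up by hand

    auto buf = std::make_unique<char[]>(4096);  // freed automatically
    if (std::fread(buf.get(), 1, 1, f.get()) != 1) return -1;
    return static_cast<unsigned char>(buf[0]);  // f and buf released here too
}
```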
+15  A: 

I really dislike when people tell me to use getters and setters instead of making the variable public, when you should be able to both get and set the class variable.

I totally agree with it if it's to change a variable in an object inside your object, so you don't get things like a.b.c.d.e = something; but I would rather use a.x = something; than a.setX(something); I think a.x = something; is actually both easier to read and prettier than set/get in the same example.

I don't see the reason by making:

void setX(T x) { this->x = x; }

T getX() { return x; }

which is more code, more time when you do it over and over again, and just makes the code harder to read.

martiert
Agreed. Getters and setters violate encapsulation just as much as exposing objects directly does. There is no real point to them (except maybe in an external interface).
Ferruccio
There's actually a good reason to use setters: You can do some checking on constraints before assigning the new value to your variable. Even if your current code doesn't require it, it will be much easier to add such checks when there's a setter.
Jorn
I was very glad there was a setter on a variable once when I had to make sure some processing was done when it changed.
David Thornley
Actually, I think Ruby has something that gets you both - it's called virtual attributes. It allows you to have checks on your assignments and still be able to access the data as if it were a public member.
Cristián Romo
Python lets you do that as well.
sli
Setters allow you to add contention in multithreading environments. Just lock when you set. Of course, it is not always the case that your code will end up being accessed by multiple threads, or is it?
David Rodríguez - dribeas
But this being 2009, who's still using an IDE that does not create the getters and setters on the press of a key...?
Arjan
It's not just that I have to write the code; the getters and setters obfuscate the code itself by, 95% of the time in my applications, taking up space and just being plain ugly.
martiert
I guess C# gives you an easy way to have both. Is this Java?
rball
I had / have this opinion in some cases, but, one VERY important fact for me is that you can't 'override' a public variable. If the class in question is final, sealed, whatever - cool... AND if you're basically saying extenders should never be able to do anything on set / get ... ever ...
Gabriel
In many languages you can change a public field to a property without requiring any changes to code that consumes it. You would, however, force a recompile (in non-interpreted languages at least), which adds some constraints if you're shipping opaque libraries to external customers.
Richard Berg
And you set a breakpoint on a public field how, exactly? Setters are brilliant for exactly this reason - you can easily see what code is influencing a value.
Mark
You *must* use getters and setters when you code to an interface!
Thorbjørn Ravn Andersen
1. Use an editor that shortens the process. 2. Using setters and getters is much safer than directly accessing the variable: what if you write a class with a variable inside, counter, and incorporate it into code (maybe in 100 classes), and now suddenly decide that counter cannot be negative? Using a setter can help solve problems like these. 3. Sometimes exposing variables can be dangerous; e.g. exposing the TOS in a stack class.
Salvin Francis
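Salvin's counter example, written out (a hypothetical class): the setter is the one place where a "never negative" rule can be enforced, which a public field cannot do.

```cpp
#include <stdexcept>

// Sketch of a setter enforcing an invariant a public field could not:
// once "counter cannot be negative" becomes a rule, only this one
// method changes, not the 100 call sites.
class Counter {
public:
    int get() const { return value; }
    void set(int v) {
        if (v < 0)                      // the invariant lives here
            throw std::invalid_argument("counter cannot be negative");
        value = v;
    }
private:
    int value = 0;
};
```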
@Richard Berg In VB6 you could change a public field to a property and vice versa without requiring any changes to code that consumes it, not even a recompile. It's one of the few areas where VB6 was IMHO better than .Net
MarkJ
@Thorbjørn -- not necessarily. Just because the designers of C#/Java decided to disallow fields in interfaces doesn't make it an inherently bad idea. Direct access is the dominant idiom in languages as diverse as C and Ruby.
Richard Berg
@Mark -- set a data breakpoint. Your CPU has hardware interrupts for this exact purpose. Getting it to work in a managed language is a little challenging, but not any harder than the problems inherent to soft-mode debugging generally.
Richard Berg
@Richard Berg: I don't get you - direct access *is* a dominant idiom for C, but definitely not for Ruby - actually, without reflection, there is no way in Ruby to do direct access. What Ruby does is give you an extremely easy way (`attr_accessor :x`) to generate getters/setters for an attribute which are syntactically transparent; i.e. you'd still use `p.x` and `p.x = 3` instead of `p.getX()` and `p.setX(3)`, but they're still methods. A "direct" instance variable would be `@x`, and you can't use dot notation with it (i.e. `p.@x` is ungrammatical).
Amadan
+584  A: 

I fail to understand why people think that Java is absolutely the best "first" programming language to be taught in universities.

For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax.

For another, I believe that people who have not had experience in debugging memory leaks in C / C++ cannot fully appreciate what Java brings to the table.

Also the natural progression should be from "how can I do this" to "how can I find the library which does that" and not the other way round.

Learning
my first language in school was called "visual logic" which did just that
I taught CS, and teaching/learning is a lot easier with what I call a "nanny language" - i.e. a language that assumes you're a klutz. Beyond that, I agree with you.
Mike Dunlavey
I feel the same. We were taught Java in Uni but it was taught in a very functional way. I think inheritance was one of the last things we were taught in the "Learn how to Program 101" class.
smack0007
Java was invented so that any minimum-salary halfwit could "do programming"; as a result, many do.
Brent.Longborough
The first year curriculum at my university has recently changed from Java to Scheme. The faculty finds that the students learning Scheme are better equipped in later years (and they supposedly pick up java quickly).
Albert
Interesting. I thought java was selected for teaching mainly in environments with few CS faculty - so those making the decisions merely chose what they knew was popular. I haven't encountered many people who actually felt strongly that Java was a good choice.
Joshua Swink
Counter proposal - C++ is the WORST first language to teach, IMO.
Huntrods
Mike Dunlavey
@Huntrods: Personally, I think as soon as someone understands the basic concept of programming, it should be on to C++ with them. Yeah, it's hard, but it makes ya tougher ;) I did most of my learning on C++ and nothing after was challenging.
PhoenixRedeemer
To elaborate. I think C is one of the best "first" programming languages - EVER. I also think C++ is the b*****d child of a very large committee, and is among the WORST languages ever. It's a horrible "version" of C, and a worse "OO" language, IMO.
Huntrods
What the heck - might as well put that last comment in a response...
Huntrods
lots of C and .NET fanboys here, you know it
01
My alma mater works closely with companies like Intel, so it teaches C first, then C++, etc. We even learn assembly on a PDP11. No sandboxing for us...
Uri
I wish more schools taught Scheme (or similar) as a first language. I've learned more from watching the first two of the Abelson and Sussman lectures from MIT than any of the "Intro to Programming" courses I've attended.
romandas
I was taught Java at uni, but I never took it seriously, being a WinAPI coder back then. The C++ course was pathetic though. Ugh, uni memories :(
Dmitri Nesteruk
I think C should be the first language taught, because it makes you need to understand more of "what's under the hood"... Once you can code C well, then have the second language be something very OO. Then something very functional. After that everything is easy.
Alex Baranosky
C I can understand as a first language, but despite its similarities I don't think C++ makes a good second language. I'm not sure there's an awful lot to be learned, in general, from learning C++, other than C++ itself. It takes a lot of effort to learn and provides little benefit.
Calum
I agree that every student should learn C, but in school Pascal is probably better, as it has a much better (clearer?) structure.
AnSGri
let's not forget that memory-leaks *can* and *do* happen in Java (just ask me : I have several examples). But, I strongly agree that using third party libraries without considering custom implementations can lead to disastrous results. I've seen it happen on numerous occasions.
Ryan Delucchi
Java is so portable and has such a great graphics library that college kids can write games with it.
Tom
Stop bullshitting people! If they don't wanna learn programming then that's up to them. Throw them to the wolves and see how they fare. The first programming language I was taught at my college was Haskell and it made me a better programmer. It was difficult at first but you learned something new.
John Leidegren
I teach at University-level and I think an object-oriented language is a good first language, and Java was one of my favorites. But nowadays I actually prefer Python, because it is a real script-language with fantastic syntax and multi-paradigm support, and Java has become harder to handle for beginners.
P-A
I think that one requirement for a first language is that it be hard to write obfuscated code. All too many programmers who have C or C++ as their first language write illegible code. That being said, I also believe that every programmer should learn C, just not as their first language.
DLJessup
I'd honestly say, I like JavaScript as a first language. You can start off learning functional constructs, without worrying about inheritance models. Splitting a first year with HTML + JavaScript, and the second half going through a low-level language, like C. Higher languages can be done later.
Tracker1
when I was a lad, back in the 80s, it was Pascal as a first language, then C, Modula II and Ada. OO was being built by Bjarne Stroustrup
Quog
I think Java is the best first language to learn simply because it has the best introductory book associated with it. Head First Java is so much better than other books that teach object-oriented programming to beginners that it pulls Java up above other languages.
Bill the Lizard
Everyone needs to know the mother language!
Daz
Everyone should learn [pet language] first, because of [pet feature(s)]. Personally I don't think what first language you choose is very important; it's far more important that it's not the only one you ever learn. Having a broader outlook leads to better developers.
Richard Nichols
I teach at university level, and to the best of my knowledge, we've never given an object-oriented language as a first language in our courses. We currently give a subset of C that excludes pointers and memory management, to exclude language-specific details from the education. It is reasonable in the sense that it doesn't attempt to teach OO design at the same time as basic imperative programming, but C is relatively low-level and is in some sense encumbered by its closeness to the hardware, so I don't really think it is an ideal first language. We're switching to Python now, though.
Lucas Lindström
I learned in this order: C, C++, Java, .NET. I like C++ before GC languages so you know enough to be grateful for the collector. C is a great first language. You're just learning about control flow and loops... it's not like you're talking to the hardware at that time.
dotjoe
@Lucas Lindström: I do think that C should be used as a first language, but do not castrate students by not teaching pointers. I've seen that many students who were not able to grasp pointers in the first month were never able to understand them.
voyager
When I started my first programming job 20 years ago, I was the only one of my group who had never coded with punchcards. Everybody thought that C coders were coddled with CRTs and magnetic media. Just because something is old does not mean that it is the best choice for a first language. I think it's reasonable to pick an environment with fewer barriers to entry.
David Chappelle
I agree... this is a great answer!
Kwang Mark Eleven
I totally agree with Albert. My former university's first language is Scheme and it's great. Before university my first language was C, and that had a detrimental effect on many of the newcomers.
Luis Filipe
Bah. I learned QBasic in elementary school, Visual Basic 6 in high school, and C++ in college, though we had to rough it for a year before we learned anything about the STL -- implemented our own linked list classes, etc. People will hate on me, but QBasic is great for noobs once you teach them that goto = bad, functions = good. Then it's on to C++, teach it as C with some nice added features.
rlbond
First teach MIPS assembly, then Scheme, then Java or whatever...
James M.
The control structures are the same in C, C++, C#, and Java, so why spend time learning printf and std::cout instead of Console.WriteLine and Label.Text? It's more motivating when you know you're learning something you can use, instead of something obsolete. C and assembly can be learned afterwards, just to learn how the computer works, not their APIs.
csuporj
@voyager - I agree. I taught myself C after my first language (Perl from a book and the Internets). I completely didn't grasp pointers for a while, and after a month of headache it suddenly all made sense, and it was totally worth it.
Chris Lutz
@csuporj: C and C++ may be old languages, but they are still widely used. I think that disqualifies them from being obsolete.
Cristián Romo
I think that for a first language needs to be object oriented and needs to truly show the pupil what the difference is between reference and value, i.e. pointers. C++ should be the first language for beginners.
kzh
The best first languages are those that don't bog down the beginning programmer with arcane syntax (missing semicolons, misplaced braces, missing parentheses). BASIC is a more forgiving language to edit than C++, for example. Most beginners are probably still learning to type.
Loadmaster
C++ is a great first language! Because it is a hybrid OOP language, you can learn more from it than what you could from Java. You can do some procedural, object oriented and generic programming with C++. Memory management is a plus too! Once you understand the mechanics of memory management you will understand why it is so costly to place some new in your Java code. The STL is pretty much straightforward too not to mention fast.
Partial
1. C, 2. C++, 3. Java
Salvin Francis
I think Lazy K should be the first language students learn. Programming is hard; in engineering courses we weed out undesirables maliciously, and while there may be shortages of engineers and programmers both, I've never met an engineer who couldn't "engineer", whereas I've met dozens of professional programmers who can't program.
marr75
I didn't understand how much value Java and other garbage-collected languages add until I had to work with manually managed applications. I agree on that. But both C/C++ and Java are important to learn.
kudor gyozo
I learnt VB6 first by reading a book, thinking that it was software like MS Word. I also didn't know what programming was at that time. Then I slowly got to know other things. But I was surely able to pick things up quickly in the initial stage, which I'm sure I couldn't have done that fast in any other language, at least without anyone else's help.
Ismail
I vote for C# as the first language to learn.
Dmitri Nesteruk
Somebody asked me (about 12 years ago) which would be the best 'first' language. I said Java. My coworker said Assembler. He argued that people who are not smart enough for assembler would get frustrated easily and do something else instead, which would mean less trouble for the remaining programmers...
Thomas Mueller
If your first exposure to programming languages isn't until CS 101 then you're doomed. The best first programming language is one you discover and learn for yourself out of passion. For me that was C.
burkestar
+46  A: 

Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.
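A concrete sketch of the "broken assumptions" problem described above - in Java rather than C#, since Java's virtual-by-default methods are exactly the situation being argued against. The class names here are invented for the example; the behaviour relies on the (documented but easy to miss) fact that the inherited addAll iterates and calls add:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

// A subclass that tries to count insertion attempts. HashSet inherits
// addAll from AbstractCollection, which calls add for each element - an
// implementation detail the subclass author silently depends on.
class InstrumentedHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();    // counts once here...
        return super.addAll(c);  // ...and again via the overridden add
    }

    public int getAddCount() {
        return addCount;
    }
}

public class SealedDemo {
    public static void main(String[] args) {
        InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
        s.addAll(List.of("a", "b", "c"));
        // Prints 6, not 3: the subclass broke because it assumed something
        // the base class never promised. Sealing the class (or the methods)
        // would have prevented the fragile override in the first place.
        System.out.println(s.getAddCount());
    }
}
```

If the base class had been sealed, the only option would have been composition (wrapping a HashSet), which does not suffer from this problem.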

Jon Skeet
I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction. I'd call that returning to their C++ roots.
duffymo
C# isn't really rooted in C++ though - it's rooted in Java, pretty strongly. IMO, of course :)
Jon Skeet
That's not controversial - that's common sense :)
Brian Rasmussen
I realize that the link between C# and Java is certainly stronger than C++, but if we were drawing an inheritance diagram they'd both claim C++ as parent (arguably grandparent for C++).
duffymo
+1 from me. I very rarely have to remove a sealed modifier (and I make everything sealed by default, unless it is immediately clear that it cannot be sealed).
Andreas Huber
My understanding is that you are saying we should be extra careful when designing object hierarchies, but I don't understand how sealing classes by default would help to achieve this.
Leonardo Herrera
"I believe the default in C++ is to make all methods non-virtual, so C# was hardly taking a step in the right direction" - how is that logical? I miss the connection. Making methods non-virtual by default in C++ is a Good Thing (IMHO). +1
Johannes Schaub - litb
I think the counter-argument could be generalized to: a class that derives from a base class without overriding anything further up in the hierarchy can be done relatively safely, etc.
Chris Smith
i think this is an anti-pattern. Classes without inheritance are just modules. Please don't pretend to know what all future programmers will need to do with your code.
Steven A. Lowe
Inheritance and immutability don't go well together. If I want to know for sure that an object is immutable, I must know that it is not derived from, since a derived type can break that contract.
Jay Bazuzi
Given your reasoning, it's difficult to disagree. However - if I wished to use your class for a purpose which you didn't intend, but through some clever overriding/application of your base methods/properties it will suit my purpose, isn't that *my* prerogative rather than yours?
BenAlabaster
+1 from me too. Its about avoiding implicit assumptions - which always come back to bite you. An explicit statement is always more accurate.
devstuff
@balabaster: If you do that and then I want to make a change, it's very likely to break your code. As a code supplier, I don't want to put customers in the position of having fragile code. (Not that I'm actually a code supplier etc. This is in theory.)
Jon Skeet
I agree that inheritance should be guided, but sealing all your classes by default doesn't guide you; it's a roadblock, removing inheritance entirely.
Jeremy
Even so, I should understand the risks in deriving from a non-frozen class. Any changes you make in an unsealed class carry the same penalty, so all you're doing by making everything default-sealed is making it harder to use your code in my own way.
Jeff Hubbard
Agreed in principle, although I hated the sealed-by-default behaviour of methods when I was using early C# (at Microsoft, actually) because sometimes I would want to intercept calls to some library class's method, but couldn't just subclass it because they didn't make the methods virtual.
Joe
If an inheriting class changes the behavior of a method, it is wrong. Period. It does not fulfill the substitutability principle. There is no need to make a class sealed; just shoot the offender.
David Rodríguez - dribeas
One problem with having everything sealed is that it kills proper unit testing. Because methods in the .NET framework are sealed, it's almost impossible to test classes that use .NET framework classes like DirectoryEntry (which uses external resources), without writing a wrapper first
Erlend
I agree, and I would expand the scope to say that all programming language constructs should default to the "safest" or "no additional work required" state (not the opposite). Also, there should always be an optional keyword for the default whenever there is a keyword to specify a non-default.
Rob Williams
You cannot mock sealed classes, unless they implement a certain interface which is used by all users of that class instead of the sealed class. (Bye bye folks, I will descend into hell soon, as I dared to downvote Jon Skeet...)
EricSchaefer
I vastly prefer mocking of interfaces instead of classes anyway, so it's never been an issue for me.
Jon Skeet
AOL. Interface based programming is underrated anyways...
EricSchaefer
Why not get rid of defaults altogether and force the developer to make a decision about whether it's sealed or not? The same should go for public vs private.
JoshBerke
@Josh: Yes, that's definitely an interesting idea. There are some options where I don't want to have to be explicit - e.g. "nonvolatile" would be silly. How about "writable" as the opposite of "readonly" for static and instance variables though? Hmm...
Jon Skeet
Strongest argument I've seen for classes NOT to be sealed by default is that it would adversely impact the ecology of software libraries (commercial and internal). Too few people take the time to consider how their classes can be inherited - it's hard to get this right. Most will stick with the language default. Software changes relatively slowly (even when you have the code) and there will be a lag in getting inheritability changed. Finally, will people really spend more time designing for inheritance? Or just blindly add "overrideable" when they find a case where they decide they need it?
LBushkin
@LBushkin: The fact that people don't take time to consider things properly (and that it's hard to get it right) is exactly why the default ought to be the *safe* option. Give people the shotgun *unloaded*, and make them load it themselves if they want to.
Jon Skeet
A: 

My controversial view is that the "While" construct should be removed from all programming languages.

You can easily replicate While using "Repeat" and a boolean flag, and I just don't believe that it's useful to have the two structures. In fact, I think that having both "Repeat...Until" and "While..EndWhile" in a language confuses new programmers.

Update - Extra Notes

One common mistake new programmers make with While is to assume that the code will break as soon as the tested condition flags false. So if the While test flags false halfway through the body, they assume a break out of the While loop. This mistake isn't made as much with Repeat.

I'm actually not that bothered which of the two loops types is kept, as long as there's only one loop type. Another reason I have for choosing Repeat over While is that "While" functionality makes more sense written using "repeat" than the other way around.
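A minimal Java sketch of the equivalence claimed above, with do/while standing in for Repeat...Until: a bottom-tested loop guarded by an if (or, equivalently, a boolean flag) reproduces While exactly. The method names are invented for the example:

```java
public class LoopDemo {
    // Ordinary top-tested while loop: sum of 0..4.
    static int sumWithWhile() {
        int sum = 0, i = 0;
        while (i < 5) {
            sum += i;
            i++;
        }
        return sum;
    }

    // The same loop using only a bottom-tested loop (do/while playing the
    // role of Repeat...Until). The initial if guards the case where the
    // body should run zero times - the one behaviour while gives for free.
    static int sumWithRepeat() {
        int sum = 0, i = 0;
        if (i < 5) {
            do {
                sum += i;
                i++;
            } while (i < 5);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumWithWhile());  // 10
        System.out.println(sumWithRepeat()); // 10
    }
}
```

The transformation also works in the other direction (Repeat is a While whose body runs once before the first test), which is why keeping only one of the two is possible at all.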

Second Update: I'm guessing that the fact I'm the only person currently running with a negative score here means this actually is a controversial opinion. (Unlike the rest of you. Ha!)

seanyboy
What if you're unaware of when a condition is false? And where has Repeat come from? While works on the English basis of "while this condition is true, do this"
Kezzer
You could replace all constructs with goto.
Gamecat
Not only do I like WHILE but I would also borrow Nemerle's UNLESS and put it into C#.
Dmitri Nesteruk
a language designed for mediocre or inexperienced programmers gets only mediocre and inexperienced users.
Javier
I haven't seen Repeat...Until since BBC BASIC! VB now has Do...Loop; Repeat...Until and While...Wend should both be removed. It bugs me though when I see Do While Not ... instead of Do Until ...
pipTheGeek
The first question I usually ask when I see a While loop is "Will it break during the loop or after the check?" The reason for this is I've used a language or two before that immediately broke out of the loop when the condition returned false.
Dalin Seivewright
This is nonsense. Neither repeat nor while will break in the middle so your argument is absurd. Basically the developers need to be instructed in the use of break/exit/goto to exit a loop early. As for testing condition at the beginning/end both have their uses.
Cervo
Also do { statements } while (!condition) is the same as do { statements } until (condition) so I don't know what the complaint is.
Cervo
It's not controversial, just wrong :-P
Dour High Arch
Actually, I'm not sure if it's the same or not, but I never use do ... while blocks, so I think perhaps I agree with you. :)
skiphoppy
"One common ... flags false" - How common is this? In what language? Perhaps the answer for those who have this idea when it's false is "RTFM!". This is just a bad solution looking for a problem it can't find.
duncan
A while done with a repeat is an "if <not condition> then repeat until condition", not a while + bool
Marco van de Voort
+281  A: 

Design patterns are hurting good design more than they're helping it.

IMO software design, especially good software design, is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember - and they're far too abstract for people to really remember more than a handful. So they're not helping much.

And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere - usually, in the resulting code you can't find the actual design between all the (completely meaningless) Singletons and Abstract Factories.
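The Java DOM library mentioned in the comments is a concrete case of the factory layering this answer complains about: with the standard javax.xml.parsers API, even parsing a trivial document goes through a factory that produces a builder that finally does the work. A minimal sketch:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomDemo {
    public static void main(String[] args) throws Exception {
        // Factory -> builder -> document: two layers of Abstract Factory
        // ceremony just to parse a short string of XML.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(
                "<root><item/></root>".getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement().getTagName()); // root
    }
}
```

There are real benefits to the indirection (swappable parser implementations), but for the common case it is exactly the pattern-driven overhead being criticised.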

Michael Borgwardt
Nice one. The Java XML DOM library - factories upon factories - is a good example of massive overengineering at the cost of simplicity of use. (There are benefits, of course, but...)
Jon Skeet
Isn't the Java XML DOM library just a transliteration of the JavaScript library (I don't do JavaScript)?
Tom Hawtin - tackline
Even streams in Java are a bit more complicated than they really have to be due to the many decorator patterns.
Brian Rasmussen
Absolutely agreed.
Max
I actually like the Java IO streams, the decorator patterns does make sense there - my biggest problem with it is with a class outside the strict pattern application: FileReader, which is a convenience class lacking the basic feature of allowing you to specify the encoding.
Michael Borgwardt
I kinda agree - knowing anti-patterns is more helpful in its way than knowing DPs
annakata
I kinda agree too, except that I would say it's not design patterns themselves, but their misunderstanding and overuse. A design pattern, to me, is nothing more than an attempt to create a ubiquitous language, or common set of definitions, for things we all use anyway, to streamline communication.
Charles Bretana
I like learning about design patterns in the sense of "this is how someone solved this problem." Sometimes their solutions will inform to a small extent my design decisions. I don't think they should be used for a template from which to write code from.
Doug T.
Charles, I think the "language" aspect is simply not working, because the patterns are too abstract to allow everyone learn more than a few, which makes the language pretty useless. I do like Doug's idea of viewing them more like case studies - but then the abstraction is actually harmful.
Michael Borgwardt
Amen. One more damage done by Smalltalkers, together with "extreme programming"
Nemanja Trifunovic
You should use them whenever you can. Yes, it hurts design, but when you are at your next job interview you will remember those stupid names better and you can say you used them. win-lose-win. I, for example, can't remember what the Bridge Pattern is, so I need to use it more. I bet it's useless, but ...
01
"I actually like the Java IO streams" - that means you don't use them or you are a masochist. I used to like them too, but then I started using them. It would be cool if they did some factories so you don't have to copy'n'paste.
01
Design Patterns fail not because they are meaningless or far too varied. Design patterns fail because people make the arrogant mistake of equating idioms in their insular little language fiefdoms with grand ontologies that describe and explain the Universe.
dreftymac
who the hell likes Java IO Streams? I never use them; the only thing I know is that every time I try to read a simple fricking text file, I have to browse through the stupid API to figure out which classes I need and which constructors I should use just to read the content into a string.
hasen j
I do use them, and I'm not a masochist. It's neither difficult nor inelegant if you understand the concept. Having the same API for network and file IO is great, and "simple text files" are anything but - too bad 90% of all programmers don't understand that not everything is ASCII/Latin1.
Michael Borgwardt
I disagree. My personal opinion is that no matter what you are developing, you should either look for existing patterns you can use, or develop some of your own, as it can keep consistency across your apps or even across one app, particularly if it's a large application.
Jeremy
Hm, I think I'll have to disagree with this one, but probably for a controversial reason. ;) I think design patterns help design *on average*, because they guide the people who'd otherwise be likely to end up with lousy design towards something less lousy.
jalf
but really good programmers should know not to rely too much on design patterns for the reasons you state. So I see design patterns more as a help to average programmers than as something that affects (positively or otherwise) a *good* programmer.
jalf
A design pattern is simply a commonly accepted solution to a given problem. Your prejudice is against their perception and use, not the patterns themselves. Would you suggest civil engineers throw out trusses?
Bryan Watts
Agreed. Design patterns are, IMHO, duct tape to fix language deficiencies.
Dan
Disagree (although I accept that there are drawbacks and they are commonly misused... Antipatterns might be even more useful). So +1...
AviD
Read "A Timeless Way of Building" by Christopher Alexander ;) Patterns are a good thing, but people use them to justify many bad things. The GoF book set the industry back 10 years imo.
Gwaredd
What really grates with me is when the fashionable dev uses the pattern then names the class after the pattern, with no clue as to its actual use. E.g. ControlVisitor - right it visits controls, and then what?
Gaz
+100. I've had many co-workers, when asked "how are you going to do X", reply with "Oh, I'm going to use the Visitor pattern" or whatever, as if that was an actual answer to my question.
MusiGenesis
Disagree; I think it's the misuse and misapplication of design patterns for the sake of using them, not the design patterns themselves. If you do something slightly outside of the original intent, name it something new and don't abuse and confuse the existing patterns.
Ryan Riley
Disagree: Patterns are all about communication of intent. The exact implementation detail isn't why you use a pattern; telling maintenance programmers the intent of your thinking in a concise manner is.
Scott Stanchfield
I disagree. Bad designs are bad not because of using patterns, but because of bad design...
Kwang Mark Eleven
I've read a couple of books dealing with design patterns and they all shout the same thing at the beginning of the book: these are guidelines, they are not dogmatic. Use them to your advantage, and when they apply; modify them to suit your needs if you have to. To me, this answer is akin to saying "The entirety of X language sucks because I've seen bad code written in X." It's not the patterns, it's the people who wield the patterns like a giant hammer and see every problem as a nail.
Hooray Im Helping
Design patterns are a "natural fit" sometimes, when the problem itself suggests a particular pattern, or when you know you would have used a similar technique to solve the problem even if you'd never heard of design patterns before. When they are not a natural fit, *don't use them*.
Todd Owen
If design patterns are hurting your design then you're not using them right.
Swanny
Sort of agree - sort of disagree. The problem I have with patterns is not the patterns themselves - it is that they are derived from a few fundamental principles that *very few people* seem to actually understand. Patterns aren't magic. Once you understand the underlying concepts of the OO style that generated the patterns, most of them are as common sense as a for loop. Unfortunately, the people that push them the most tend to not understand them.
kyoryu
Down with MVC! Long live Front-Ahead Design!
DR
+3  A: 

Debuggers should be forbidden. This would force people to write code that is testable through unit tests, and in the end would lead to much better code quality.

Remove Copy & Paste from ALL programming IDEs. Copy & pasted code is very bad, and this option should be completely removed. Then the programmer will hopefully be too lazy to retype all the code, so he makes a function and reuses the code.

Whenever you use a Singleton, slap yourself. Singletons are almost never necessary, and are most of the time just a fancy name for a global variable.
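A minimal (entirely hypothetical) Java sketch of the "fancy name for a global variable" point: strip away the ceremony and a singleton is one mutable instance, reachable and modifiable from anywhere in the program. The class and field names are invented for illustration:

```java
// Structurally, this is a global variable with some ceremony around it:
// private constructor, static accessor, one shared mutable instance.
final class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();
    private String environment = "dev";   // global mutable state

    private AppConfig() {}                // no outside construction

    static AppConfig getInstance() {
        return INSTANCE;
    }

    String getEnvironment() { return environment; }
    void setEnvironment(String env) { environment = env; }
}

public class SingletonDemo {
    public static void main(String[] args) {
        // Any code, anywhere, can mutate it and every other caller sees
        // the change - the same hazard a plain global variable has.
        AppConfig.getInstance().setEnvironment("prod");
        System.out.println(AppConfig.getInstance().getEnvironment());
    }
}
```

The usual alternative is to construct one instance near main() and pass it to whatever needs it, which keeps the dependency explicit and testable.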

martinus
I have noticed a definite inverse relationship between design/coding skill and skill in using a debugger (which is not the same as having debugging skills).
Ferruccio
Copy/paste is an instant Red Flag in my opinion. If code is duplicated, it should either a) be factored using OO methods; or b) model-driven/generated/dsl-defined.
Dmitri Nesteruk
I agree - all the code you see on stackoverflow should be untested code, because if it is tested it was copied from an IDE, and copying from an IDE should be impossible :) So please post only untested code on SO!
tuinstoel
@tuinstoel: So maybe it should be "copy but not paste"? :)
Jon Skeet
There is no way testing can replace the usefulness of debuggers and debugging.
Tim
Singletons look really mental when bound to WPF too (all that x:Static stuff).
Dmitri Nesteruk
Ok, so you remove all debuggers, and all alternate systems for debugging (if the easy way is bad, then the hard ways must be worse, no?). Then in testing you discover a bug. Now what do you do? Cancel the project?
Charles Bretana
@charles, when I discover a bug I try to reproduce the behavior in a unit test. Then I fix it. If you need a debugger, it is just a sign that you need better tests or to refactor the code so that it is easier to understand.
martinus
Sometimes I have to maintain and extend programs that make extensive use of complex pointer arithmetic. You can pry my debugger from my cold, dead hands. And if any developer mentions "global" in the same room I am, he can consider himself slapped.
Leonardo Herrera
@Jon Skeet, if only copy is possible I can't paste from SO:)
tuinstoel
Right... get rid of debuggers - so that you can't see the results of your code until the end, rather than stepping your way through to see exactly WHERE the problem crops up. I'll take debuggers over dozens of "temporary, interim display statements" *ANY* day.
David
Debuggers can be excellent for understanding how current code is working (I generally don't need them much for my own independent code), and cut/paste is part of refactoring.
David Thornley
Without any way to debug it, how can you tell what to change to fix it? Are you prescient? If so, why did you put the bug in there in the first place? "Debugging" and "debuggers" are, by definition, the tools we use to figure out what is causing the bug. Without them, you can't fix any bug.
Charles Bretana
Except perhaps by random shotgun approach and a LOT of luck (Just change something, test again, and repeat until bug goes away...)
Charles Bretana
And outputting variable values or "I am Here" statements to a text file IS a debugger too!
Charles Bretana
"Debuggers should be forbidden." -- and how do you find bugs that are not yours but come from the library/platform?
niXar
Wow. This is like saying "if a hammer can't do the job, it isn't worth doing." Seriously, how would you track a memory overwrite originating outside of your object with unit tests?
Mark Brittingham
Probably the term "Debugger" is just wrong. I have yet to see a tool, that removes bugs from (de-bugs) my program.
Simon Lehmann
@simon: `rm` or `del` will remove all bugs. Granted, it also removes the rest of the program, but such is the price for a bugless program :)
Will Mc
IMO, you can only discover bugs with unit testing, not locate them. After you find a bug with unit testing, you use debugging/debuggers to find where the bug is actually located.
Ikke
Steve Maguire uses the entirety of chapter 4 of "Writing Solid Code" to promote the idea of stepping through new or changed code in a debugger. It's good advice. Debugger *abuse* is a different story. I've seen that too, but wouldn't propose doing away with the tool because some will abuse it.
JeffK
+1: definitely a controversial opinion based on the comments on this post :)
Juliet
Hmmm... When I started writing BASIC on a dumb terminal back in 1979, we didn't have a debugger, nor did I have copy/paste, but that doesn't mean I wrote better code back then.
Kluge
First two seem almost hypocritical...
Pablo Fernandez
Um, and I did use a singleton in code I was working on last night. And I'm not slapping myself. And I might use another singleton sometime in the future too. There are reasons for global state, although darn few are good ones.
David Thornley
Eek - no copy and paste? What happens when I decide I need to move a block of code from one class to another? You're gonna make me retype it all out by hand? No debugger...yeah, I could probably work around that, it would be a pain though. I could probably live without Singletons too.
BenAlabaster
@balabaster that's cut ;-)
martinus
Loren Pechtel
Those are the strangest, most archaic things I've read in a while. The chances of writing bugs are just as great when manually typing as when copy/pasting, there's nothing forcing someone to write good code if they have no debugger, and although a singleton may not be necessary, does that make it bad?
Jeremy
Sounds like someone who doesn't understand debugging. How can my copying and pasting my own code be bad? As far as copying and pasting others, I think you need to test it, understand it, and reduce it to what is necessary for your application before using it in your project.
bruceatk
martinus
Regarding Singletons, that may be wrong in languages like C# or Java, but in less strictly OOP languages like JavaScript or Scala, using singletons is okay. In JS every object is a singleton! (and classes use prototypes, at least in JS 1.x) And Scala has a singleton type called object.
Alcides
There's nothing wrong with Singletons in themselves. I suspect you're upset with some particular abuse of them, not the concept.
chaos
@martinus, maybe it is for you, but I copy and paste my own code all the time. I've never had a problem having to go back and fix stuff. I've been doing it for almost 30 years. I see no practical reason to change now.
bruceatk
I want to up-vote your Singleton comment and down-vote your Debugger comment... You need the debugger to figure out why that core dump exists in the first place (only trivial code is 100% testable).
Tom
@martinus, it's not that copy/paste is bad, but in your example the programmer should use a common function rather than duplicating a chunk of code. That way if the function has a bug, you fix it in one place. But there are copy/paste scenarios where you wouldn't use a common function, like one-liners.
Jeremy
How can I fix a driver without a debugger... Write a unit test that reproduces... Wait.
Edouard A.
Fundamentalism isn't the way forward!
Seventh Element
I agree with getting rid of copy-paste as long as you can still cut-paste. Cutting and pasting code is essential to refactoring and keeping the code in a clean state.
Sergio Acosta
Are you nuts? <wink> I'll vote you up just because I disagree so strongly (and that makes it controversial - to me anyway). I need those tools. It would be similar to punishing everyone by taxing junk food because some people can't control themselves.
Doug L.
I agree that you shouldn't need a debugger for app code you wrote. But you need one to make sense of corefiles, you need one for driver work, and you damn well need one to make sense of weird, uncommented and undocumented code some other bloke (who has long since left the company) perpetrated. There's not only *creators* out there, but *maintainers* as well, and the debugger is our best friend.
DevSolar
+251  A: 

Write small methods. It seems that programmers love to write loooong methods where they do multiple different things.

I think that a method should be created wherever you can name one.

Matt Secoske
Agree, but not too controversial?
Ed Guiness
perhaps not... hopefully not... unfortunately I usually see long methods, so practice vs preaching?
Matt Secoske
I had an office-mate who practiced this, and his code used to drive me nuts. Nothing ever got done where I expected it: it was its own form of "spaghetti code." Also, research has shown that longer methods do not produce more bugs. With that said, each method should do 1 task: longer isn't better.
Mark Brittingham
completely agree... I break methods on logical boundaries, usually where I can say "this block of code does THIS" and name the method accordingly. Sometimes I have one line in method, sometimes 20... just depends on what it /does/.
Matt Secoske
I think long methods are a sign of a cluttered mind and a lazy programmer. Generally speaking I think large methods actually are comprised of smaller algorithms which should each actually be contained in their own methods, to enhance organization of code and readability.
Jeremy
Ever had to debug a 500-line function? :-)
billybob
@billybob: how did you know? :-)
Matt Secoske
AKA: Your method should do only one thing, and one thing only.
thenonhacker
indeed... SRP to the rescue!
Matt Secoske
I'm new to programming and have leaned towards the single responsibility principle, but now I'm not so sure I agree; searching for each of the functions and scrolling back up and down the page is the worst part of debugging!
xoxo
Sounds like Uncle Bob! My $0.02: not every method can be small or be split into many small methods for complex tasks, but I agree, you should try.
asp316
I think this is one of the fundamental problems with most code I've maintained. Splitting up functions encourages code reuse tremendously.
brad
Partially agreed. I feel that if you write only small methods, you may end up with way too many methods that do -almost- the same thing but are different enough to be kept separate rather than merged into generic methods. There's a point where having one long function is better than having 100 small functions.
Dalin Seivewright
Not at all controversial, but I agree. Small methods == sanity.
Scott A. Lawrence
When you've got a sequential set of tasks in a function, break them up into paragraphs by wrapping them in some scopes { }. This at least maintains the order of the function.
Scott Langham
Too many little functions get tricky to navigate, and if refactored out of the one place they're called, they lose their context and become harder for a human to parse. A one-line comment at the start of the scope can say what its purpose is.
Scott Langham
@Scott Langham - everything loses its context at some point. ie: A zip(string) function inside an Address class has an entirely different meaning than a zip(string) function inside a Compress class. That does NOT mean you shouldn't break out a contextually grouped piece of code from another method.
Matt Secoske
I prefer methods to be less than 10 lines long. The smaller the better. Each method does only one thing, on one level of abstraction, and its name describes what it does.
Esko Luontola
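As a concrete illustration of "create a method wherever you can name one," here is a hypothetical order-total calculation broken into nameable steps; the rate and threshold values are invented for the sketch:

```python
TAX_RATE = 0.08          # assumed flat tax rate (illustrative only)
BULK_THRESHOLD = 100.0   # assumed discount threshold (illustrative only)

def subtotal(prices):
    return sum(prices)

def discount(amount):
    # 10% off large orders: one nameable decision, so one small method
    return amount * 0.10 if amount > BULK_THRESHOLD else 0.0

def tax(amount):
    return amount * TAX_RATE

def order_total(prices):
    # The would-be long method becomes a readable sequence of named steps.
    gross = subtotal(prices)
    net = gross - discount(gross)
    return round(net + tax(net), 2)
```

Each helper sits on one level of abstraction and its name says what it does, so `order_total` reads as a summary rather than a wall of arithmetic.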
Not sure about this one. I've seen long, well-documented methods that really do a lot in a clean way. I'd rather just follow line-by-line than jump all over the place trying to understand why the developer made 30 methods to do a task with a single path. However, a method should never repeat itself, that's where a loop or a private method should make an appearance.
User1
I think I could suggest a more controversial version of this rule: People who write long methods probably started out by writing short methods, and then added lines of code incrementally, until they had code that was long, and, that if people do it that way, (a) they have a distinct code smell left behind, and (b) the guy who picks up your code is going to hate you. Not that you care.
Warren P
Nothing controversial about that, it's a fact. Small methods have only advantages.
Exa
Using simple, internal functions to breakup complex algorithms is a good thing. However, classes should expose minimal methods to not overwhelm its consumers.
burkestar
+413  A: 

Getters and Setters are Highly Overused

I've seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public - maybe a bit different if you're using threads (though that's generally not the case) or if your accessors have business/presentation logic (something 'strange' at least).

I'm not in favor of public fields, but I'm against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding... ha!

UPDATE:

This answer has raised some controversy in its comments, so I'll try to clarify it a bit (I'll leave the original untouched since that is what many people upvoted).

First of all: anyone who uses public fields deserves jail time

Now, creating private fields and then using the IDE to automatically generate getters and setters for every one of them is nearly as bad as using public fields.

Many people think:

private fields + public accessors == encapsulation

I say (automatic or not) generation of getter/setter pair for your fields effectively goes against the so called encapsulation you are trying to achieve.

Lastly, let me quote Uncle Bob in this topic (taken from chapter 6 of "Clean Code"):

There is a reason that we keep our variables private. We don't want anyone else to depend on them. We want the freedom to change their type or implementation on a whim or an impulse. Why, then, do so many programmers automatically add getters and setters to their objects, exposing their private fields as if they were public?

Pablo Fernandez
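A small Python sketch of the answer's point, using a hypothetical Invoice class: the first version's mechanical accessor pair restricts nothing, so it couples callers to the field exactly as a public field would; the second exposes behavior instead, which is what the quoted "Clean Code" passage is after:

```python
class Invoice:
    """The pattern being criticized: a mechanical accessor pair."""
    def __init__(self):
        self._total = 0.0

    def get_total(self):
        return self._total

    def set_total(self, value):
        # no validation, no invariants: a public field in disguise
        self._total = value

class EncapsulatedInvoice:
    """Expose behavior; keep the representation private."""
    def __init__(self):
        self._total = 0.0

    def add_line(self, amount):
        # the object protects its own invariant
        if amount < 0:
            raise ValueError("line amounts must be non-negative")
        self._total += amount

    def total(self):
        return self._total
```

With the second class, `_total` could later become a list of line items without any caller changing, which is the freedom the quote is talking about.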
I wouldn't say it's encapsulation on its own - I'd say it's a first step on the road towards encapsulation.
Jon Skeet
You'll find many people (and even books) claiming that doing just that is encapsulation
Pablo Fernandez
Oh agreed. I'm just saying that it's not wrong to require properties instead of public fields - it's just wrong to leave it there :)
Jon Skeet
This could be restated as "mindless getter/setters." I've found that most of the time you only need a getter.
Leonardo Herrera
The advantage of a setter over a public variable is that you can put a hook into a setter. Other than that, mindless getter/setters are no better than public variables.
David Thornley
getters and setters define the interface of your class. It allows you to add logic to the get/set later on, if required. Therefore preferable to public fields.
Richard Ev
I'd vote this one up twice if I could. *IF* there is logic in a setter then it makes sense to use it. Otherwise, there is no point: a public property is equivalent to a public variable. Also, Leonardo's comment is good: many times you only need the getter.
Mark Brittingham
I've tried this, based on the same opinion. I think you're incorrect because among the 500 setters in your code you'll find 5 that need some kind of initialization code, and altering only those makes the code inconsistent, which is a different kind of complication. Agree that they're annoying, though.
Steve B.
In .NET properties are handled differently than fields in certain situations such as databinding so you are kind of forced down the public property road like it or not.
Daniel Auger
@Richard: Not every field in your class has to be part of the public interface. I've never had logic in my getters or setters; I don't see that as good practice at all, but that's just me.
Pablo Fernandez
@Steve I don't think they are needed at all. Also, I don't place logic in my getters/setters.
Pablo Fernandez
@Daniel: Agreed. Also, in Java you have to adhere to the JavaBean spec (something pretty similar) in order to let 3rd party code (like Hibernate) access your fields. In this case, it's a necessary evil, as you said.
Pablo Fernandez
I think the language should just automatically make fields private and create overrideable getters and setters, and optimize it such that it's not performance-worse than accessing the fields instead of calling the accessors.
skiphoppy
I like properties with getters/setters. A public field can't restrict how it is set, where a property can. You can also apply logic in your getter/setter. People say use either where it makes sense, but I say stay consistent and use one or the other, so I choose getters/setters everywhere. Flexibility.
Jeremy
I think getters/setters are completely anti-OO. Why should I be able to poke at an object's internals? (Sure, getters/setters provide some control, but you're still trying to access the internals directly or close to it.) The OO way (IMHO) would be to ask the object itself to perform the action.
Dan
@Dan: That's pretty close to the truth. Some getters/setters are needed but we usually do "getA", "getB", "getC" from a domain object, and then perform the calculation elsewhere (generally in the service layer), that is not OO at all.
Pablo Fernandez
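The pattern these two comments describe is often called "Tell, Don't Ask." A Python sketch with a hypothetical rectangle example: the "ask" style pulls fields out and computes in a service layer, while the "tell" style has the object do the work itself:

```python
class RectangleData:
    """'Ask' style: a bag of fields read by outside code."""
    def __init__(self, width, height):
        self.width = width
        self.height = height

def area_service(rect):
    # the calculation lives outside the object,
    # coupled directly to its internal representation
    return rect.width * rect.height

class Rectangle:
    """'Tell' style: the object performs the calculation itself."""
    def __init__(self, width, height):
        self._width = width
        self._height = height

    def area(self):
        return self._width * self._height
```

In the second form, getters for width and height may not be needed at all, since nothing outside the class depends on how the area is derived.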
This is why our Russian architects usually declare public variables instead of using Getters and Setters!
thenonhacker
I prefer public variables which can then be turned into properties with no changes to the caller over getters/setters. Even still, they should be avoided if at all possible. Real OO would encapsulate the need to get/set away.
Dan
@Dan: one reason I've heard is that turning fields into properties forces the client code to be recompiled.
Jimmy
@Dan: in some languages you not only need to recompile the client to use a getter/setter but you also need to change the syntax. And if getters/setters are "not OO", how is directly modifying the object better than asking the object to modify itself?
Mr. Shiny and New
At least if you're adding "mindless" getters and setters, you have a structure that encourages you to do logical things - whether you've thought of them yet or not. Sometimes I'm too lazy to turn a field into a property, and it's enough to discourage me from bothering with setter logic.
JasonFruit
Getters and Setters, at least in .NET, are more usable from a databinding perspective. As far as I'm aware most .NET databinding frameworks don't work with public fields.
Erik Forbes
I hope you don't write library or api code :-) It's easy for you to later turn public variables into methods if you own the code. If other people use your stuff and rely on its signature, not so much.
LKM
I love the solution that Python takes. All fields are public, but you can add the getter and setter later if you want to...
Bartosz Radaczyński
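For readers unfamiliar with it, the Python mechanism being described is the built-in `property`: an attribute can start out public, and a getter/setter can be retrofitted later without changing any caller's syntax. A sketch with a made-up Account class:

```python
class Account:
    def __init__(self):
        self.balance = 0.0    # starts life as a plain public attribute

# Later, validation becomes necessary; a property keeps the
# `acct.balance = x` syntax of every existing caller working unchanged.
class CheckedAccount:
    def __init__(self):
        self._balance = 0.0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value
```

This is why the "add accessors up front, just in case" argument carries less weight in Python: the upgrade path is free.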
While getters/setters can (and usually do) start out as using the same data type as the underlying data, the fact that they are there allows the developer to change the underlying data type at some later time without affecting the API in any way. In short, they hide the actual implementation. ...
RobH
... And isn't hiding the implementation what encapsulation is all about?
RobH
I agree, but would say it stronger: you absolutely should use public members. Alas, as other folks point out, the JavaBean spec makes this difficult in Java -- yet another reason not to use Java. Also, as someone else mentioned, Python has the best solution for this.
Using public members exposes the implementation, which is the total opposite of encapsulation, since it makes future changes without affecting dependent code difficult or impossible. I agree with others that the object should do as much of its own member data access as possible, but ...
RobH
... where other objects need to access an object's properties (getting/setting), you cannot allow them to do it directly (i.e., by exposing public data members) and still call it OOP.
RobH
Pretty late, I know. There are some languages where you do not have a choice: either you provide accessors or the stuff is not accessible. That's the baseline in Smalltalk...
Friedrich
This is plain and simply wrong. The idea behind encapsulation is that it provides the ability for the implementation of a class to evolve without affecting client code. That is precisely why you would want to hide a field behind a property.
Seventh Element
I'd give this two "up" votes if I could. It gives me screaming fits when I see people doing this - or even, as I saw recently, have their IDE do this *AUTOMATICALLY* for every data member...
DevSolar
I'm currently trying to move away from getters/setters altogether for two reasons: 1) immutable types can use public readonly fields and 2) most UI frameworks that require bindable properties provide a type for that (e.g. DependencyProperty in WPF).
Ryan Riley
Encapsulation is more about data *protection* than data *hiding*. Using get/set with private fields protects them. If you're doing simulation work, data hiding is a great idea, but in non-simulation programming, data protection is key.
Scott Stanchfield
I completely disagree. There have been countless times when something like getNumber() started as {return number_;} but later turned into an immense calculation. Public members destroy encapsulation and make changing the implementation impossible.
rlbond
This is not about never using getters, and certainly NOT ABOUT USING PUBLIC FIELDS. The statement is clear: many people think it is right to make a getter and a setter for every field in every class, and this does break encapsulation.
Pablo Fernandez
@Pablo: I favor getters/setters for every publicly available field -- even if there's no logic in the getter or setter and it's just a straight copy -- and I'll tell you why:
Randolpho
It's not about encapsulation, it's about preparing for the future need for encapsulation. It's always 100% easier to add logic to a getter/setter that's already there than it is promote a field to a property with a getter and setter. Sure, it's simple if you control all of the code.
Randolpho
But what if you don't? If your class is exposed externally and you promote a field to a property, you just broke the contract for that class, and everything that depends on it must be recompiled. But if you already had a property and you modify the logic of the setter, only your code must change.
Randolpho
That's why it's better to always use properties with getters/setters than to just use fields. There's little or no performance benefit to just using a field; simple getters and setters get inlined by the JIT anyway. So there's no harm in doing it now, and you get a huge potential benefit later on.
Randolpho
@Randolpho, read the whole answer. No one favours public fields. I might rewrite it soon since it's getting so many people confused. Your last comment is exactly the reason why we have getter/setter overuse.
Pablo Fernandez
I also like hash oriented programming!
nothingmuch
@Mr. Shiny and New: Its not. I just mean that if I absolutely MUST poke around in the internals, then at least give me a language with properties. I'm still against using properties though, as, just like getters/setters, I do not think that a properly designed OO system needs them.
Dan
One benefit of getters/setters is the added abstraction allows you to do more than just get/set a value, but there are better ways to tackle this so that it doesn't get in the way 90% of the time when you just want encapsulation.
Wahnfrieden
Also, python does just fine leaving absolutely everything public.
Wahnfrieden
+1. I *hate* getters and setters when they are only `getX() { return x;}; setX(_x) { x = _x }`. I can't see why public variables aren't the same.
LiraNuna
@Leonardo, interesting, I very rarely write just a getter, but quite often write just a setter
finnw
This is Allen Holub's classic rant: http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html
Grandpa
Richard E has it right - there are times you really want a class with all its members public, but the *ability* to override those members with a method call if needed. promiscuous getters and setters are great for that.
Martin DeMello
@Grandpa: <sarcasm>Thanks for bringing out that old link.</sarcasm> The conclusion that Allen Holub's rant always leads me to is that he thinks objects should know how to render themselves. Anyone want to write a webapp where the objects know how to render themselves for CRUD? I didn't think so.
byamabe
Getters and setters are the way to specify a field in an interface, and you should -- tadaah -- code to interfaces.
Thorbjørn Ravn Andersen
@Ravn The point of interfaces is to avoid exposing the implementation. So why would you make a getter and setter for each of your fields in an interface?
Pablo Fernandez
I strongly feel the opposite. Public fields are flat out wrong. Public properties are fine, as long as they represent the public contract of the class as intended. Later on, when the class evolves, public properties give you a significantly better opportunity to refactor the internals while keeping the public interface intact. It doesn't matter if a property just sets/gets a field without any other logic. That's not the point. The point is that later on that field may need to change, no longer exist, or any other possibility. If it's merely a field, you're screwed.
Matt Greer
@Matt: it's OK to disagree (this is supposed to be controversial), but please read the whole answer and not only the title. I wrote "anyone who uses public fields deserves jail time" in bold. You are missing the point.
Pablo Fernandez
I want a language that lets me directly declare a member variable as *publicly read-only* but *privately mutable*.
Loadmaster
@Loadmaster ruby has *attr_reader*, you should give it a try
Pablo Fernandez
Man, this is controversial! +1
Vikrant Chaudhary
@Loadmaster: C# 2.0 onward -- `public int Value { get; private set; }` like that?
Rei Miyasaka
@Rei: Did not know that. It would make for an interesting trivia (or interview) question. But it's still not quite what I'm looking for, since the getter is still a method.
Loadmaster
@Loadmaster Why's it bad that it's a method? The stack operations are optimized out before runtime, and it's entirely declarative, so for all intents and purposes, it's a field and not a property/getter-setter. Never thought of this as an interview question; I put this to practical use every day!
Rei Miyasaka
http://www.idinews.com/quasiClass.pdf
sbi
+178  A: 

I also think there's nothing wrong with having binaries in source control.. if there is a good reason for it. If I have an assembly I don't have the source for, and might not necessarily be in the same place on each devs machine, then I will usually stick it in a "binaries" directory and reference it in a project using a relative path.

Quite a lot of people seem to think I should be burned at the stake for even mentioning "source control" and "binary" in the same sentence. I even know of places that have strict rules saying you can't add them.

Steven Robbins
Really? I've never encountered that. I mean, where are you supposed to keep all the third party binaries then? I know, we should develop everything in-house! This way, we will never have to store third party binaries!
DrJokepu
Normally the anti-binary brigade don't have an answer for this, or they just say something along the lines of "just reference it in <blah> directory and make sure the devs have it there" :)
Steven Robbins
I think it's more common to have "no generated binaries" - i.e. build X should build its in-house dependencies, rather than relying on the results of a previous build being checked in. There are pros and cons here.
Jon Skeet
Sure, I agree that generated binaries are pretty much a nono, and I think that's where people get the seed of the idea from that unfortunately mutates into "no binaries in source control".
Steven Robbins
Funny, I've always felt against source-controlling generated binaries, but I usually get overruled. It hasn't killed anybody yet, that I know of.
Mike Dunlavey
I check in generated binaries that are tricky to compile or require a compiler that might not be on everybody's machine (at $1000 per seat, we are not buying copies for devs who don't need it).
Joshua
I agree with @Jon and @Joshua. Especially important for big teams that share dependencies. And, for 3rd-party stuff: also check-in a document containing the URL, license key, contact info, etc. so that future team members can upgrade it.
devstuff
This idea was probably started by people using Visual SourceSafe, which would probably barf fairly quickly if you added DLLs.
Martin Brown
my reason for forbidding generated binaries is that you're never sure exactly what they're built from. If someone does an svn update, then changes some source, rebuilds, then checks in the binaries without the changed source, you can't debug/reproduce it. Third party libs are fine to check in.
dj_segfault
Another good solid idea which got corrupted through thoughtless application. When people spout rules without being able to explain the justification you know it's a bad time for everyone.
duncan
Sure! There's something to be said for one place to find all the things you need. If a project needs these dependancies to build or run, why would you not put it all in one place for a developer to find?
Jeremy
Checking in binaries is bad because: it does not scale (spend an hour checking out the source tree, or fifteen minutes committing a single changed file), you can't diff, you don't know where it came from, and there are better places to put binaries (see Ant + Ivy).
Rob Williams
@Martin Brown, works fine for me. We have a very large VS Solution in progress here, and with 15 or 16 projects under the one solution, sometimes it's just easier to check a benchmarked DLL into VSS for the others, rather than everyone spend hours compiling all the projects in the solution.
Pat
To anyone saying checking in binaries is bad because people might check in the binary without checking in the code, it doesn't help me if I can get neither the current binary nor the current code.
Jimmy
echoing dj_segfault, I've been at companies where the binaries that were checked in were actually PATCHED and checked in, and there was pretty much no way to tell what the code was actually doing without decompiling it.
Will Sargent
Agreed with those that exclude generated binaries.. I have bin and obj in my ignore list for tortoise myself (along with .user and others)...
Tracker1
@Pat, I've done things like that, where I have a "Releases" folder, where I will put builds meant for distribution specifically.
Tracker1
I think the converse advice comes from merged binaries being corrupted, eg if two people edit the same image and commit and merge without thinking, you end up losing the image. (Of course you just get it back from an earlier revision but it can be a hassle.)
DisgruntledGoat
A nice thing about the .NET platform is that 3rd party vendors usually ship .pdb and .xml files along with their DLLs. I suppose the same practice/possibility exists in Java and other technologies?
m1k4
Checking in third-party libraries and images is fine, but I draw the line there. The rest of the versioned files should be source code (in text form).
Loadmaster
Binaries that are checked in are acceptable if: (a) they are resource or image files that are part of a build rather than a build product, and (b) they are not archives (zips). (c) Anything where you need to read and review diffs, or do merges, should not be stored in a binary form where any alternative exists whatsoever. Delphi form files (.dfms), for example, can be stored as binary or text. Changing the IDE to output text dfms results in a "diff" that is human readable, and that is another great reason to avoid binaries.
Warren P
+683  A: 

Most comments in code are in fact a pernicious form of code duplication.

We spend most of our time maintaining code written by others (or ourselves) and poor, incorrect, outdated, misleading comments must be near the top of the list of most annoying artifacts in code.

I think eventually many people just blank them out, especially those flowerbox monstrosities.

Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness.

On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the "this next line adds one to invoiceTotal" style of commenting.

Ed Guiness
I'm all for comments that describe methods, parameters or particularly complex chunks of code, but comments like "loop through the list" are just pointless. I seem to remember back in the mists of time being taught that for every line of code I wrote I should write 2 lines of comments :S
Steven Robbins
Amen to that. I still shiver when I remember that Perl script I received 10 years ago, where *every* *single* *line* was preceded by a comment describing (often wrongly) what the next one did: `# Increase $a by 1` followed by `$a = $a + 1;`
niXar
The code should tell you how...the comments should tell you why...
Richard Ev
Sometimes people go truly overboard with comments - often to cover up the weaknesses of their (deficient) algorithms. What I personally hate is *lack* of good documentation, especially in complex code that warrants it.
Dmitri Nesteruk
I use comments in code quite sparingly, and always in spots where the intent of the code isn't as clear as it should be. I also use comments to document public library methods so that instead of looking at the code the user can read a quick synopsis of what it should do and return.
thaBadDawg
This seems to be people misunderstanding the differences between school and work. Teachers want pupils to explain what they were trying to do so they can correct the code to match the intent. Once one is writing code that will be read by peers the purpose and content of comments is different.
duncan
If you can't understand my code without comments, there's something wrong with my code. Adding comments may mitigate the problem, but doesn't fix it.
Jay Bazuzi
The book "Refactoring" (by Martin Fowler) identifies comments as one of the "code smells": if the code needs comments, it isn't clear enough and needs to be refactored.
ShreevatsaR
Amen, brother. I think that if your code needs comments you're doing it wrong.
Sara Chipps
The way I approach commenting is that you should comment what you want to achieve, before you write the code. Code does not always illustrate what the programmer intended to do, but the comment can, which makes it easier on a maintenance programmer, particularly if he's trying to fix a bug.
Jeremy
My university had a lecturer who handed out an assignment and let everyone know you'd lose marks for not commenting every function, except the obvious ones, for which you'd lose marks for commenting. Which actually made you think a little more... I was the only one who did, btw.
billybob
I once worked on a 300 line function that was 50% comments, of which every comment described the line of code like "increments i, if x is true then". It's like the programmer was told he needed to comment every line, so he did.
Cameron MacFarland
@Jeremy: what if you replaced "comment" with "unit test"? It has the same function + you can easily verify that it holds true.
Jay Bazuzi
@Jay Bazuzi, I agree that the unit test verifies that it holds true, but the unit test doesn't show a maintenance programmer what's the code SHOULD do, and you can unit test until you're blue in the face but we all know that every app still has bugs, so unit tests aren't perfect either.
Jeremy
I have actually used an Emacs macro that made all the comments invisible.
Hemal Pandya
Simple rule I use when commenting: Don't comment *what* you did, comment *why* you did it. I can see what you did; the question is typically why in the world you would want do it that way (and there are often non-obvious reasons)
LKM
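LKM's what-vs-why rule in a Python sketch; the retry scenario and its numbers are invented for illustration:

```python
RETRY_LIMIT = 3  # assumed limit for the sketch

def fetch_with_retry(fetch):
    """Call `fetch` until it succeeds or the retry limit is hit."""
    # Bad "what" comment (duplicates the code, rots when the code changes):
    #     loop RETRY_LIMIT times calling fetch
    # Good "why" comment (records the non-obvious reason):
    #     the upstream service drops a small fraction of requests during
    #     failover, so transient IOErrors are worth retrying.
    for attempt in range(RETRY_LIMIT):
        try:
            return fetch()
        except IOError:
            if attempt == RETRY_LIMIT - 1:
                raise
```

Anyone can see *what* the loop does from the code itself; only the "why" comment survives as useful information once the reader knows the language.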
+1 for comments describing "why" instead of "what"; I can figure out what if I know the language and API.
cliff.meyers
If you're using the ubiquitous language of your business (http://domaindrivendesign.org/discussion/messageboardarchive/UbiquitousLanguage.html), you don't need to comment the "why," because it will be self-evident. If it isn't, then you need to reconsider your design.
Michael Meadows
using the word 'pernicious' is sufficient cause for an upvote. (being right is another sufficient cause)
Leon Bambrick
A comment is an apology. http://butunclebob.com/ArticleS.TimOttinger.ApologizeIncode
Esko Luontola
If i could, i'd +2 !
Benoît
A comment is an apology. Sometimes an apology IS NEEDED, but try to avoid having to comment if possible.
Ian Ringrose
Not that it's all that applicable any more, but perhaps some of the Comments Are Good mantras stem from the days (for those who were there) when real coding was done in assembler ;-) Sometimes, it wasn't clear you were incrementing a counter :-D
DCookie
Well I'm going to be the dissenting voice. I like 'what' style comments *at a high level* because I can see what code is supposed to do without needing to flip back and forth between various functions and unit tests or mentally separate the descriptive code from the utility code.
Whatsit
@ShreevatsaR: you are badly misquoting (no offense). I have Martin Fowler's "Refactoring" open above my keyboard right now. Let me quote from the "bad smells" chapter. "...comments aren't a bad smell: indeed they are a sweet smell... [but] they are often used as a deodorant. It's surprising how often you look at thickly commented code and notice that the comments are there because the code is bad."
MarkJ
I was taught to think of a comment as saying sorry to whoever is reading your code. You're apologising that the code could not be clear enough for easy reading, and thus placing a comment in to explain yourself. There is no other reason to comment in code.
Robert Massaioli
Yes! Generally, needing to comment means the code is badly written; rather than write comments, reduce the size of your functions.
Jacob
I think in a complicated algorithm, comments saying things like "If we get here then either X or Y" help a lot with understanding how the code works; this is true however good the code is. Yes, if the code is good then someone reading it can work that out for themselves, but not instantly if it's a complicated algorithm. I also think you shouldn't need to read the code to work out how to use a function. Ideally just the name of the function and its arguments should be enough, but realistically that will often not be true, and a comment can be very useful.
Mark Baker
// next line exits the function and returns True.
nilamo
return True; // and never False
nilamo
Literate Programming (Knuth) comes to mind: both the code and the comments are important, and literate programming gives you a way to treat them equally. (Some tools and assembly required.) If a man cannot treat two wives equally, he should have only one.
Don
@Shhnap -- let's say you're working with a large team on frontend website code though. Or working with third-party APIs. In both these cases you eventually run into browser bugs or quirks or issues in someone else's code. Comments are in this case needed to say "sorry about this other person's code". Or -- "sorry that I didn't take all day to figure out a beautiful elegant workaround for this CSS issue" when there is a well-known hack that works perfectly well to address the problem, assuming whoever leaves the hack adds a brief comment to explain it?
Ben
@Richard Ev, I wish I could vote your comment up 10 times.
Renesis
@Richard Ev: And the VCS should tell you when, by whom, and what the code is, all making up the perfect news story of who, what, when, where, why, and how when someone asks you about the code.
Zak
@MarkJ: Thanks! I was quoting from (faulty) memory; apologies for misquoting. Looks like I remembered only the vague idea… [I had missed noticing your comment for some reason.]
ShreevatsaR
Yay, I made the upvote to 666. :)
Albert
+322  A: 

The use of hungarian notation should be punished with death.

That should be controversial enough ;)

Marc
Nope, not controversial enough. Let them rewrite the complete works of Shakespeare in Hungarian notation: a verb prefixed with v, a noun prefixed with n, etc.
Gamecat
eh i dunno, i like it for some objects, like textbox = txtFirstName, etc
Shawn Simon
http://www.joelonsoftware.com/articles/Wrong.html
Ikke
OK, that's controversial. Sure, HN can be abused, but hating it is one of those judgemental attitudes that, IMHO, come from ignorant profs and bloggers who sound good.
Mike Dunlavey
My understanding is that the original intent was more along the lines of military naming (e.g. boot-parade, boot-combat), which has value in some cases, and it got corrupted through usage into something else of lesser value. So it depends what you mean when you use the term, as in so many things.
duncan
my boss would love you. I on the other hand ... :-D
J.J.
Very controversial! My dev team argues that too, but I prefer Hungarian notation. I can tell a variable's data type and scope just by looking at the code. I actually think non-Hungarian coding is sloppy, and the only reason for not doing it is laziness, no matter what your actual argument is :)
Jeremy
I simply disagree. You clearly can go overboard with it, but in C, for example, the lack of a preceding "p" on a pointer should in and of itself be punishable by death.
Tall Jeff
+1, although with reservations: IMHO full blown Hungarian obscures code readability but use of some basic rules - such as p for pointers, does quite the reverse. It's a question of balance
Cruachan
System Hungarian notation is the devil; Application Hungarian notation is a really neat solution. In Apps HN the name denotes not the type but information about the contents: screen_x, paper_x, document_x denote coordinates referring to the screen, paper, and document, all of which are ints
David Rodríguez - dribeas
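The Apps Hungarian idea from the comment above can be sketched in a few lines of Java (the names `docX`, `screenX`, and the scroll-offset scenario are hypothetical): every value is a plain `int`, so only the prefix carries the meaning, and a mix-up of coordinate systems becomes visible at a glance.

```java
public class AppsHungarian {
    // Apps Hungarian: the prefix encodes what the int *means*
    // (document vs. screen coordinates), not its machine type.
    static int scrollOffset = 100;

    // Converts a document-space x coordinate to screen space.
    static int screenXFromDocX(int docX) {
        return docX - scrollOffset;
    }

    public static void main(String[] args) {
        int docX = 250;
        int screenX = screenXFromDocX(docX);
        // A line like "screenX = docX;" would compile fine,
        // but now looks wrong on sight: the prefixes disagree.
        System.out.println(screenX); // prints 150
    }
}
```

This is the distinction usually drawn between Apps Hungarian (encoding intent) and System Hungarian (redundantly encoding the compiler-checked type, as in `intCount`).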
I like it when an interface starts with I. Like IComparable, IEnumerable, IEquality...
tuinstoel
@"every body in favour of HN" - wait till you guys discover implicitly typed variables!
SDX2000
+1 to HN prefixes identifying nontrivial types (e.g. a textbox widget) and special classes of variables (pointers, interfaces, etc.); -Infinity to all other HN forms.
SDX2000
I never understood the need for HN in strongly typed languages like C. One, you should be able to remember what you are using a particular variable for. Two, using HN you lose some abstraction. Do you really need to know the type of 'ItemCost' to know if it can be used in the 'CalcItemTax' function?
Kronikarz
You can have my 'm_' prefixes on my member variables when you pry them from my cold, dead, fingers!
Jim In Texas
Joel Spolsky wrote a nice post referring to HN: http://www.joelonsoftware.com/articles/Wrong.html
sharkin
That IS a good one! But, I think there are FAR worse offenses than Hungarian...
LarryF
Dumb da da dumb dumb dumb
Seventh Element
Yeah, but I don't write code for me, I write it for all the other people who *aren't* me. Personally, I prefer HN, but I don't currently use it because it's not in the frameworks. It's a difficult one.
@Gamecat: I love you.
Matthew Scharley
What would you do when you have more than 25 controls in a GUI and you have to refer to each one of them often? Even if you are the one defining their names, you will never remember each one of them (anyone claiming the opposite lies). The only answer is to have it: txtControlName1, txtControlName2, txtControlName3 ... and autocompletion will save you
YordanGeorgiev
Leves felét egy kutya. :)
Jan Remunda
My opinion is that hungarian notation is easily misapplied and it is easy to change the code and not update the name of the affected variables, resulting in misleading variable names.
Kwang Mark Eleven
HN describes how to name local variables and private fields, not public classes and methods. The I in IEnumerable isn't HN.
Justice
@Justice, Read "Agile Principles, Patterns and Practices in C# (Martin)". The authors state that the I of IEnumerable is HN and that you shouldn't use it.
tuinstoel
I was taught to use HN from my first day in the industry, when I was programming Access databases. It was a revelation for me, because I could easily tell by looking at a variable what was its scope and its type. Yes, it involves a bit more typing, but if it makes the code more readable and intuitive, that's a small price to pay. In these days of intellisense, the extra typing argument is superfluous anyway.
Billious
@Billious: In these days of Intellisense, pervasive HN is superfluous. You can just mouse over a variable to see its type and scope. I do use it for form controls though.
jnylen
Hungarian notation is horrible. How about just naming variables reasonably?
Joren
Actually, back in the older days of C, with the less capable IDEs, and especially for Windows, the "prefix notation" made perfect sense. Since an integer could have been a number, a pointer, a handle, a pointer to a handle, a pointer to a pointer, etc., this was actually helpful. In fact, the original "Hungarian" concept came from a completely different context. What is unforgivable is how it was forced on a language like Visual Basic in some "Emperor's Clothes" mentality, and thank heavens the community finally woke up.
Swanny
With C#, I prefix private fields with "_" and, yes, interface names with I. This is common practice and friendly on the eyes. Using something like `m_count` is acceptable to some, but to me, it's hideous. Hungarian Notation, however, is just a plague. I first came across HN during my VBA programming days. I bought into the "wisdom" somewhat, but I decided to move all the visual garbage to the *end* of the useful part of the variable name instead of the beginning, hence I would write `countInt` instead of `intCount`. Same thing with visual controls today. What's wrong with `NameTextBox`?
DanM
I hate hungarian notation with a passion. I make an exception for prefixing pointers with p (or p_, whatever). There is a very good case that a pointer is a variable of a different _kind_.
Kelly French
I wish I could up vote this a second time ;)
ceretullis
I think it varies: depends what language you are working in, whether the workplace has a consistent set of Hungarian notation conventions, etc. Works fine if it really begins to approach a situation where every developer on the team would give the same name to a variable, but it's a disaster if each developer has their own idiosyncratic variation of Hungarian notation.
Joe Mabel
Has anyone written a SWIG-like utility to automatically de-Hungarian code?
Mike DeSimone
Prefixes and rules can be handy, as can naming conventions. They can also suck. Delphi programmers and MFC C++ programmers often at least note data fields with a prefix, and classes with a prefix. Cocoa developers are used to NS prefixes on the provided AppKit/Cocoa classes, and write their own class names with their own prefixes (BNR for Big Nerd Ranch etc). I think that "classes and private/protected fields need a prefix" is enough prefixes for me. And I like english suffixes (Form or Control or Window, for properties which are meant to point to a Form, a Control, a Window, etc)
Warren P
There are three places I like a hungarian-style decoration: 1) to denote a member variable, 2) to denote a pointer, 3) to denote a GUI widget. The rest I could take or leave.
Mike Clark
+444  A: 

1) The Business Apps farce:

I think that the whole "Enterprise" frameworks thing is smoke and mirrors. J2EE, .NET, the majority of the Apache frameworks and most abstractions to manage such things create far more complexity than they solve.

Take any regular Java or .NET ORM, or any supposedly modern MVC framework for either which does "magic" to solve tedious, simple tasks. You end up writing huge amounts of ugly XML boilerplate that is difficult to validate and slow to write. You have massive APIs, half of which exist only to integrate the work of the other APIs, interfaces that are impossible to reuse, and abstract classes that are needed only to overcome the inflexibility of Java and C#. We simply don't need most of that.

How about all the different application servers with their own darned descriptor syntax, the overly complex database and groupware products?

The point of this is not that complexity==bad; it's that unnecessary complexity==bad. I've worked in massive enterprise installations where some of it was necessary, but even there, in most cases a few home-grown scripts and a simple web frontend are all that's needed to solve most use cases.

I'd try to replace all of these enterprisey apps with simple web frameworks, open source DBs, and trivial programming constructs.

2) The n-years-of-experience-required:

Unless you need a consultant or a technician to handle a specific issue related to an application, API or framework, then you don't really need someone with 5 years of experience in that application. What you need is a developer/admin who can read documentation, who has domain knowledge in whatever it is you're doing, and who can learn quickly. If you need to develop in some kind of language, a decent developer will pick it up in less than 2 months. If you need an administrator for X web server, in two days he should have read the man pages and newsgroups and be up to speed. Anything less and that person is not worth what he is paid.

3) The common "computer science" degree curriculum:

The majority of computer science and software engineering degrees are bull. If your first programming language is Java or C#, then you're doing something wrong. If you don't get several courses full of algebra and math, it's wrong. If you don't delve into functional programming, it's incomplete. If you can't apply loop invariants to a trivial for loop, you're not worth your salt as a supposed computer scientist. If you come out with experience in x and y languages and object orientation, it's full of s***. A real computer scientist sees a language in terms of the concepts and syntaxes it uses, and sees programming methodologies as one among many, and has such a good understanding of the underlying philosophies of both that picking new languages, design methods, or specification languages should be trivial.

Daishiman
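For readers who haven't met them, the "loop invariants" point 3 demands are exactly this kind of reasoning, shown here on a trivial Java loop (the class and method names are made up for illustration):

```java
public class LoopInvariant {
    // Invariant: before each iteration, total == sum of a[0..i-1].
    // Initialization: i == 0 and total == 0 (the empty sum), so it holds.
    // Maintenance: adding a[i] to total and then incrementing i preserves it.
    // Termination: the loop exits with i == a.length,
    // so total == sum of the whole array.
    static int sum(int[] a) {
        int total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```

Being able to state the invariant is the point: it is what lets you argue the loop is correct rather than merely observe that it seems to work.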
1) I used to work for a big multinational that rhymed with Dunisys. Anyway we used to use the word "Enterprisy" to mean any solution that wasn't complex enough. Like, "asking the user for a password isn't enterprisy enough".
Cameron MacFarland
2) Back in 2002 I once saw a job ad asking for 2-3 years of C# experience. This basically restricted the job to those who worked on the original C# design team.
Cameron MacFarland
Regarding (3), you sound like my echo.
Mike Dunlavey
Man, make these separate so I can provide yet another vote up for #2!
skiphoppy
I believe in abstraction layers. I think the application should be abstracted from the OS in most scenarios. Finally .NET/Java decouple the app from the OS! Besides, when a programmer can concentrate on the usability of an app over low-level code, you always get a nicer application.
Jeremy
+1 very nice opinion
0xA3
Jeremy, .net only does that decoupling if you take care to use only the decoupled part.
Svante
The number 1 deserves more than one vote..., well said!
Alex. S.
#3: The more CS degrees were like you wish they were, the more I'd regret not having one. As it is, I only regret it a little.
JasonFruit
-1 for #1, +1 for #2, +1 for #3. Only having 1 vote is not a problem here.
MusiGenesis
I HATE #2, more so when they say something like 20 years of python, which is only 18 years old!!!
Unkwntech
Unnecessary complexity == complications
mike g
@Cameron: Heh, I remember seeing requirements for 8+ years of (insert web technology) a lot in the mid 90's.
Tracker1
+1 for #2 in particular (as one with >20 yrs experience): what is needed are problem solvers, which has a bearing on #3 as well.
DCookie
wow :) I quoted your first point in here: http://stackoverflow.com/questions/781191/making-life-better-by-not-using-java-web-frameworks/785118#785118 thanks!
dankoliver
You totally said it ! :D
majkinetor
As far as #1, I think there is a tradeoff. Enterprise frameworks add a lot of structure that helps mediocre developers think about the problem in a more organized way. As a PHP developer, I've seen a lot of code that was the result of a developer just thinking of a page as a long list of commands to be executed sequentially. Applications built in this manner are incredibly difficult to debug and keep working, let alone build on top of.
notJim
What each business wants is often completely different from the next one; forcing frameworks to fit just doesn't work, and you always have to do workarounds. Keep it simple and custom to the users.
PeteT
Regarding (3) - I agree that CS courses should be as you say. But I don't think CS courses are useful training for a career writing software. Hmm - maybe I should add that as my own controversial opinion!
Tom Anderson
If I may quote Dijkstra here: "Computer science is no more about computers than astronomy is about telescopes." This also applies to coding. Having a CS degree can only be a benefit, but the common misconception is that computer scientist == programmer. If you have no idea about code complexity (O(n) notation), then your code may be bad (inefficient). So yep, I disagree: having a CS degree does make you a better coder, because you understand the theoretical background.
Tom
...and here, on StackOverflow, you find your soulmates. Upvote!
Mark
can't agree more with number 3
Domenic
n-years-of-experience-required does matter but it shouldn't go beyond 2
Yassir
@Jeremy - .NET only decouples the app from the OS if by "OS" you mean "Windows". Since C#/.NET really only work on Windows, it is the ultimate in platform coupling. Java only decouples the app from the OS if you _have_ all the OSes you want it to run on to make sure there aren't any subtle bugs in the complex library support code that rear their heads differently on different OSes.
Chris Lutz
Totally agree with point 1 !
Preets
I don't find 3 controversial, I find it to be factual.
amischiefr
I wish more people (employers) felt this way. I could have a much better job right now.
Cogwheel - Matthew Orlando
#3 is controversial? Really? I thought it was clear as day, and those that didn't think so were hopelessly stupid. You really should have posted 3 separate answers.
MAK
@notJim "Enterprise frameworks add a lot of structure that helps mediocre developers think about the problem in a more organized way". WOAAHH! I couldn't **disagree** more!! IMO this 'structure' reduces the thinking (the reason they're only mediocre) of such developers. The problem is: they use frameworks to add huge volumes of 'functionality' without understanding the impact of their decisions. This typically leads to inefficient systems with far more complexity than is required. See: http://stackoverflow.com/questions/406760/whats-your-most-controversial-programming-opinion/406775#406775
Craig Young
Gaah! Stop posting several answers in a single answer! How hard can it be to separate them out?
Timwi
Yes, yes, a thousand times yes on all three points!
Max Shawabkeh
#3: thank you! As a degree-less 7-year professional programming "veteran" (who admittedly started coding at age 10), I was floored that people made it through the degree without knowing what I considered to be the basics. I still want to go back to school, but I need to find a *good* compsci program in the KC area.
Jaime Bellmyer
@Jaime School is often more about the proverbial piece of paper than learning concepts; I know many graduate software engineers who have no concept of object orientation, let alone big O. I'd only go back to school for research.
Kirk Broadhurst
It would be better if these answers were split into three. I agree with #1 quite strongly, but disagree with #2 almost equally as strongly.
Erick Robertson
Modern MVC frameworks create more problems than they solve? Hmm. Please show me some decent webapp you've written in plain CGI. (And forgive me for not taking such radical statements too seriously from someone who's still in college)
Nikita Rybak
@Nikita ASP.NET, Struts, Silverlight, J2EE, Zend: all of them pretty worthless. I talked about "enterprise" frameworks, and this opinion holds true after having used all the aforementioned ones. My favorite framework as of now is Django, but it's far from the other monsters I have mentioned, and I'm quite partial to the new micro-frameworks trend. I may be in college, but I make a good living out of this and I have been coding for more than 10 years, which is more than what many other so-called professionals can say for themselves.
Daishiman
+175  A: 

Every developer should be familiar with the basic architecture of modern computers. This also applies to developers who target a virtual machine (maybe even more so, because they have been told time and time again that they don't need to worry themselves with memory management etc.)

Brian Rasmussen
Agreed; and add a real, solid low-level language, just to get some 'feeling' for that architecture. C is good for this
Javier
As soon as you say "every" that should be a hint that something is wrong with a statement.
PhoenixRedeemer
Change "Every developer should..." to something like "You can't call yourself a real developer if..." (with obvious follow-on changes) and your point is both better and more controversial.
duncan
I'd change this to say every developer should understand, at a basic level, how any platform they utilize works, whether it's hardware or software. I've seen too many people using tools like Ajax, ADO.NET, and ASP.NET without really understanding what's happening under the hood.
Jeremy
@Jeremy: +1 for understanding ASP.NET under the hood. You have to understand JavaScript and HTML for when ASP.NET's magic voodoo breaks down and it doesn't work the way you thought.
Jared Updike
@PhoenixRedeemer: So "every developer should be competent" is wrong too?
jalf
"Every" time someone focuses on some detail about the word choice of a sentence rather than on its implied idea, they should be punished.
Nick
This has to do with the law of leaky abstractions. See http://www.joelonsoftware.com/articles/LeakyAbstractions.html As a corollary, programmers should be curious and able to learn about the abstractions they rely on. Nobody will understand or remember all the abstractions, but I'd say it's a good sign if you can identify a time in your career or education where you learned how things worked under the hood and understood a leaky abstraction. Some examples: integer division and float vs real, memory hierarchy, TCP/IP, SQL optimization, race conditions, pointers, references, etc...
Kimball Robinson
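The "integer division and float vs real" leaks mentioned in the comment above are easy to demonstrate; this small Java sketch (class name hypothetical) shows two classics that bite developers who ignore what's under the hood:

```java
public class LeakyAbstractions {
    public static void main(String[] args) {
        // Integer division truncates toward zero: the machine's
        // integer divide leaks through the language.
        int half = 1 / 2;
        System.out.println(half); // prints 0

        // Binary floating point cannot represent 0.1 or 0.2 exactly,
        // so the representation error leaks into comparisons.
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3); // prints false
    }
}
```

Neither result is a bug in the language; both follow directly from the underlying hardware representations, which is exactly the answer's point about knowing basic architecture.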
A: 

Once I saw the following from a co-worker:

equal = a.CompareTo(b) == 0;

I stated that he couldn't assume that in the general case, but he just laughed.

Rauhotz
I'd be interested in hearing your reasoning here - as well as which CompareTo method you're talking about.
Jon Skeet
I'm talking about the C# IComparable.CompareTo method. Don't expect that two IComparable-implementing objects are equal if the CompareTo method returns zero. They just have the same order.
Rauhotz
Then your implementation of IComparable is broken. The docs state that a return value of zero means "This instance is equal to obj." I'm not saying that there aren't broken implementations out there, but your colleague can reasonably point to the docs...
Jon Skeet
I'd argue that if things don't have a natural equality/ordering relationship, it's better to have a separate IComparer implementation, which can express this explicitly. There are certainly tricky edge cases - is 1.000m equal to 1.0m for example?
Jon Skeet
That's a good case of a narrow-minded view; check out the many 'compare' predicates in Scheme.
Javier
Jon, could you be so kind and point me the lines in the docs, where it says "a.CompareTo(b) == 0 implies a.Equals(b) == true"?
Rauhotz
Sure. The docs for IComparable.CompareTo mean that "a.CompareTo(b) == 0" implies "a is equal to b". The docs for object.Equals mean that "a.Equals(b)" should return true if a is equal to b. It can be argued that the documentation is too narrow or incomplete (Java's docs are more careful on this front)
Jon Skeet
but the documentation really does seem fairly clearly limiting. It does say that the meaning of "equals" depends on the implementation, but it's at the very least confusing for "equals" to mean something different *within the same type* between two methods.
Jon Skeet
That's why I think it's clearer to implement non-natural orderings (i.e. where equality within ordering doesn't mean equality between objects) via IComparer instead of IComparable.
Jon Skeet
JavaDocs where it's nice and clear (compared with the MSDN ones for IComparable): http://java.sun.com/javase/6/docs/api/java/lang/Comparable.html It even says how to document times when you violate consistency with equals.
Jon Skeet
Of course, given the number of comments discussing this line, I think we're justified in considering it at least a bit unclear.
David Thornley
On the other hand, 7 of the comments before this one were mine :)
Jon Skeet
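Since the Java Comparable docs were brought up: Java's `BigDecimal` is the standard library's own documented example of the edge case discussed above (`1.000m` vs `1.0m`). Its `compareTo` returns 0 for two values of equal magnitude, while `equals` also compares scale, so its ordering is explicitly "inconsistent with equals":

```java
import java.math.BigDecimal;

public class CompareVsEquals {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.0");
        BigDecimal b = new BigDecimal("1.00");

        // Same numeric value, different scale (1 vs 2 decimal digits).
        System.out.println(a.compareTo(b) == 0); // prints true
        System.out.println(a.equals(b));         // prints false
    }
}
```

So `a.CompareTo(b) == 0` as an equality test is safe only for types whose ordering is documented as consistent with equals; for the others, exactly the trap from the original snippet applies.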
+656  A: 

Not all programmers are created equal

Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another.

It's politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it's not always the case. I have even seen cases where lead developers were 'beyond hope' and junior devs did all the actual work - I made sure they got the credit, though. :)

Dmitri Nesteruk
Funny thing is... it's usually the worst devs that think they are 10x or 100x better than the others
John Kraft
Not in my experience :)
Dmitri Nesteruk
I wasn't convinced about this being "controversial" until your point about the politics of recognizing it. Good point, there.
Adam Bellaire
Yeah, good luck explaining to your boss that In Your Humble Opinion, Joe is 10 times better programmer than Jack when your boss pays them equal wage for identical positions. Dangerous!
Dmitri Nesteruk
Another point to make is that prolific != skillful.
Marcin
Excellent points, everybody...
Mike Dunlavey
"one developer can be 10x or even 100x that of another" - priceless
01
Why does it matter whether this opinion is C#-related? Is this part of the thing I see occasionally where people seem to speak as if stackoverflow were a dedicated C# and .NET discussion platform?
chaos
Heh. C# is in my ignored tags ;o)
Svante
Well, the question was asked by Jon Skeet, thus my disclaimer."one developer can be 10x or even 100x that of another" - that's what we call "taken out of context" :)
Dmitri Nesteruk
Unfortunately, I've read some of the code my tech lead has written in the past, and I have concluded that I've seen things no other developer should have to see.
moffdub
Although there do exist programmers that are significantly better at their job than most if not all of their colleagues, they are such a rare breed that you may never work alongside one during your entire career, which may span several decades.
Seventh Element
+1 - software development is a rare field where one person can literally be worth 1000 others or even more... though for sure it NEVER reflects proportionally in financial appreciation!
YordanGeorgiev
+1 because I recognize myself in the "lead developers were 'beyond hope' and junior devs did all the actual work" part. (me being the lead developer) :-)
jao
+1 because this is so true and makes estimation very hard
Jimmy
PHB says the one who commits 1000 lines of code per day is 10x as good as the one who only does 100 lines. :)
James M.
I heard this saying once, which has kept me wanting to improve ever since: "Do you have 9 years' development experience, or 1 year's experience 9 times over?"
cometbill
lead vs junior vs senior is less interesting to me in this as most titles/positions are gained politically anyway. However, I love the 10x and 100x comparison because it is very true. Developers are not interchangeable. The greatest metric I've seen so far is using success as a metric itself. Pontification about academics or architectural correctness is a lot less valuable than someone who ships quality code, on time, and in budget. I often value developers whose code is heavily referenced, reviewed, or blatantly copied/stolen by peers or other teams. Productivity is like a magnet.
Stuart Thompson
This isn't controversial, but it is 100% true!
Chris Pietschmann
hmm, should it be DeveloperA != DeveloperB or !DeveloperA.Equals(DeveloperB)??
Sunny
What does that say about the manager?
JerryOL
+516  A: 

If you only know one language, no matter how well you know it, you're not a great programmer.

There seems to be an attitude that says once you're really good at C# or Java or whatever other language you started out learning then that's all you need. I don't believe it- every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be.

It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn't necessarily tally with the qualities I would expect to find in a really good programmer.

glenatron
Would you go as far as to say if you only know one *kind* of language you're not a great programmer? For instance, knowing Java and C# isn't that much better than knowing just one or the other - but knowing Java and Haskell will give much more of an open mind, IMO.
Jon Skeet
@Jon Agreed. I learned Haskell at uni and now it has helped a lot with learning LINQ, lazy evaluation, etc. The concepts are what make a language worth learning.
Cameron MacFarland
I'm a little shy of the adjective "great" applied to programmers, but I know what you mean.
Mike Dunlavey
completely reasonable and uncontroversial opinion - you fail, sir
annakata
You'd be surprised how many people have really laid in to me for saying it- perhaps not so controversial to a Stackoverflow audience, but it certainly is in other company...
glenatron
I'm not sure I agree. Even though I kind of happen to know a few languages, if I'm interviewing a person who only knows C# but to a very good degree, they'll get hired. Would you pass by an expert just because they don't diversify? I wouldn't.
Dmitri Nesteruk
@nesteruk It depends on what you want the person for. You may not need a 'great programmer' like glenatron said, but a man to do one specific job, so the expert in this case would be even more useful than the 'great programmer'
Pablo Fernandez
My image of a good programmer is one who knows multiple languages, and who is also, by the way "white" and male. I don't need to be told I can be really, really wrong.
Mike Dunlavey
Off topic, but I believe these arguments apply analogously to learning more than one natural language as well. I wish I knew a few more so I could think in new and creative ways.
Adrian Pronk
@Adrian Pronk - absolutely. Every time I find myself in a foreign-speaking area I wish I knew the language better.
glenatron
IMHO a great programmer can use any language immediately with a good reference.
Gary Willoughby
If you author books in only one language, you're not a great storyteller?
Brian
I think it's just nuthugging. I know more than one language, but I won't learn Ruby and the like any time soon, because I truly believe it is pointless. You just can't admit that every language is almost the same. Whatever, feel like you are better.
01
I would say it depends on the programming language: VB programmers can never be considered `great programmers` without knowing other languages.
Brad Gilbert
I would tend to agree. I think a well-rounded developer knows, or has some working knowledge of, many languages and keeps up to date on them. I've found that some useful design patterns manifest in one language but can be applied in others. Knowing one language limits exposure to things like that.
Jeremy
If you really are happy just knowing one language, that's fine, but *please* do the world a favor and keep your mouth shut about how great your language is compared to others. You've got no substance to back it up.
dreftymac
@annakata - witness Mark's comment there - I told you it was controversial.
glenatron
@Gary - I'll have to disagree simply because there are _some_ classes of languages which I wouldn't expect anyone to learn from just a reference. Imperative, OO and functional - sure, but what about other paradigms? What about Prolog, Forth, FP, Icon, Befunge (Ok, esoteric, but still)..
Dan
I like my one language.
xoxo
Corollary: If you only know one kind of language, you...oh, Jon said it right up at the top there...
Rob
Learning a new language is easy for any programmer, learning the framework in depth takes much longer and is more beneficial.
Cookey
Not focussing too narrowly on a single programming language or platform will definitely widen your horizon and offer opportunities to better understand what you are doing. On the other hand, I do think that focus is important and proficiency comes at a cost.
Seventh Element
@Brian - it would be more analogous to say that an author who knows multiple languages is a more creative writer, though not necessarily a better one
dragonjujo
@Dan, agreed here... I find I'm able to pick up OO-style languages fairly easily; the more functional a language, the harder time I have with it conceptually. I love loosely typed languages (JS is my fav; that's controversial).
Tracker1
Great answer glenatron. I strongly concur!
Yoely
Data from the Cocomo II estimation model shows that programmers working in a language they have used for three years or more are about 30 percent more productive than programmers with equivalent experience who are new to a language (Boehm et al. 2000). An earlier study at IBM found that programmers who had extensive experience with a programming language were more than three times as productive as those with minimal experience (Watson and Felix 1977). Having said that, I believe that a dev should know several languages also, but being fluent in one counts for something.
Nemi
In fact, I think everyone should learn an imperative OO language (Java, C++, Python, whatever), a functional language (Haskell, Erlang, OCaml, whatever), a concatenative language (Factor, Forth, Cat, Joy, etc.), a logic programming language (Prolog, Mercury, etc.) and a dataflow language (LabVIEW, Esterel, Lustre, Verilog, Pure Data, Max/MSP, etc.). This combination will show you that a) there are many radically different paradigms out there, b) some languages really are different, and c) you cannot learn them all from a reference.
Dan
I would argue that a person is not a master of their field until they are sufficiently well versed in their options such that they both have the knowledge to pick the right tool for the job, and understanding to be flexible enough to do so. For a programmer on a single platform, this might mean a deep knowledge of the language's tools and libraries, but for someone designing a solution, this also means a deep knowledge of programming language paradigms.
T.R.
I think that language is secondary - either you have the aptitude to be a developer or you do not. Those that do can transition between languages; a lot of people who pigeonhole themselves (by choice) are on the shorter side of the aptitude equation. A good programmer will get the job done; a great programmer will adapt, apply language/framework-specific paradigms to reach a solution, and look at how the equivalent can be executed on the next toolset.
joseph.ferris
If I'm an expert at language X, then I can be a great X programmer without knowing any other languages. And I'd argue a great X programmer is a subset of 'great programmers'.
Kirk Broadhurst
@Kirk - this is precisely the opinion with which I disagree absolutely. I see "great java programmers", "great perl programmers", "great C programmers" and so on as a superset - a great programmer will certainly belong to one of those groups, but they will also be able to use a few other languages too. I honestly don't see how you could achieve that level of greatness in one language without having the ability to pick up others easily.
glenatron
It's the generalist vs specialist argument. If you don't particularly enjoy programming but want to do it as a job, you could survive by just knowing one language very well. If you want any kind of growth in your career, become a generalist.
baultista
@Dan, that's why they invented Oz, a multi-paradigm language, for teaching at EU, it only lacks the concatenative facet, but layers paradigms one over the other, in a nice progressive way to learn each one firmly before upgrading to the next. Anyway, learning multiple languages still adds value even if you learned Oz at the start, as Oz isn't a 'production' language, at least not one that you find deployed at large.
Monoman
@Monoman, indeed, Oz is an interesting language. I learned a little of it from "Concepts, Techniques and Models of Computer Programming" and it certainly is a flexible language, paradigm-wise. Still, I think ultimately one would get more out of learning a handful more paradigm-specific languages, though if you learned Oz well you're certainly ahead of a lot of programmers.
Dan
+26  A: 

I often get shouted down when I claim that the code is merely an expression of my design. I quite dislike the way I see so many developers design their system "on the fly" while coding it.

The amount of time and effort wasted when one of these cowboys falls off his horse is amazing - and 9 times out of 10 the problem they hit would have been uncovered with just a little upfront design work.

I feel that modern methodologies do not emphasize the importance of design in the overall software development process. E.g., the importance placed on code reviews when you haven't even reviewed your design! It's madness.

Daniel Paull
I don't know, I've seen an upfront design be a very good guide to development. I've never seen it work out such that the upfront design is followed exactly. It seems in my experience that when the rubber hits the road, designs have to be reworked.
Doug T.
Fine with that, so you iterate... amend the design now that you have discovered something new and get on with the job. Your code is, once again, an expression of your design. It's developers who think that design follows code that irk me.
Daniel Paull
I wish I was allowed to design before I code. In this job it's "I have an idea" from someone, followed by a directive to get something in a demo ASAP.
David
Much of my design is noted in header files and/or a few diagrams on a white board. I'm not saying anything about how formal your design should be, or how to do it, but for the love of God, get your thoughts sorted before coding!
Daniel Paull
I've been irritated by the opposite, too much value placed in the design. The mantra "reuse the design not the code" forgets the time spent on implementing, testing, debugging and generally hardening the codebase. You cannot just throw that amount of work out.
JeffV
@Daniel: I think I agree with you. At the same time, it's important to be ready and able to revise the design and the code late and often. That takes skill that, I'm afraid, is not taught.
Mike Dunlavey
@Mike - I'm not saying that we all return to a waterfall model. Quite the opposite - as a developer you should expect things to change, so design your system to cater for change (eg, minimise coupling) and expect unexpected iterations that affect your design. You are right - this is not taught.
Daniel Paull
So if you have to iterate anyway, the choice to design first or code first is essentially the same thing.
Kendall Helmstetter Gelner
@Kendall: you are kidding, right? Perhaps you are thinking of a proof by induction for your statement, but I'd hope that the number of iterations to write a bit of code that is closed against change is small. In that case, I believe that design first is far more efficient.
Daniel Paull
I believe in iterative design. If you invest too much time upfront in design, you won't have the time to do the necessary rewrite (which always happens).
quant_dev
+512  A: 

Performance does matter.

Daniel Paull
Overall performance, or performance of every single block of code? More reasoning very welcome :)
Jon Skeet
I think this is a counter to the widely held opinion that performance doesn't matter since "you can always buy more CPU, hard disk, RAM etc".
Ed Guiness
And I thought the simplicity of the comment was its main appeal. Ok, for example, many developers do not think about the time complexity of the algorithms that they use. Lesser developers reading this comment just ran off to Wikipedia to find out what time complexity is.
Daniel Paull
There are lots of silly ideas about performance inhabiting programmers' heads. The only solution I have for this problem is to recommend that people do performance tuning of existing code.
Mike Dunlavey
@edg - Exactly so, and IMHO it just ain't the case, as per my post here http://stackoverflow.com/questions/377420/throwing-hardware-at-software-problems-which-way-do-you-lean#377429
Shane MacLaughlin
That's not controversial... Improving performance early is what's controversial (and stupid, by the way)
Pablo Fernandez
I have to groan when I think of all the code I've optimized that was piggy because it contained massive data structure and event-driven architecture put in for the purpose of "performance".
Mike Dunlavey
especially developer performance!
Steven A. Lowe
Here's my take: Performance sometimes matters. There are applications that are uncomfortably slow no matter how much hardware you throw at them, and applications that are very fast on 486s, and applications in between.
David Thornley
No, it doesn't. Whether you join Strings with + or use StringBuilder, the performance will be the same. DB access and clustering are where performance lives, not creating fewer objects (except in C++).
01
Wirth's law: http://en.wikipedia.org/wiki/Wirth%27s_law It's true.
Imbue
I completely disagree with that. More applications suffer from robustness issues than performance issues.
Uri
@Uri - but, improving quality is not controversial!
Daniel Paull
Importance *can* matter.
J.T. Hurley
"Premature optimization is evil" regards bit fiddling, not picking an efficient algorithm. It is not an excuse to produce bad code.
Svante
I work in embedded. So yes I agree, every little helps.
Quibblesome
If there weren't so many programmers who think that "you can always buy more RAM", we could nowadays run a complete office suite, a graphical web browser with flash, java etc. plugins, several messaging and online game clients concurrently on a 1 GHz, 256 MB RAM machine without any swap.
Svante
Clearly this is the first "controversial" opinion in this question that was really controversial
1800 INFORMATION
To state the obvious, performance doesn't always matter; it only matters when it matters. The trick is being able to predict when that is before you start coding... It's easier to figure it out after you're done coding...
Charles Bretana
After you've done coding can be way too late on a complex system.
Pete Kirkham
My own corollary is: Performance matters, but you suck at it. Use a profiler.
qualidafial
Performance DOES matter. Learn your containers and use the best one for the job. Too many programmers use the early optimization quote as an excuse to be lazy.
WolfmanDragon
That's what she said!
Kip
Amen to that. In a project made by two different parties (we were using a library they produced), their chief engineer said we had 4GB of memory, so why bother about memory leaks [in c++]? *sigh*
Diego Sevilla
@unanswered: it varies. If you are joining a few strings, just go with +=; however, if it's in a loop or a metric ton of joins, StringBuilder is faster because it preallocates the memory ahead of time. Once you max that out, however...
Nazadus
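The StringBuilder trade-off in the last two comments generalises beyond Java/C#. As a rough sketch (transposed to Python with made-up data; here `str.join` plays the role StringBuilder plays in Java/C#):

```python
# Sketch of the concatenation trade-off described above, transposed to
# Python. The data and function names are invented for illustration.

def join_with_plus(parts):
    # Repeated += may copy the accumulated string on every step,
    # which degrades to O(n^2) in the worst case.
    result = ""
    for p in parts:
        result += p
    return result

def join_in_one_pass(parts):
    # str.join sizes the result up front and copies each piece
    # exactly once - O(n) overall, like StringBuilder's preallocation.
    return "".join(parts)

parts = ["a", "b", "c"] * 1000
assert join_with_plus(parts) == join_in_one_pass(parts)
```

For a handful of strings the difference is negligible, which is exactly Nazadus's "just go with +=" case; the one-pass builder only starts to matter inside loops.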
Scalability != Performance... It's a balancing act... Do you need your application to scale to N servers? Then the performance on one server may suffer due to decisions made for scalability... Some problems you can throw hardware at.
Tracker1
Diego.. LOL, I worry about memory leaks with my ajaxy stuff (removing event handlers on DOM elements being removed from the tree).
Tracker1
Don't forget about Amdahl's Law!
rlbond
Okay, this should only be taken to a point... Computational complexity should always be considered. But it drives me nuts when my coworkers daisy chain ternary operators touting that it's "slightly more efficient" than writing a few if statements. Sometimes the .0001% efficiency cost is worth code readability.
James Jones
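To illustrate the daisy-chained ternary point above, here is a hypothetical sketch in Python (the grading thresholds are invented for the example): both functions compute exactly the same thing, but one is far easier to scan and modify.

```python
# Hypothetical example: a chained conditional expression versus
# plain if/elif. Same logic, very different readability.

def grade_terse(score):
    return "A" if score >= 90 else "B" if score >= 80 else "C" if score >= 70 else "F"

def grade_readable(score):
    # A few more lines, but each branch is obvious at a glance.
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    return "F"

# The two are behaviourally identical; any efficiency difference
# is noise next to the readability cost of the chained form.
assert all(grade_terse(s) == grade_readable(s) for s in range(0, 101))
```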
It's too bad I can only upvote this once.
Crashworks
I think performance does matter, but code correctness and reliability matters more. That means you should develop your code to be correct, reliable and secure first and worry about performance later, after prototyping. The hotspots should then be rewritten as needed. Pure cycle-shaving optimisation is also never as effective as a good algorithm. Example: I once had a program that took 2.5 hours to run. I did some hardcore optimisation and it ran in 40 minutes!! I rewrote it to use a better algorithm; it ran in 20 seconds! I tried optimising that - but it did not go any faster. Go figure :-P
Dan
@Dan: "I rewrote it to use a better algorithm; it ran in 20 seconds!" I bet you wish you'd done it that way in the first place... It sounds like your initial slow but correct/robust solution was a waste of time in more ways than one!
Daniel Paull
@Daniel: Both solutions were correct and robust. The second solution used better data structures (one big win was replacing lists with tables, so I went from O(n) to O(1) in that part of the code). This was, unfortunately, only possible because I could profile the code and see which parts were inefficient, so the first version wasn't a TOTAL waste of time. Would have been hard to do it that way from the start. But, yes, I do wish I wrote it that way in the first place. Would have saved me about a day...
Dan
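Dan's lists-to-tables change is worth a concrete sketch. The following Python snippet (invented data, not his actual program) shows the O(n)-scan-versus-O(1)-lookup swap he describes:

```python
import timeit

# Hypothetical dataset standing in for Dan's story: the same membership
# test against a list (O(n) linear scan) and a set (O(1) hash lookup).
items = list(range(50_000))
as_list = items
as_set = set(items)

needle = items[-1]  # worst case for the linear scan

slow = timeit.timeit(lambda: needle in as_list, number=100)
fast = timeit.timeit(lambda: needle in as_set, number=100)

# Both answer the same question...
assert (needle in as_list) and (needle in as_set)
# ...but on data this size the hash lookup wins by orders of magnitude;
# no amount of cycle-shaving on the scan closes an asymptotic gap.
assert fast < slow
```

This mirrors his numbers: the big win came from the data structure, not from micro-optimising the existing code.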
I'd say if something is half as fast, it doesn't matter (in most cases) because you can always buy faster hardware. But a bad algorithm or bad data structure can make things thousands of times slower. If you really think performance doesn't matter you've obviously only ever done noddy programming.
Mark Baker
Here's the thing: performance comes with a price (paid in programmer-hours), and consumers aren't willing to pay for it.
Frank Farmer
@Frank - are you sure about that? I have many anecdotes relating to a poor performing app slowing down developers and testers, leading to a lot of wasted time. A relatively small amount of time spent improving performance could save a team a heap of time and money while increasing overall quality and reducing frustration. That's a win, win, win situation.
Daniel Paull
CPU, memory, disk IO or space? What is the one resource that has not been doubling every 18 months? The programmer. That is why when I think of performance I first think how I can make my developers (and myself) more efficient. With a billion CPU cycles going to waste every second, why waste time worrying about every CPU cycle? And if you want that level of control, then the only language that will give it to you is assembler. Me, I'm glad that my assembler days are mostly behind me, and more often than not I'm programming in languages written for this century.
Swanny
@Swanny: Your comments about using assembler are not necessarily true. When I talk about performance and scalability I am referring to choosing appropriate algorithms and data structures such that you may turn a naive O(n^2) algorithm into something better than O(n^2). The other area of increasing performance is changing the way your program works, focussing on interactivity. For example, making long running operations asynchronous. I don't need to resort to using assembly language very often, nor do I feel it necessary. I also assume that you don't ever work with embedded systems...
Daniel Paull
In my primary field (server-side applications), performance is for padding profit margins, scalability is for making sure your business can actually keep running. When it comes right down to it, you can add server capacity in an emergency by going to Fry's and exchanging currency for whatever hardware they have lying around. You cannot reliably enhance performance in time to save you from the fact that your application is about to crash because you have too many users, driving a bunch of them off.
Nicholas Knight
@Nicholas: what you say is true only if the server load scales linearly with the number of users. If you had a nasty O(n^2) or worse algorithm that was causing your performance problem, then I doubt that just buying more hardware is an economical solution at all. So, I claim that scalability and performance are intimately entwined - so much so that I see them as pretty much the same thing. Wouldn't you agree?
Daniel Paull
@Daniel: No. Creating a scalable application _includes_ selection of _scalable algorithms_, which is still different from the selection of _fast algorithms_. O(anything) tells you how an algorithm scales, but by itself it tells you nothing about performance, and it's entirely possible to have an O(n^2) algorithm that will be faster for many practical datasets.
Nicholas Knight
@Nicholas: "and it's entirely possible to have an O(n^2) algorithm that will be faster for many practical datasets" - in which case performance doesn't matter? I find your distinction between scalability and performance confusing. To me, scalability is one aspect of performance. Hence, a "performance improvement" may be gained by improving scalability.
Daniel Paull
@Daniel: A simplistic example, but consider a single-threaded webserver. With select() or similar, its performance may vastly outstrip that of a naïve threading or forking server (like traditional Apache), up until you start overloading a single CPU. If you make it threaded, you can add capacity simply by adding CPUs (and other hardware as necessary). It is now scalable, but you have done nothing to speed up an individual request (actually, it's probably slightly slower with the additional overhead from threading). That is my distinction between scalability and performance.
Nicholas Knight
@Nicholas: Your example does not improve scalability at all. The single threaded server scales linearly with load, as too does the threaded version. However, you can get away with less hardware (assuming you are using multi-CPU machines), which will pad your profit margins... no wait, padding profit margins is what performance does. Oh dear, I am confused by what you mean.
Daniel Paull
I am so glad you said this. Try writing a genetic algorithm in Ruby. It's not fun. Well, it is fun, but trying to get it to finish in under six hours is not fun.
Michael Dickens
I love that people always say "Performance matters, but reliability matters more" as if you have to choose between the two. You might as well say "Performance matters, but comfortable office lighting matters more." You can have it all! Good code is reliable, performs well, and is written under good office lighting. Don't settle for less!
Dan Olson
@Dan: Sure, you *can* have both performance and reliability, but that doesn't happen too frequently in practise - and when it does, you've sacrificed something else (probably development time). Performance doesn't matter, unless it does. You need to make a conscious decision to care about a particular performance metric in a particular case, set a benchmark, and profile. The anti-performance mindset doesn't say "performance *never* matters", rather "stop caring about performance in those frequent cases where it doesn't matter".
Iain Galloway
Wait, this is a controversy? There are some projects where performance matters and some that don't. That doesn't mean there's a dichotomy here.
Rei Miyasaka
@Rei: I can't think of a single software project where performance is not important. It just so happens that in many situations, a naive implementation exceeds your performance requirements.
Daniel Paull
@Daniel: Seriously? Well then I guess there's some controversy here after all. I can think of plenty of examples where performance is hardly a concern: any code that's O(1) and waits on other invariable bottlenecks like user response is and always will be fast enough, in my mind.
Rei Miyasaka
@Rei: Yes, Seriously! It just so happens that for many applications using modern hardware, a naive implementation may satisfy the performance requirements of most users - that is not to say that performance does not matter; it just means that it's already taken care of. BTW - to say, "and will always be fast enough" suggests that you have an idea of the performance requirements and know that you'll always exceed them - ergo, performance mattered. If you disagree, then I think you've just become complacent.
Daniel Paull
Wow, getting personal. There's no way that my button click handler will ever be too slow, because there's no way that it'll be used outside of WPF -- because it can't be used outside of WPF. It might be used on Silverlight on a phone, sure, but that phone will be fast enough to run Silverlight, and thus, fast enough to deal with my less-than-perfectly-efficient click handler. The view logic in my code is already hard-coupled to WPF/Silverlight. Just because I thought fleetingly about performance *doesn't* mean performance mattered for the project. Complacency or pragmatism?
Rei Miyasaka
@Rei: I hadn't meant for you to take my comments personally. I don't think you understand my argument. You keep saying "fast enough" - this means that you have an idea of acceptable performance. Ergo, performance matters. Just because you didn't have to do anything "special" doesn't imply that performance isn't important.
Daniel Paull
@Daniel So what you're saying is that even the very act of *considering* performance for a split second is to prove that performance matters. That seems like a rather contrived understanding of phrase "performance matters". Determining whether performance is important or not is obviously important -- but once it's been determined for a case that it's not, then it simply is not. It's like saying that the vase in the corner of the room requires constant attention because someone might someday throw it at you. Yeah, sure maybe, but it just isn't prudent.
Rei Miyasaka
@Rei - isn't the problem when developers don't consider performance - not even for a split second? To be able to state "that it will always be fast enough", which is what I was responding to, involves much consideration - far more than a split second. I find your vase analogy very weak.
Daniel Paull
@Daniel Death by vase is never a concern until it's very imminent; that's the analogy. I don't know what you experienced that disillusioned you on the ability for a developer to passively identify potential performance issues, but I know potentially slow code when I see it; I don't need a mental checklist so to speak. It comes as part of thinking about the imperative execution of your code. There are signs -- large IO, platform API in tight loops, >O(n) function calls, loops on indefinite collections, timers, threading etc. They're signs that you *see*, not signs that you stand there and read.
Rei Miyasaka
@Rei: Ok, let's stick with the vase - when choosing a third-party vase (as opposed to rolling your own), do you look for one that is unlikely to inflict serious damage when flung at you? No. Why not? Because it doesn't matter. The corollary for third party software products and performance is not true. "large IO, platform API in tight loops, >O(n) function calls, loops on indefinite collections, timers, threading etc" - that's a lot of things to keep in mind when writing and/or maintaining your simple button click handler.
Daniel Paull
@Daniel Again, I don't keep any of that "in mind" -- I notice it when I see it. I'm not thinking about performance or a flying vase until I see a certain code pattern or an API call or a psycho ex with the vase in hand. Check this out: http://stanford.library.usyd.edu.au/archives/fall2001/entries/mind-identity/ `Place has argued that the function of the ‘automatic pilot’, to which he refers as ‘the zombie within’, is to alert consciousness to inputs which it identifies as problematic, while it ignores non-problematic inputs or re-routes them to output without the need for conscious awareness.`
Rei Miyasaka
@Rei: You seem to take the phrase "does matter" to mean "I have to actively do things to take care of it." I read your comments and I have no idea if you are supporting my argument that "performance does matter" or not. For example, your auto pilot interrupts you when there is a potential performance problem - ergo, performance matters to you and your auto pilot. Under normal flying conditions, your auto pilot writes code in the usual manner that adheres to your regular flight pattern of readbility, reliability and ensuring appropriate performance levels. Again, performance matters.
Daniel Paull
@Rei: You said, "I'm not thinking about performance ... until I see a certain code pattern or an API call." In my opinion that's just too late. You should have considered the performance implications BEFORE you wrote the code. It must be funny to watch you cut code - zombie, zombie, zombie, CRAP! refactor, refactor, refactor. Zombie, zombie, CRAP! refactor, zombie, zombie ...
Daniel Paull
@Daniel Actually, "autopilot" in this context refers to driving. When you're on the highway, you sort of zone out and stop paying attention to anything until there's a deer in front of you -- and usually, unless you're otherwise distracted, your response will be no different. No, I don't write code and realize after the fact that it's too slow -- I think about the API that I need to call, the algorithms that I need to write, and it's then that it'll click -- "hey, could get pretty slow". That doesn't necessarily mean I'm constantly thinking about performance.
Rei Miyasaka
To restate my point: `Determining whether performance is important or not is obviously important -- but once it's been determined for a case that it's not, then it simply is not.` And, I don't need to be thinking about it constantly to notice it. So my position is that it matters when it matters, but it shouldn't be a constant obsession.
Rei Miyasaka
@Rei: '"autopilot" in this context refers to driving.' Umm... ok. I had expected you to understand that "usual flying pattern" was merely an extension of your chosen "autopilot" metaphor. The autopilot metaphor is not just restricted to driving - it can be anything that you do by rote (washing the car, eating soup, walking to the corner shop, etc). If I ever feel like I am "programming by rote", then alarm bells go off - why can't this be automated or commonality factored out? To continue the analogy, my autopilot only makes short syntactic flights - long haul flights are always aborted.
Daniel Paull
@Daniel: I think we're talking about two different things here now. I'm talking about not actively paying attention to performance; you're talking about not paying attention to design. Obviously I pay close attention to what I'm designing, but there are patterns in design and implementation that instantly set off "performance alarm bells" in my mind the same way rote code sets off "redundancy alarm bells" in your mind. But you don't need to be constantly going "I will not repeat myself" when you're coding, do you? You just think of a design and go "huh, this could be refactored".
Rei Miyasaka
And yes, if you mean "performance does matter" in the sense that you need to always be able to subconsciously spot potentially slow code, then I totally agree with you. But like I said a while back, I'm not sure that that's a really useful interpretation of the word "matters"; I think the term is more pertinent to the discussion of whether or not code needs to *always* be as fast as it can be regardless of known invariable implementation constraints. I hope you're not annoyed by this discussion by the way, because I honestly think it's damn interesting.
Rei Miyasaka
@Rei: no, I'm certainly not annoyed - quite the opposite. I mean that "performance matters" in the way that "readability matters". Readability does not stop "mattering" once you consistently write readable code. What inspired me to post this one-line controversial opinion is that I have observed far too many situations when developers have not known that their designs have serious performance problems until they hit some wall at runtime, and then they plead ignorance. By "performance matters", I mean know your limitations, predict bottlenecks and build appropriate systems.
Daniel Paull
There have been many times where I have taken the "low road" in a design, knowing that my design and implementation is not as fast or scalable as it could be; however, the extra cost of the higher performance approach could not be warranted or would not be useful. The important thing is that I made an informed design decision. Heck, I've even been known to write code that blocks on IO from time to time - but I know the dangers and accept the consequences! Blocking on IO without knowing the dangers or accepting the consequences is what gives me the absolute shits and is the crux of my argument.
Daniel Paull
@Daniel Totally agree there. I think I was just expecting you to have meant "performance matters" in a more controversial, possibly disagreeable way!
Rei Miyasaka
@Rei: Perhaps you need to spend some time around web developers. Now that's a controversial statement!
Daniel Paull
@Daniel, Rei: I didn't read the last few rounds of this discussion, so you may have got around to this point, but the way I see it is this: If it is feasible, for any given requirement, that code _could_ have been written (even if only by a deranged lunatic writing an O(2^(2^n)) "speed-up loop", etc.) that fails to meet performance requirements, then the mere possibility of existence of such code is proof that performance "matters", because there _is_ a requirement - even if it's insanely easy to meet. I think this was the point of Daniel's original statement.
mokus
+37  A: 

You don't have to program everything

I'm getting tired of how everything needs to be stuffed into a program, as if that is always faster. Everything needs to be web-based, everything needs to be done via a computer. Please, just use your pen and paper. It's faster and requires less maintenance.

Mafti
+1 sorta. I use my tablet when I can, like pen and paper, because sometimes it's just easier to write than use a piece of software.
percent20
Do you mean "everything" and not "anything"?
Adam Bellaire
Well, he said "you don't have to program" and I completely agree - nobody has forced me to program, I just happen to like it. Sorry, but no controversy here.
Rene Saarsoo
No, no, I have to program lots of things.
postfuturist
You don't have to program everything
Anil
+1 for low-tech. Sometimes an Excel spreadsheet will do the trick just fine instead of coding an expensive CRUD.
Mauricio Scheffer
+193  A: 

Code == Design

I'm no fan of sophisticated UML diagrams and endless code documentation. In a high level language, your code should be readable and understandable as is. Complex documentation and diagrams aren't really any more user friendly.


Here's an article on the topic of Code as Design.

Jon B
It's hard for programmers to fathom, but non-programmers have a very hard time reading ANY programming language, and something visual or free-form text is usually much easier for them to handle. And you WILL need to talk about the design with non-programmer domain experts or managers.
Michael Borgwardt
I generally agree that it is possible to write readable code for even the most complex algorithm, by using techniques such as splitting code into smaller, well-named functions, variable naming, and commenting, to name a few.
Jeremy
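As an illustration of Jeremy's point, small well-named functions can read almost like the design they implement. A Python sketch (every name below is invented for the example):

```python
def is_eligible_for_discount(customer):
    # The business rule reads like the sentence that specified it:
    # "repeat customers with no overdue invoices get the discount".
    return customer["orders"] >= 10 and not customer["has_overdue_invoice"]

def apply_discount(price, rate=0.1):
    # Round to cents so the result is presentation-ready.
    return round(price * (1 - rate), 2)
```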
How about using a program to read and design diagrams from the source code that helps non-programmers understand the program? However, why should non-programmers care about the source code in the first place? That should be the domain of the programmer, not the manager.
Jeff Hubbard
I mostly agree. I often write prototype code as a means of designing. I then throw it out and write it again (with flaws fixed) as a means of coding.
Dan
developing software is like writing laws: you start at some level and keep filling in the detail or providing generalisations until it is sufficient that it can be followed. Mostly the aim is for it to be followed automatically, but don't forget being able to explain it to the fleshies.
Greg Domjan
I disagree. It is much easier to change a line in a Word doc or a Visio diagram about how a function should act than it is to change the code.
David Basarab
There's a difference between "hey, check out this UML, see any issues with the architecture?" and "hey, check out my code, see any issues with the architecture?" Humans aren't particularly good at parsing code; they're good at parsing images, though.
LKM
UML is always insufficient to describe in detail the problem at hand - if it wasn't, we wouldn't have programming languages. It is at best a formalized sort of handwaving.
Kris Nuttycombe
UML and code are meant for different purposes, and it is not really reasonable to use one in lieu of the other.
Kwang Mark Eleven
UML is overrated, but it's not the only aspect of design. Code != Design.
I. J. Kennedy
+1 for Controversy, but you are 110% wrong. Cowboy coding ftl.
Kyle Rozendo
However, when discussing your code in a meeting with functional testers and business people, having one diagram that shows the problem, makes things a lot clearer a lot faster than several pages of code. However, I am no fan of UML.
Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious. ― Frederick Brooks, in The Mythical Man‐Month, p. 102.
Teddy
if code were equal to design, then there would be no coders. All you would need is a designer. Man, this is sooo untrue...
Random
+57  A: 

Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. Writing a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.

Cameron MacFarland
Yes. And code can only be tested if it has room to fail. Simple structures without inconsistent states have nothing to unit test.
Mike Dunlavey
Yeah, unit tests up front don't really make sense. If I wrote it down, I thought about the possibility. If I thought about the possibility, unless I'm a complete moron it'll at least work the first time around where the test would apply. Testing needs to catch what I DIDN'T think about!
PhoenixRedeemer
Phoenix - you have a point about only catching what you didn't think about but I disagree with your overall point. The value of the tests is that they form a spec. Later, when I make a "small change" - the tests tell me I'm still Ok.
Mark Brittingham
I worked at a company that wanted 95% test coverage, even for classes which contained nothing but fields to assign and no business logic whatsoever. The code produced at that company was horrible. My current company does not write any unit tests, relying instead on intense QA, and the code is top-notch.
Juliet
I write unit tests when I think I need them, but more importantly I write random test drivers, because my code might work fine in 100% of predictable cases. It's the unpredictable cases I'm worried about.
Mike Dunlavey
In my current project, I've introduced up-front unit tests, and code quality has improved drastically. People had to be convinced at first, but soon noticed the positive effects themselves. So my experience says you're wrong. And PhoenixRedeemer, you ARE a complete moron... just like everyone else.
Michael Borgwardt
@Brazzy: Why weren't your devs writing better code to start with? Notice my opinion says you don't "need" to write tests up front. I'm not saying you shouldn't, just that you should think about why you're writing that way.
Cameron MacFarland
@brazzy: Hey, complete morons rule! :) I've seen code that is improved by unit tests, because it needed them. I've seen code that didn't need many unit tests, because it had few invalid states. My code tends to need randomly generated tests, due to the problem space.
Mike Dunlavey
Unit tests are also about managing change. It's not the code that you are writing right now that needs the tests, but the code after the next iteration of change that will need them. How can you refactor code if you have no way to prove that what it did before the change is still what it does after?
Greg Domjan
@Greg: It's true that you can't refactor safely if you can't prove you didn't break stuff, and I do write tests designed to show changes after a refactor. My opinion of tests is mainly confined to their use up front. Tests are very useful when refactoring.
Cameron MacFarland
Everyone writes the unit test that checks open() fails if the file doesn't exist. No one writes the unit test for what happens if the username is 100 characters on a tablet PC with a right-to-left language and a Turkish keyboard.
Martin Beckett
I think this misses the point of test driven development, which hurts the argument. It isn't about testing edge cases, it is about driving design.
Yishai
You don't need to catch every edge case. If you are testing the best case and a few common errors, then when an edge case pops up you can write a test for it, fix it, AND ensure that you don't introduce new bugs. Apart from that, writing tests first forces you to think about what you are trying to achieve, and how. It helps you write small maintainable methods. I don't see how any programmer with a desire to write good software could be against this.
railsninja
Although I agree that "unit tests only catch the issues I've thought about", there are many times where I'm *positive* the code I just wrote satisfies a particular condition, yet the test reveals something I totally overlooked. Furthermore, the act of writing tests first forces you to think about all the edge cases in a manner that you might not have to as great a degree.
Ether
For me, an eye-opener about testing was this: you need to try out your code anyway - so why not do it in form of a test? Extensive testing is controversial, of course, but a little can get you a long way.
hstoerr
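hstoerr's "you need to try out your code anyway - so why not do it in form of a test" can be this lightweight. A Python sketch, where `word_count` is a hypothetical function and the test just captures the ad-hoc checks you'd otherwise type into a REPL and throw away:

```python
def word_count(text):
    # Hypothetical function under test.
    return len(text.split())

def test_word_count():
    # The throwaway "does it work?" pokes, kept as a repeatable test.
    assert word_count("hello world") == 2
    assert word_count("") == 0
    assert word_count("  oddly   spaced  ") == 2

test_word_count()
```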
+229  A: 

It's ok to write garbage code once in a while

Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever... Throw up a Console or Web App, write some inline sql ( feels good ), and blast out the requirement.

jfar
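A sketch of the kind of quick-and-dirty throwaway the answer describes - inline SQL, no ORM, no layers (Python with an in-memory SQLite database, purely for illustration):

```python
import sqlite3

# One-off job: load the data, run the query, eyeball the output, move on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (name TEXT, amount REAL)")
conn.executemany("INSERT INTO payroll VALUES (?, ?)",
                 [("ann", 100.0), ("bob", 250.5)])
total = conn.execute("SELECT SUM(amount) FROM payroll").fetchone()[0]
print(total)  # 350.5
```

If a script like this ever survives past its one run, that's the moment to clean it up or encapsulate it.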
Some languages *cough*Perl*cough* are better for garbage code than real development. A good developer will know more than one language for either role.
David Thornley
Absolutely. Don't make a mountain out of a molehill. Little throw-away apps don't have to be pretty - they get a job done.
Mike Dunlavey
I had to recently recover and reprocess payroll data with a deadline of a few hours to identify how to get the data and reprocess it and then be very sure of its correctness by eyeballing it. When the pressure is on like that you have to just hack it up until it works.
Tony Peterson
I would add that if you're adding "garbage code" to a non-garbage app, do it in a way that won't pollute the rest of the app. Encapsulation is especially important for hackish code.
JW
In my environment I found that almost every app grows into a large project, so what seems like a good idea to write garbage code ends up being a very bad idea later. Doing a job well or not at all inevitably pays off in the end for me.
Jeremy
2 problems with this: first, you never know how permanent your little garbage app may become; I found one not long ago I wrote eight years ago, still in use, still crap. Second, writing crap is habit-forming; like you say, it feels good, which makes you want to do more of it. Just say no.
JasonFruit
It's not ok to write garbage code if *MY* name is on it, that's for sure...
LarryF
It's fine sometimes - if it's only going to be run once for a particular task, then thrown away
MarkJ
Sometimes you really do know when somethings garbage code. Those one-time exports or imports are perfect examples.
Brian Bolton
It's fine until it moves into production, which is about 90% of the time...
Daz
I usually work from home and when my kids were a little younger they would look at the screen and say "Dad's doing his garbage writing best keep quiet" The name stuck so when coding I'm still referred to as doing "Garbage writing". Always made me laugh!
Despatcher
I agree with this and we use a concept of TECHDEBT to track it. If you write garbage code to do something now, you have to mark it as a Debt to your overall system with a promise to go back and fix it right later. Overall you allocate X amount of debt your system can carry and you can never go over that limit
Gord
I don't think this is ever true. If you write junk code there's no reason for writing it in the first place, other than maybe a client deadline that needs to be met.
leeand00
It is ok, but only for so long as it does not go in the source code repository... THEN you can be publicly flogged AND told to fix it..
Thorbjørn Ravn Andersen
NO. Throwaway code, yes. Garbage code, no. You don't need to generalize your code to cover every possible situation if it does what it needs to do. But you do need to make sure your code can be read and followed, because it's funny how often throwaway code becomes useful later.
Kyralessa
extreme YAGNI! yummy :)
Mauricio Scheffer
if (idiocyOfRequirements > willToWork) BringOutTheDuctTape();
Repo Man
Just don't do it out of laziness. Refactoring and evolving your first fumbling attempts at solving a problem are what iterative software development is all about.
burkestar
+5  A: 

Manually halting a program is an effective, proven way to find performance problems.

Believable? Not to most. True? Absolutely.

Programmers are far more judgmental than necessary.

Witness all the things considered "evil" or "horrible" in these posts.

Programmers are data-structure-happy.

Witness all the discussions of classes, inheritance, private-vs-public, memory management, etc., versus how to analyze requirements.

Mike Dunlavey
By manually halting you're acting as a simple sampling profiler, so there's certainly some logic behind it, but I tend to find that instrumenting profilers give better results on the whole (albeit with more performance impact on the running application).
Greg Beech
Yes it is a sampling method. The difference is that you're trading precision of timing for precision of insight. Concern about slowing down the app is confusing means with ends. You're trying to find cycles spent for poor reasons. This does not require running fast.
Mike Dunlavey
I would humbly assert, from logic as well as experience, low-frequency sampling of the program state beats any profiler for the purpose of finding things that can be optimized. However, for asynchronous message-driven software, other methods are needed.
Mike Dunlavey
What I do think profilers are very good for is monitoring program health, to see if performance problems are creeping in as development proceeds.
Mike Dunlavey
The "best" way to analyze requirements varies depending both on who is giving them and on who is receiving them. Therefore discussion around the "best" way to do that is not very quantifiable.
Kendall Helmstetter Gelner
@Kendall: I've never seen "any" work in how to analyze requirements, and propose and evaluate alternative solutions, let alone "best". If we were doctors, we would know all about treatments but diagnosis would be "obvious".
Mike Dunlavey
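The "manually halting" technique is just low-frequency stack sampling. A minimal Python sketch, assuming the code under investigation runs on the main thread (`sys._current_frames` is CPython-specific):

```python
import collections
import sys
import threading
import time

def sample_main_thread(duration=0.3, interval=0.02):
    """Poor man's sampling profiler: periodically 'halt' (inspect) the
    main thread and tally where it is; the name that dominates the
    tally is where the time goes."""
    counts = collections.Counter()
    main_id = threading.main_thread().ident

    def sampler():
        end = time.time() + duration
        while time.time() < end:
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                counts[frame.f_code.co_name] += 1
            time.sleep(interval)

    thread = threading.Thread(target=sampler)
    thread.start()
    return thread, counts
```

Start it, run the suspect code, then `thread.join()` and read off the counter - a handful of samples is usually enough to expose a dominant bottleneck.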
+150  A: 

There is no "one size fits all" approach to development

I'm surprised that this is a controversial opinion, because it seems to me like common sense. However, there are many entries on popular blogs promoting the "one size fits all" approach to development so I think I may actually be in the minority.

Things I've seen being touted as the correct approach for any project - before any information is known about it - are things like the use of Test Driven Development (TDD), Domain Driven Design (DDD), Object-Relational Mapping (ORM), Agile (capital A), Object Orientation (OO), etc. etc. encompassing everything from methodologies to architectures to components. All with nice marketable acronyms, of course.

People even seem to go as far as putting badges on their blogs such as "I'm Test Driven" or similar, as if their strict adherence to a single approach, whatever the details of the project, is actually a good thing.

It isn't.

Choosing the correct methodologies and architectures and components, etc., is something that should be done on a per-project basis, and depends not only on the type of project you're working on and its unique requirements, but also the size and ability of the team you're working with.

Greg Beech
Hurray for common sense! Having started life as an engineer, I'm often baffled by the "religious" tone of this field.
Mike Dunlavey
There is no silver bullet! quoting F.Brooks
epatel
@epatel: I used to think there was no silver bullet, until I stumbled on a couple of them. The problem is, as Greg says, tools should be chosen to match the project, not treated as cure-alls. I'm tired of all the "religion" in this field.
Mike Dunlavey
The only silver bullet in software development is "being smart about it". (imo)
Pop Catalin
Surely silver bullets would in fact not work very well?
Liam
Five worlds man. 1. Shrinkwrap 2. Internal 3. Embedded 4. Games 5. Throwaway www.joelonsoftware.com/articles/FiveWorlds.html
MarkJ
When was Brooks' essay last considered controversial?
Hanno Fietz
Programmer interview question: What are the pros and cons of methodology XYZ? What considerations should you account for when deciding whether to use it? QA people and developers should both be able to judge when to use what methodology, and recognize when methodologies are less (or not) useful.
Kimball Robinson
+37  A: 

Before January 1st 1970, true and false were the other way around...

annakata
http://en.wikipedia.org/wiki/Humour#Understanding_humour
annakata
Oh man, this is the funniest thing I've seen on SO in a long time.
MusiGenesis
LOL.. am tweeting this.
Amarghosh
I understand how *nix systems record time, and how true and false are represented. But, could someone explain this joke to me, I don't get it? Thanks.
Matt Blaine
I don't get it ._.
M28
it's like particles and anti-particles: for an arbitrary system (like a computer) it doesn't actually matter what label you ascribe to each value, the two things are defined by each other. Kaons spoil the metaphor a bit, but it's just a joke so you'll have to learn to let it go.
annakata
+28  A: 

Regurgitating well known sayings by programming greats out of context with the zeal of a fanatic and the misplaced assumption that they are ironclad rules really gets my goat. For example 'premature optimization is the root of all evil' as covered by this thread.

IMO, many technical problems and solutions are very context sensitive and the notion of global best practices is a fallacy.

Shane MacLaughlin
There are two types of optimisation: by architecture and by code. Architectural optimisation is clearly needed before you write code. However, the term 'premature optimization' really applies to efforts to write code optimally instead of simply. This is evil.
AnthonyWJones
I am often called in to straighten out big messes that were architected ostensibly with the objective of "performance".
Mike Dunlavey
@Mike: There has to be some understanding of volumes and response requirements before the app is developed. Such things have to be considered in the inital archecture. Of course specific performance choices need to be justified.
AnthonyWJones
@Mike, as I mentioned, it's all to do with context. I work in the geospatial domain, where the default complexity of many problems is O(n^3). In this arena, optimization is a must, and it has to happen at design time. Analysing underperforming code with a profiler is rarely helpful.
Shane MacLaughlin
+9  A: 

That most language proponents make a lot of noise.

Varun Mahajan
Controversial, and simultaneously axiomatic. Nice.
ChrisA
+17  A: 

Here's one which has seemed obvious to me for many years but is anathema to everyone else: it is almost always a mistake to switch off C (or C++) assertions with NDEBUG in 'release' builds. (The sole exceptions are where the time or space penalty is unacceptable).

Rationale: If an assertion fails, your program has entered a state which

  • has never been tested
  • the developer was unable to code a recovery strategy for
  • the developer has effectively documented as being inconceivable.

Yet somehow 'industry best practice' is that the thing should just muddle on and hope for the best when it comes to live runs with your customers' data.

fizzer
"has never been tested" You do pre-release testing with assertions on and accept the assertion being triggered as passing the test? Weird idea. If you do that then I agree with you, but I don't understand why you are doing this.
duncan
No, I'm just assuming that a failed assertion during testing causes a build to be rejected. Therefore, if one happens in the wild, the program has necessarily entered a state outside of test coverage.
fizzer
If during testing your assertions never failed and one does fail in production, there is a problem with testing, but nevertheless, the error should be logged and the application should end. Assertions, or code that warrants the same, should be in production. I agree.
David Rodríguez - dribeas
The problem is when the action of doing the assertion costs something that would otherwise slow down your code. If it is not in a hot path, I totally agree, the asserts should always be on.
nosatalian
++ I've followed this path, in the spirit of "hope for the best - plan for the worst". We test to the very best of our ability, but never assume we have found *every* possible problem. Assert (or throwing an exception) is a way of guarding against doing further damage if a problem occurs (heaven forbid).
Mike Dunlavey
It depends. Software that controls pacemakers or nuclear power stations should not be written like that.
MarkJ
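The same trap exists outside C and C++: Python drops `assert` statements when run with `-O`, so the hypothetical guard below silently vanishes in an "optimized" run - exactly the untested, unrecoverable, "inconceivable" state the answer describes:

```python
def mean(values):
    # Under `python -O` this check disappears, and an empty input
    # becomes a ZeroDivisionError in production instead of a clear
    # failure during testing.
    assert len(values) > 0, "mean() of an empty sequence"
    return sum(values) / len(values)
```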
+22  A: 

Opinion: Frameworks and third-party components should be used only as a last resort.

I often see programmers immediately pick a framework to accomplish a task without learning the underlying approach it takes. Something will inevitably break, or we'll find a limitation we didn't account for, and we'll be immediately stuck and have to rethink a major part of the system. Frameworks are fine to use as long as their use is carefully thought out.

Kevin
+1 for the most stunning spelling of 'inevitably' ever.
ChrisA
I disagree. How many StringUtils classes do you have in your project? I once found a project that had 5 of them. Most of that stuff could be replaced by a third-party lib.
01
Frameworks, yes. Useless overhead, many times. Third party components, no! Portions of the task already completed, tested and debugged by thousands of other people!
skiphoppy
@skiphoppy -- I can't help it. I really am a roll-your-own type of guy at heart. I will fully admit that I might be jaded, as places I've worked at in the past tried to buy the absolute cheapest things possible. It bit us in the end.
Kevin
Joel in defence of not-invented here syndrome: http://www.joelonsoftware.com/articles/fog0000000007.html
MarkJ
+1 disagree completly :)
ykaganovich
+68  A: 

Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.

Cameron MacFarland
I released something week before last that I'd only tested in debug mode. Unfortunately, while it worked just fine in debug, with no complaints, it failed in release mode.
David Thornley
The only thing I differ between Debug/Release builds is the default logging level. Anything else always comes back to bite you.
devstuff
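devstuff's rule - let only the default log level differ, never the code paths - might look like this sketch (`DEBUG_BUILD` is an assumed flag, not part of any real build system):

```python
import logging

DEBUG_BUILD = False  # assumed flag; the ONLY thing a "debug build" changes

# Identical code runs either way; only the verbosity differs.
logging.basicConfig(
    level=logging.DEBUG if DEBUG_BUILD else logging.WARNING)

logging.debug("noisy diagnostic")  # suppressed in release
logging.warning("always visible")
```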
ummm - what about asserts? Do you either not use them, or do you leave them in the release build?
Daniel Paull
Again, I don't tend to use them. If you're asserting something in debug shouldn't you have it fail in release too? Use an exception if it's critical, or don't use an assert (or not care if the assert doesn't make it to release).
Cameron MacFarland
@Cameron MacFarland - a good point; code with assertions in Debug mode either ends up not handling the failure condition in Release mode, or with a second failure-handling path which only works in Release mode.
Graham Lee
It would be like writing two different applications. Your debug version would be nicely debugged, and your release version wouldn't. Tragic!
Jeremy
@Daniel Paull, if there is something fishy it is often better to stop the processing than having corrupt data.
tuinstoel
Agreed: Exceptions > Asserts.
postfuturist
Agree: there are some very nasty bugs in there that could be really detrimental to your rep!
Seventh Element
Hmmm. So, release code almost never gets tested, right? No offence Cameron, but remind me never to use any of your software
MarkJ
@MarkJ: That's what I'm saying, you should be testing the code that goes out the door, and not have a difference between "Release" that is not tested, and "Debug" that is tested, but never released.
Cameron MacFarland
James Curran
@James: Exceptions also bring the app crashing down. Also what happens when a user sees an assert error? Are they supposed to fix it?
Cameron MacFarland
All development and testing should be done on the release build, but a debug build should exist to assist in debugging. (Hello #ifdef!)
rpetrich
You just need to switch. Our QA uses debugging builds during development but switches to release towards the end. There are certain levels of sanity checking that you would like to be performed as much as possible before shipping, but cannot afford to ship due to performance reasons.
nosatalian
+7  A: 

There is no such thing as Object-Oriented programming.

Apocalisp
The problem I have with that article is that it argues that OOP doesn't model the real world properly and so it doesn't exist. I agree that OOP is a poor real-world model but that doesn't mean it doesn't exist.
Cameron MacFarland
@Cameron MacFarland: That's not what the article argues at all. It argues that there's no distinction between "OOP" and other kinds of programming, other than a rhetorical one.
Apocalisp
Why is there no reference to ADT which I believe OOP was sprung from?
epatel
@Apocalisp: You're right, I only skimmed the article. Now that I've read it properly, he compared making distinctions between code styles with making distinctions about race by using the argument made by capitalist libertarians, who believe in things that lead to slavery and killing poor people.
Cameron MacFarland
See I told you it was controversial. Enough to draw an ad hominem with a non-sequitur and a straw man in a single sentence. I'm impressed.
Apocalisp
@Cameron, actually liberals are the one killing poor people by telling them that they don't need to be responsible for their life, they just need to do what 'superior' liberals tell them to do. Liberalism is all about emotional and intellectual ego.
Lance Roberts
@Apocalisp: You impress easy. "Valid concepts are arrived at by induction" completely ignores Kant's idea of a priori concept, which is what OOP and Smurfs would be considered. Restricting concepts to facts of reality is itself a straw man argument.
Cameron MacFarland
"It is a useless distinction, in exactly the same way that “race” is a useless distinction." - And nationality, religion, sex, occupation. They are all useless distinctions if you follow the logic of the Ayn Rand article.
Cameron MacFarland
@Cameron: You've hit the nail on the head. I'm deliberately and completely in defiance of Kant because his ideas are drivel. To think is to think about something.
Apocalisp
"Java is object-disoriented" -- me
Svante
Nice answer... "No such thing as OOP"... And it's easy to prove. Just look at the assembly generated from any C++ compiler. I don't see any OOP in there... :)
LarryF
There needs to be an Object-Action Oriented Language. Actions are not Objects. It makes me angry when I write a void to modify an Object. ARRRGH............................
WolfmanDragon
@epatel: perhaps because the idea that OOP was sprung from ADT is wrong. See "OOP vs ADTs" (http://www.cs.utexas.edu/~wcook/papers/OOPvsADT/CookOOPvsADT90.pdf) and "On Understanding Data Abstraction, Revisited" (http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf) by William R. Cook.
MaD70
+331  A: 

If you're a developer, you should be able to write code

I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I'd initially started out with questions like:

Given that Pi can be estimated using the function 4 * (1 - 1/3 + 1/5 - 1/7 + ...) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.

It's a problem that should make you think, but shouldn't be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn't even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:

Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

Amazingly, more than half the candidates couldn't write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had "C# developers" who could not write this function in C#.

I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!


Edit:

There's a lot of discussion in the comments about whether the first question is a good or bad one, and whether you should ask questions as complex as this in an interview. I'm not going to delve into this here (that's a whole new question) apart from to say you're largely missing the point of the post.

Yes, I said people couldn't make any headway with this, but the second question is trivial and many people couldn't make any headway with that one either! Anybody who calls themselves a developer should be able to write the answer to the second one in a few seconds without even thinking. And many can't.

Greg Beech
calculating how many terms you need to guarantee that the 5th decimal place does not change any more is actually not that straightforward.
martinus
@martinus - I agree with you. The only part where I'd have to think answering to such question is the "accuracy of 5 decimal places" thing. I would probably hack it (perform a lot more calculations than needed and cut the result after 5th place :-) )
Abgan
@martinus - yeah I realise 'simple' isn't quite fair, I've updated the text to be more accurate.
Greg Beech
@PhoenixRedeemer - this isn't to test your maths, it's to see how you factor the solution (do you use recursion or looping? why? how do you work out what the exit condition is? how do you test for it?)
Greg Beech
2 words: Fizz Buzz.
Kibbee
Here's a solution to the problem, if you're interested: http://stackoverflow.com/questions/407518/code-golf-leibniz-formula-for-pi#407530
Greg Beech
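For the curious, a Python sketch of both interview questions (not Greg's linked C# answer). The stopping rule leans on the alternating-series bound - the error of a truncated alternating series with decreasing terms is at most the first omitted term - which settles the "how many terms?" worry raised above:

```python
import math

def estimate_pi(places=5):
    # Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
    # Stop once the next term is too small to move the target digit.
    tol = 0.5 * 10 ** -places
    total, sign, k, term = 0.0, 1.0, 0, 4.0
    while term >= tol:
        total += sign * term
        sign = -sign
        k += 1
        term = 4.0 / (2 * k + 1)
    return total

def circle_area(radius):
    # The "game over if you flunk this" question.
    return math.pi * radius ** 2
```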
You're wasting your time interviewing people who can't answer basic questions. Give them a written test; it only needs to ask really, really easy questions. Don't even bother taking the time to read their resume until you've seen the test. It's amazing how much time you'll save.
AnthonyWJones
Give them a real question, not a mathematical question. Job interviews make people nervous and stressed. These kinds of questions are a waste of time for everyone. Real questions for a real job. I like these questions, but if someone hiring me asked them, I wouldn't want the job. The hirer is not professional.
FerranB
I like that kind of question, because it shows you who you're dealing with. I'd not like to work with this guy. Good question.
01
For my very first job, I knew it was the place I wanted to work when I was asked "Can you tell me what a linked list is?" No one else asked me that question. Apparently nobody else answered that question for them, either! I stayed there for nine years. Never used linked lists there. :)
skiphoppy
@ PhoenixRedeemer: I don't think this is a math question at all. A programmer should be able to implement a simple formula like that. That doesn't test your math skills. Besides, a programmer is supposed to have some math background, so it shouldn't be so confusing even if you are nervous.
Marc
A recent interview process I was involved in asked candidates to invert the order of a string of words. Less than 10% of candidates could even start a pseudocode response. Frightening, and a response to the complaint about maths in the comment above. Very basic programming ability is rarer than it should be.
duncan
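duncan's screener ("invert the order of a string of words") is a one-liner in most languages; a Python sketch:

```python
def reverse_words(sentence):
    # Split on whitespace, reverse the word list, rejoin.
    return " ".join(reversed(sentence.split()))
```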
@FerranB - This isn't a mathematical question. Sure, it contains maths, but really it's a question of how you factor a solution (loops? recursion? functional?) and whether you can identify the patterns and implement it. Any senior developer should be able to do this. Also, working under stress is...
Greg Beech
(continued) sometimes part of the job. Sometimes you have deadlines and you have to implement complex code in a short space of time. That's just the way life is, especially working at a start-up. Not everyone enjoys that, but that's why you have interviews - to find out!
Greg Beech
Yes, I agree, in our dev team, they've hired people with the title "programmer analyst" and "software developer" who can't write code in any language, and have absolutely no development education. How the heck does THAT work?!
Jeremy
I'm unsure about asking people programming/math questions in an interview :-) I don't have an issue with a written test. That said, sometimes it is really cool to get someone who is excited about discussing such problems.
billybob
Even if this is controversial, it shouldn't be. I think it should be obvious.
luiscubal
@luiscubal - my thought is that you can advertise for people and not get enough to fill positions who have any demonstrable coding skill - so do you hire one of the 'non-coders' as a developer or leave the position unfilled? That question is where the controversy lies, IMO.
duncan
I don't buy this. Need a builder, do you ask them to build a little wall first? Looking for a school for your kid? do you ask them to 'teach' you something? No, You look at their achievements and qualifications. This guy hit the nail on the head. http://burningbird.net/technology/perfect-example/
Skittles
The "write function to calculate area of a circle" is a favourite question of mine when running interviews. I usually bring it out at the start, and if they flunk that, game over. I am constantly surprised how many "experienced programmers" CANNOT even do something this basic.
madlep
@Skittles - basic logic dictates that you need to prove your analogies are actually applicable.
SDX2000
+1 because I agree, but mostly because I cannot believe how many people disagree with this. You're developers for Jeebus sake, you're supposed to be able to write code, anywhere, at anytime, under any conditions.
Binary Worrier
math == coding (sort of)
pi
I was once asked to draw a house... everyone else in the room drew a cartoon, i drew a blueprint.
Chris
@Skittles: just looking at achievements isn't enough because it is _very_ easy to take credit for other people's work. Many people look good on paper and then can't even answer very simple questions. Plus, prior performance doesn't always predict future performance.
Greg Rogers
@Skittles, getting somebody to renovate a house, or build a wall, I would go and examine a wall they had already built, or check an accreditation with a known body. As most developers are not free to show off their previous work and there are few places for accreditation, other alternatives are needed
Greg Domjan
BS problem. To determine the accuracy of the algorithm you have to have an accurate value of Pi as a point of reference. My answer would be: use a constant, and don't be a pedant.
_ande_turner_
@Ande: you fail the interview. Not that I'd ever ask question 1, because I can accept some people suck at math, but a sequence that alternately adds then subtracts a decreasing number has a well-defined upper and lower bound.
Jimmy
The problem is that the question doesn't test coding ability, it tests ability to pseudocode on a whiteboard in a high-stress situation. Many competent people, especially introverted developers, will merely freeze in the spotlight at this point. Give them a computer and a quiet room instead.
nezroy
@nezroy - I've never seen any competent developer freeze. These aren't the only type of questions asked (most are just discussing technical issues) and the people who can't start these questions don't tend to do very well in those either.
Greg Beech
Also we always tell candidates, up front, that the interviews will involve coding on a whiteboard and/or a laptop. So nobody is going in there to be surprised. Experience shows that those who can't do it on a whiteboard can't do it on a laptop in a quiet room either.
Greg Beech
The point of the pi calculation is not that you expect a perfect high-performance solution, but that you get to see the programmer at work. You tolerate whiteboard mistakes, and you tolerate fiddling around. Somebody who comes up with a slightly flawed answer is a good bet.
David Thornley
If that were an interview question, I'd give the job to the person who responded with: return System.Math.PI;
Craz
Many of you aren't really understanding the point. It shouldn't be that hard to come up with *something* for the PI question. Even though it's right to use the constant, the intent of the question is to see that you can think, and not to get the best answer. This should be made clear up front.
FryGuy
"I've never seen any competent developer freeze" - well abviously. You define "competent programmer" as not equal to "froze in interview"...
WOPR
@Kieran - The sentence after the one you quoted says "These aren't the only type of questions asked (most are just discussing technical issues) and the people who can't start these questions don't tend to do very well in those either." I can judge whether they are competent from these too.
Greg Beech
Jeff (CodingHorror) likes Fizz Buzz as a simple programming test http://www.codinghorror.com/blog/archives/000781.html
MarkJ
I've only ever worked as a web developer for a year, and I failed my university studies, but even I can answer that first question in a handful of lines of code. This is NOT a math question at all, the math is handed to you on a platter. This is out and out coding, implementing the math in your code
Matthew Scharley
I don't see why there are people who complain about having to solve a "math problem". You're given a formula, program it. It has zero math in it. Only thing that requires some thinking is the accuracy to five numbers. I wouldn't want someone who can't solve this little problem.
Carra
Math.round(Math.PI, 5); ;)
Fraser
javascript... function GetArea(circle) { return Math.PI * circle.radius * circle.radius; }
Tracker1
I guess as a math person, I recognize that as a telescoping series. Thus when the absolute value of the next term is less than 1/100000 it will be within 5 decimal places.
rlbond
I don't understand how anyone interviewing for a developer job could fail at the "area of the circle" test. Can you please give an example of what you hear from inadequate candidates?
David
This is a great test! I was able to write it in about 10 minutes in C#, and I know that a lot of the people I've hired in the past wouldn't have been able to do it at all
Kevin Laity
Edit: I'm referring to the PI question, not the area of a circle question of course. That one would take more like 6 seconds.
Kevin Laity
This is a great question because it tests more than basic coding ability. It tests the maturity of the job candidate. A good candidate will treat the whole accuracy thing lightly and ask why it's necessary. A bad candidate will get really worked up about it.
MrDatabase
How is that 'controversial'?
Chad Okere
I don't understand the purpose... Why do you need to calculate pi? PI is (essentially) constant. WTF. If anyone writes a function longer than: function() { return 3.14159 } they're wasting their time.
jason
@jason - It's to test your ability to think about a problem, break it down into its component parts, and write code that implements it. The subject matter is not important -- it's an interview question, not a real world requirement.
Greg Beech
"They're just questions, Leon."
quant_dev
I could understand not being able to write a fully working solution on a whiteboard off the top of their heads... but if they can't reason out the LOGIC to do it, that just makes me SAAAAAAAD!
Gabriel Hurley
@jason - you completely missed the point. You want to see if they can actually take a problem and develop a programmatic solution - if they can't do that, why would you let them take a list of design requirements and tell them to build a fully fledged business system?
Callum Rogers
return Math.PI; :P
Nick Bedford
Thats exactly what I was thinking Nick.
Kyle Rozendo
Sorry, but, did you just say that they couldn't even work out the second question? Damn... I actually find that hard to believe. Perhaps I have too much faith in humanity :(
@dstibbe - Yep, I'm not kidding, unfortunately. I found it hard to believe too. It was actually quite awkward being in an interview with supposedly professional programmers who couldn't begin to answer even the most basic programming questions.
Greg Beech
mine would just say print 3.14159. thats the algorithm.
corymathews
If you choose to just hardcode PI, you failed the test. One of the more important tasks of a developer is to correctly interpret the customer's demands. The customer in this case wasn't interested in a function returning PI. The interest was in seeing you produce a simple function following text instructions. Trying to be smart and bypass the actual construction of a function by hardcoding the answer fails to recognize that.
Marcus Andrén
even if you don't know the Leibnitz formula (like me), you could use a method like double doMagic(int n) and work on a solution for the rest.
Zappi
Ignoring the fact that the known value of the series is PI, suppose you were just given a series and asked to write a program to calculate its value. Now, the real question is whether the series is convergent or divergent, and how fast it converges. This can't be reliably determined by writing a program. Your winning candidate would tell you to hire a mathematician. Your super-winning candidate would then offer to put on his other hat and study the problem.
ddyer
`4 * (1 - 1/3 + 1/5 - 1/7) = not even close to PI` so if this was your question, they probably couldn't answer it because it doesn't add up
Russ Bradberry
What is so complex about q1? Assuming you only need to run the loop 5 times: double pi() { double ret = 0; double denom = 1; for (int i = 0; i < 5; i++) { double a = 1 / denom; denom += 2; double b = 1 / denom; denom += 2; ret = ret + (a - b); } return (4 * ret); }
tgandrews
Both of these questions are better than any "real world" problems that one could use as both require minimal background knowledge and have very precise requirements. Anyone who struggles with translating these into even pseudo-code can't be expected to do any better when dealing with real work.
lins314159
+1 I think the code (and the thinking behind it) is perfectly achievable in a job interview. OK, everyone can have a bad day, get nervous and miss the point, and that's why I don't believe in asking for code and checking it with a compiler; I think it's important to see the thinking process... Anyway, I've done a few "code tests" in several job interviews which went well, and then not got the job, so I think it's not as uncommon as it seems.
Khelben
Anybody who asked me to write a function to calculate pi to 5 decimal places, would see me say "Thanks for your time" and leave. But if they asked me to write code that will calculate pi to 200 decimal places, then they've asked me something interesting. It is worth calculating pi from scratch, but not when you're interested in less precision than a pocket calculator, or what we have stored in our heads already.
Warren P
I coded a solution in C to the above PI question easily but got what I thought was a strange answer. It results in a poor approximation of PI; there are better approximation algorithms than that.
Gary Willoughby
My favorite language is C# so my answer would be: Double GetStupidQuestionAnswer() { return Math.Round(Math.PI, 5); } If this thread wasn't so long I would post this most controversial programming opinion: A good programmer doesn't need the math background to re-implement well known algorithms.
@jason: Oh dear. It might shock you, but when you got tested on the multiplication table in school, *it wasn't because the teachers wanted to know what 5 * 7 was.*
j_random_hacker
Man... those 2 questions would drive me out of the interview room! When a potential employer begins by asking me simple academic questions that have no bearing on real-life programming problems, my instincts tell me that the team being assembled will have problems with project delivery.
code4life
@code4life - Your instincts would be wrong then. In the last three years we've had 16 major releases and about 25 interim releases, and have hit the go-live date for all but one, which we slipped by a week due to unforeseen circumstances. I'd say that's a pretty good track record for project delivery.
Greg Beech
@Greg, I guess birds of a feather flock together... In any case it's just not my cup of tea. And certainly not the sort of questions I'd ask at an interview.
code4life
@j_random_hacker: Then you missed the point of my comment. A decent developer will realize that calculating a known constant is completely ridiculous, and should demonstrate that knowledge even in an abstracted interview question such as this. Part of being a good programmer is challenging even the fundamentals of the problem you're presented, to see if you can accomplish your goal in a more graceful and efficient manner.
jason
@jason: Sure, as an interviewer I would give a couple of points for mentioning that this is something that you would never do in practice -- I would acknowledge the truth of that, and then ask them to solve it anyway, *as an exercise to show that they have some basic skills*. If at that point they still don't want to try their hand, then they have either attitudinal or cognitive problems, and I'd show them the door.
j_random_hacker
I would vote you up, but you have 314 points...
Douglas
@jason, Programmers need to be able to deal with abstract problems, impractical demands, and underdeveloped technologies. Good programmers can handle theoretical problems, be diplomatic about suggesting alternate solutions, and be able to recognize when a situation is symbolic of something else (solving this problem symbolizes ability, not the need to solve the problem).
Kimball Robinson
whats a radius?
Uncle
You will not be able to explain why a given (or your own) solution to this problem is correct without mathematically analyzing it, because otherwise you will not be able to explain why your solution is accurate to 5 decimal places. It is **not** enough to just assume that if your 5th digit does not change anymore after *n* steps, you have reached it. (For whatever *n*; probably people would have stopped with *n=1*.)
Albert
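For reference, the Leibniz-series question discussed above fits in a few lines. This is one possible sketch (in Python), relying on the alternating-series error bound mentioned in the comments: for an alternating series with terms decreasing to zero, the error of a partial sum is smaller than the first omitted term.

```python
import math

def leibniz_pi(tolerance=1e-6):
    """Approximate pi with 4 * (1 - 1/3 + 1/5 - 1/7 + ...).

    Because the terms alternate in sign and decrease to zero, the error
    of a partial sum is bounded by the first omitted term, so stopping
    once 4/denominator drops below the tolerance guarantees accuracy.
    """
    total = 0.0
    sign = 1.0
    denominator = 1
    while 4.0 / denominator >= tolerance:
        total += sign * 4.0 / denominator
        sign = -sign
        denominator += 2
    return total

print(round(leibniz_pi(), 5))  # → 3.14159
```

As several commenters note, nobody would compute a known constant this way in production; the point of the exercise is only to watch a candidate translate a formula into a loop with a justified stopping condition.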
+68  A: 

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?

John Booty
What's your view on whether the *type* of the variable should be explicit or not? (Thinking of "var" in C#.)
Jon Skeet
Good one. If you have to work with legacy Fortran code, you wouldn't believe the headaches caused by this issue.
Mike Dunlavey
I actually wanted to write this same opinion, as well. IMHO, this is the major drawback of both Python and Ruby, for no good reason at all. Perl at least offers `use strict`.
Konrad Rudolph
Explicit declaration is good, to avoid typos. Assigning types to variables is frequently premature optimization.
David Thornley
Yup. *ONE* bug hunt involving an l (between k and m) becoming a 1 (between 0 and 2) wasted more time than a lifetime of declaring variables.
Loren Pechtel
Anything else is not a real language. Now THAT'S controversial.
Andrei Taranchenko
Controversial... but true!!
MarkJ
I remember learning Visual Basic 6 in high school. If OPTION EXPLICIT was not the first line in each source file, we would fail.
rlbond
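The failure mode this answer describes is easy to demonstrate. A sketch in Python, which has no mandatory declarations (the `totel` typo is deliberate):

```python
def average(values):
    total = 0
    for v in values:
        totel = total + v  # typo: silently creates a new variable instead of erroring
    return total / len(values)

# Runs without any exception, but always returns 0.0. A language with
# explicit variable declaration would reject `totel` at compile time.
print(average([2, 4, 6]))  # → 0.0
```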
+1  A: 

That (at least during initial design), every database table (well, almost every one) should be clearly defined to contain some clearly understandable business entity or system-level domain abstraction, and that, whether or not you use it as the primary key and as foreign keys in other dependent tables, some column (attribute) or subset of the table attributes should be clearly defined to represent a unique key for that table (entity/abstraction). This is the only way to ensure that the overall table structure represents a logically consistent representation of the complete system data structure, without overlap or misunderstood flattening. I am a firm believer in using non-meaningful surrogate keys for PKs and FKs and join functionality (for performance, ease of use, and other reasons), but I believe the tendency in this direction has taken the database community too far away from the original Codd principles, and we have lost much of the benefit (of database consistency) that natural keys provided.

So why not use both?

Charles Bretana
+4  A: 

Whenever you expose a mutable class to the outside world, you should provide events to make it possible to observe its mutation. The extra effort may also convince you to make it immutable after all.

Alexey Romanov
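A minimal sketch of the idea in Python (hypothetical `ObservableAccount` class, not from the answer; listeners subscribe to mutations via a callback):

```python
class ObservableAccount:
    """A mutable class whose mutations can be observed by outside code."""

    def __init__(self, balance=0):
        self._balance = balance
        self._listeners = []

    def on_change(self, callback):
        """Register a callback invoked as callback(old_value, new_value)."""
        self._listeners.append(callback)

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, new_value):
        # Every mutation notifies all registered observers.
        old_value = self._balance
        self._balance = new_value
        for listener in self._listeners:
            listener(old_value, new_value)


account = ObservableAccount(100)
account.on_change(lambda old, new: print("balance: %s -> %s" % (old, new)))
account.balance = 150  # prints "balance: 100 -> 150"
```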
+367  A: 

Opinion: SQL is code. Treat it as such

That is, just like your C#, Java, or other favorite object/procedure language, develop a formatting style that is readable and maintainable.

I hate when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don't you scream when you see free-formatted SQL or SQL that obscures or obfuscates the JOIN condition?

And check it into source control
Cameron MacFarland
sqlinform.com is your friend.
Christopher Mahan
It depends. There are an awful lot of idiots who set hard-to-maintain styles in SQL. It is one thing to set up indentation standards; it is another thing to make it so all the name = value or column as name entries line up (because if you add something long you often have to re-indent)
Cervo
Amen! I have found if I ever have to update someone else's code and their formatting sucks, I have to reformat the code to make it readable before I can make my changes.
Jeremy
A lot of XML should be treated as code too. Whether this is XSLT or plugin.xml/web.xml configuration, it is not just data, it's wiring.
jamesh
I do scream! And, @Cameron: bravo, my friend.
JasonFruit
@Christopher Mahan hey, thanks, didn't know about that. You'd think after all these years I'd know to google for a solution to a reoccurring problem like that.
Manos Dilaverakis
++ to this and especially the source control bit. I am constantly flabbergasted that people don't 'get it'.
kpollock
I've also heard "There's no code change required. We just need to tweak the SQL"!
LaJmOn
People still write SQL? j/k :P
Lusid
One of my ex-bosses didn't treat PHP like code, and built up a string with a single assignment that was around 100 lines long. Same for indentation. But Delphi he could at least format in a readable manner :S
phresnel
"Code or Code"? Did you mean "Schema or Code" perhaps?
Adam Nofsinger
This is a great answer. SQL and DDL should be treated both as code.
Kwang Mark Eleven
Very good point
dimus
Right - it's code. It's just that it's Bad code.
Ladlestein
+1 and I'll add the corollary that it should be tested (in some manner, via integration or Fit tests).
Michael Easter
Yeah! And don't type it all in CAPS either!
Chris Needham
Readable to you is not readable to me. I think your formatting is not readable. Get over it.
Joe Philllips
Agreed 100x over. Don't be afraid to use stored procedures where possible, either.
baultista
And release it like code. Check it into source control, and manage its release just like you would any code. Do not change it directly in production!
Mike Miller
+3  A: 

MVC for the web should be far simpler than traditional MVC.

Traditional MVC involves code that "listens" for "events" so that the view can continually be updated to reflect the current state of the model. In the web paradigm however, the web server already does the listening, and the request is the event. Therefore MVC for the web need only be a specific instance of the mediator pattern: controllers mediating between views and the model. If a web framework is crafted properly, a re-usable core should probably not be more than 100 lines. That core need only implement the "page controller" paradigm but should be extensible so as to be able to support the "front controller" paradigm.

Below is a method that is the crux of my own framework, used successfully in an embedded consumer device manufactured by a Fortune 100 network hardware manufacturer, for a Fortune 50 media company. My approach has been likened to Smalltalk by a former Smalltalk programmer and author of an O'Reilly book about the most prominent Java web framework ever; furthermore I have ported the same framework to mod_python/psp.

static function sendResponse(IBareBonesController $controller) {
  $controller->setMto($controller->applyInputToModel());
  $controller->mto->applyModelToView();
}
George Jempty
Your bio is scary - all washed up at 20! Here is my own anti-MVC screed. http://stackoverflow.com/questions/371898/how-does-differential-execution-work
Mike Dunlavey
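The mediator-style core described in the answer above might be rendered in Python roughly as follows (hypothetical names; this is a sketch of the pattern, not the author's actual framework):

```python
class BareBonesController:
    """Page-controller base: subclasses turn request input into a
    model-to-view transfer object ("mto" in the answer's terms)."""

    def apply_input_to_model(self, request):
        raise NotImplementedError


def send_response(controller, request):
    # The entire reusable "core": the controller mediates between the
    # model and the view; the request itself is the event.
    mto = controller.apply_input_to_model(request)
    return mto.apply_model_to_view()


# A toy page controller built on the core:
class HelloMto:
    def __init__(self, name):
        self.name = name

    def apply_model_to_view(self):
        return "<h1>Hello %s</h1>" % self.name


class HelloController(BareBonesController):
    def apply_input_to_model(self, request):
        return HelloMto(request["name"])


print(send_response(HelloController(), {"name": "world"}))  # → <h1>Hello world</h1>
```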
+59  A: 

Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant that you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that in order to get any work done, if a file was locked devs would just change their local file to writable and then merge (or overwrite) the source control copy with their version when they had the chance.

Cameron MacFarland
I've always local-mirrored the code. Then I would do the merging with Windiff and an emacs-macro, then lock it only long enough to check in the changes. I hated it when people would lock a file, then go on vacation.
Mike Dunlavey
I used to think that it was impossible to work in a team without file locks in your SCM. But after working with Subversion in four companies (and rolling it out myself in two of them, I find merging (auto when possible, manual when not) much better 99% of the time.
dj_segfault
Not controversial. Nobody used SourceSafe by choice.
MusiGenesis
@MusiGenesis: Yes they do. They exist.
Cameron MacFarland
My company is still using SourceSafe. The main reasons are a) General inertia and b) The devs are scared of the idea of working without exclusive locks.
T.E.D.
My personal feeling is that the ability to merge code files should be a skill all programmers need, like all programmers need to know how to compile their code. It's part of what we do as a byproduct of using source control.
Cameron MacFarland
@MusiGenesis: I've headed a move away from SourceSafe in two different companies over the last 5 years, and in both cases the reason for using SourceSafe was ignorance of the alternatives.
scraimer
SourceSafe doesn't even work on anything based on IIS7. So soon enough it's going to be pretty much redundant.
Ed Woodcock
Just to be pedantic...while exclusive locks were the default until recently, SourceSafe has actually supported edit-merge-commit mode since 1998.
Richard Berg
@Richard: Yes but nobody who uses Source Unsafe uses it in Merge mode because they're afraid to, etc.
Cameron MacFarland
worked very well for many years for us.
peterchen
MKS baby! Finally just killing it off now.
TJ
I would never want to put my precious source in something notorious for corrupting files. Had to use it once due to a lack of alternatives, got burnt.
Oorang
@MusiGenesis we do at my work place, but I don't particularly enjoy it. I'm much happier with SVN.
baultista
+5  A: 

Arrays should by default be 1-based rather than 0-based. This is not necessarily the case with system implementation languages, but languages like Java swallowed more C oddities than they should have. "Element 1" should be the first element, not the second, to avoid confusion.

Computer science is not software development. You wouldn't hire an engineer who studied only physics, after all.

Learn as much mathematics as is feasible. You won't use most of it, but you need to be able to think that way to be good at software.

The single best programming language yet standardized is Common Lisp, even if it is verbose and has zero-based arrays. That comes largely from being designed as a way to write computations, rather than as an abstraction of a von Neumann machine.

At least 90% of all comparative criticism of programming languages can be reduced to "Language A has feature C, and I don't know how to do C or something equivalent in Language B, so Language A is better."

"Best practices" is the most impressive way to spell "mediocrity" I've ever seen.

David Thornley
Your last sentence is +1. The rest is IMHO wrong, because zero-based indices are very useful: they make the indices of a container of size N the set of integers in the half-open interval [0, N). This has some nice mathematical/algorithmic/practical consequences.
Konrad Rudolph
Personally, I haven't seen as much use for the half-open intervals as you have. If you could leave a pointer in a comment, I'd be interested.
David Thornley
+1 because A) I disagree with paragraph 1, so I guess it answers the question, and, 2) I like the other paragraphs :)
Mike Dunlavey
Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration. - Stan Kelly-Bootle
Gavin Miller
Yup, +1 for your final sentence.
Graham Lee
+1 for your comment about Common Lisp
Technical Bard
+1 for learning math, -1 for saying Lisp is best (it takes more than parentheses to make a good language)
Lance Roberts
in smalltalk arrays start with 1
nes1983
It's just a convention and it doesn't matter.
Seventh Element
Can't agree with the 1-based arrays, either. Would make add/remove elements much more complex (because you'd have to rebase your indexes during the operation). I'd opt for -1 being the last element in an array, though :)
Aaron Digulla
What's the difference between 0-based and 1-based arrays for add/remove? Python's notation using negative numbers for measuring from the end is kinda neat.
David Thornley
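Regarding the half-open-interval question raised in the comments, a short Python sketch of the properties usually cited in favor of zero-based, half-open ranges (an illustration, not an argument-settler):

```python
a = list(range(10))
i, j, k = 2, 5, 8

# The length of a slice is just the difference of its bounds...
assert len(a[i:j]) == j - i

# ...and adjacent half-open ranges tile the list with no overlap or gap.
assert a[i:j] + a[j:k] == a[i:k]

# With 1-based closed intervals, the same identities pick up off-by-one
# corrections: length becomes j - i + 1, and adjacent ranges meet at j, j + 1.
print("half-open interval identities hold")
```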
+56  A: 

All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next turn. Therefore, immutability would work nicely with loop variables and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutables. Why then allow an assignment to a string variable at all? Much better to use a builder class (i.e. a StringBuilder) all along.

I realize that most languages today just aren't built to acquiesce in my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they would be changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.

Konrad Rudolph
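A Python sketch of the accumulator point above: once the fold is expressed through a higher-order construct or a comprehension, the mutable running variable disappears entirely.

```python
values = [3, 1, 4, 1, 5]

# Mutable-accumulator style: `total` is reassigned on every iteration.
total = 0
for v in values:
    total += v

# Accumulator-free style: nothing is ever reassigned.
total_sum = sum(values)                    # built-in fold
squares = [v * v for v in values]          # list comprehension
csv = ",".join(str(v) for v in values)     # string built without a concatenation loop

assert total == total_sum == 14
print(squares, csv)  # → [9, 1, 16, 1, 25] 3,1,4,1,5
```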
Most functional languages are just like this; for example F# explicitly requires you to declare something as "mutable" if you want to be able to change it.
Greg Beech
Functional languages are just superior that way. Of the non-functional languages, Nemerle seems to be the only one offering this feature.
Konrad Rudolph
I like the bit in SICP where the authors dismiss 'looping constructs such as do, repeat, until, for, and while' as a language defect.
fizzer
Disagree but made me think. Interesting.
Steve B.
I personally like this. "Everything is immutable" makes multithreaded code a lot easier to write: locks are no longer needed since you never have to worry about another thread changing your object under your feet, so a whole class of errors related to race-conditions and deadlocking cease to exist.
Juliet
There's no such thing as a free lunch. Immutability, despite its many benefits, will have a cost. Generally I like the idea, in the same way I like the idea of functional programming. Can I get my head round that? No. Am I particularly thick? Maybe, but I don't think so.
AnthonyWJones
@AnthonyWJones: what costs does immutable-by-default have?
Juliet
This makes me wonder what my code would be like and how I would need to change my understanding of programming paradigms. Could I deal with immutable variables? I can't begin to grasp the extent of the repercussions of doing this in C#, but I can't imagine anything good coming of it.
BenAlabaster
The thing I don't like about immutability is the amount of copying required.
TraumaPony
I thought this was too much when I read it in Effective Java: favor immutability. Then, when applied, it makes total sense. Apps are MUCH easier to create and maintain using immutability. The only extra thing needed is a macro template to "code" the copy methods, just as TraumaPony pointed out.
OscarRyz
Language constructs can't take care of all accumulator cases. Sometimes what you are adding up isn't a simple list. It also could make hairy logic in some cases as you can't have a default value.
Loren Pechtel
@TraumaPony: The nice thing about immutability is that in (almost?) all cases copying can be replaced by simple aliasing. This *does* require some changes in data structures, though.
Konrad Rudolph
Another case that can't be immutable: Any sort of iterative calculation or calculation within a loop. More generally, the data you are working on. How well would Microsoft Immutable Word sell??
Loren Pechtel
@Princess: immutable-by-default has a comprehension cost. It's much more difficult to think about (not reason about, think about) immutable-by-default objects/variables/what-have-you.
Jeff Hubbard
I agree that variables should be readonly whenever possible. It lets the compiler optimize and it lets the developer know the value never changes after a certain point.
Jeremy
@Loren: about your “other case”: how is that different from a special accumulator? It is actually just that, and well covered by many frameworks, such as LINQ. Notice that any kind of user interaction rarely benefits from immutability so Immutable Word is probably not a good idea.
Konrad Rudolph
@Jeff: I think this is *at least* debatable. Programming in general has a comprehension cost, any style of programming does. But I doubt that immutable-by-default incurs *any* additional comprehension cost at all, especially since it's much closer to the mathematical use of variables in equations.
Konrad Rudolph
@Loren Pectel, I think that databases should be immutable too.
tuinstoel
There's an obvious cost in complexifying and slowing down the code, to a huge degree. This idea must have been thought of by those who don't have to do too much math programming.
Lance Roberts
@Lance, The opposite is true. Immutability actually helps the compiler a great deal in producing *more efficient* code because it can apply many more automated optimizations. This style of coding works perfectly with “math programming” (I guess you mean arithmetically dense code).
Konrad Rudolph
I want an immutable apple. When I take a bite of the apple I get your apple with the bite taken out of it, and can give my apple to the next person who wants a whole apple.It's all so simple!
Greg Domjan
@Greg, Things always change, we developers are the orchestrators and conductors of this change, because we change and shape the future with our ideas and our code. That's the reason we want immutability!
tuinstoel
Yes, and we'll only access read-only databases, stored on read-only media. Maybe once our programs have no mutable state, and therefore accomplish nothing we can move on to truly pure functional programming where nothing happens and the compiler with the best optimization outputs nothing.
postfuturist
Might be a little hard to animate anything if the variables describing the object to animate were immutable.
Kamil Szot
@Kamil: no, not at all. In fact, `Point` objects in .NET *are* immutable, and animate just fine. You just need to create a new object for each animation position – which *sounds* inefficient but really isn’t necessarily.
Konrad Rudolph
Interestingly, in Java even loop variables can be final: for (final String item : list) { ... } Took me a while to discover that.
hstoerr
He's not saying that all variables should be final, he's saying all variables should be final *by default*. That's reasonable.
Craig P. Motlin
+253  A: 

Unit Testing won't help you write good code

The only reason to have Unit tests is to make sure that code that already works doesn't break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won't even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances.

And furthermore, good developers will keep cohesion low, which will make the addition of new code unlikely to cause problems with existing stuff.

In fact, I'll generalize that even further,

Most "Best Practices" in Software Engineering are there to keep bad programmers from doing too much damage.

They're there to hand-hold bad developers and keep them from making dumbass mistakes. Of course, since most developers are bad, this is a good thing, but good developers should get a pass.

Chad Okere
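Whichever side of this argument you take, the regression use that the answer does endorse looks the same in practice. A minimal sketch (the `slugify` function is a hypothetical stand-in for code that already works):

```python
import unittest

def slugify(title):
    """Hypothetical function whose working behavior we want to lock in."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Regression-style tests: written once the code works,
    # to catch anyone breaking it later.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  two   words "), "two-words")
```

Run with `python -m unittest`; if a later change breaks `slugify`, the suite fails and the regression is caught.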
+1 - I think your further generalization sums up my opinion very well
Greg Beech
Although I agree with your second statement ( the first I'm not sure about ) who judges who the good developers are? Many of the smartest programmers I know will often make dumb mistakes out of pure arrogance because they believe themselves to be such good developers.
glenatron
If you don't know what the edge cases are, perhaps you don't understand the problem you're trying to solve.
Barry Brown
I'd go so far as to say all programmers are bad developers on some days, so they're there to minimize the damage you can do to yourself. I agree with you about Unit Tests, in a lot of situations the cost of maintaining Unit Tests gets really high compared to the benefits.
mweiss
I have to point out that NOT Unit Testing won't help you write good code, either. Writing the tests first does force you to think differently about your API, which can arguably make your code better. If you don't know what tests to write, then you don't know what code to write either.
Bill the Lizard
It can help you write good code... if you write excellent automated unit tests, it helps you to decouple your code so that it can be tested, and thus it becomes more reusable.
Jesse Pepper
@Barry: Some edge cases are inherent in the problem, others are artifacts of the implementation. A test written before the code will only be able to handle the first type.
Dave Sherohman
IMO people who believe that best practices don't apply to them because they're better than everyone else are actually likely to be the worst programmers around.
Michael Borgwardt
No it won't, but it will help you maintain the quality of already good code when you need to modify it later.
Dan
unit tests are invaluable for regression testing - e.g. to make sure your refactoring change didn't break anything *else*
kpollock
I think Kent Beck's "TDD by Example" would convince you that writing tests first is not ridiculous at all (albeit, as for any practice, there is no need to always follow it slavishly).
Fabian Steeg
Agreed, but I think most "best practices" in software engineering are actually there to ensure job security for the engineers.
MusiGenesis
-1 Anyone who thinks they're too good a developer to need to test hasn't progressed very far. Need to be humble to be good.
MarkJ
I think you missed the point. Unit tests for libraries serve as the most concise and correct documentation for the library in existence. Treat it as documentation - cause that's what it is.
Thorbjørn Ravn Andersen
the easiest way to keep bugs from reappearing is through tests. If something breaks, write a unit test that fails, THEN fix the code so that the unit test no longer fails. If the test ever fails in the future, then someone reintroduced the bug.
Laplie
Oh yeah sure the BAD developers need to have their hands held, but us GOOD developers know better, right? We don't need no stinkin' tests, our code is totally SOLID. Like a rock. SRP, OCP, DIP, you name it, us GOOD developers nail it first time, every time... Gimme a break, this makes me sick. -1
Paul Batum
I've tinkered with TDD, and found it to be a good way of developing. I usually write a test for the expected route ("the happy path") first, followed by a test per corner case. At the end, I have a piece of code in which I have a greater confidence, and it usually leaves me with a smile.
Kaz Dragon
+1 for the boldface statement.
flodin
Wow, couldn't disagree more! But not going to downvote you, as it's a valid opinion. But, I think of writing tests first makes you consider how your api will be consumed, so it's more 'client-first' development.
Travis
If your unit testing doesn't cover edge cases, you haven't written your tests thoroughly enough. If you introduce further edge cases into the software because of your coding technique, it's a good sign you haven't thought about the problem enough yet - something unit tests help you do.
notJim
The purpose of unit test cases is to serve as one aid in verifying that the code has implemented the functionality. Of course you can't do white box testing before writing the code, but it is perfectly useful and desirable to determine positive and negative test cases for spec functionality.
Kwang Mark Eleven
@Thorbjørn Ravn Andersen - I agree, but most people are app programmers rather than library programmers.
user9876
Really disagree. Unit tests act as documentation. Writing code with mocking/unit testing in mind forces you to decouple and inject in your dependencies; it makes your code implicitly more reusable as you are writing the code with two uses in mind (its primary function and the test). Unit tests *should* cover the edge cases, because you should be adding more unit tests as you are writing the code and spotting the edge cases (yes - right there and then as you key the edge case in). Unit tests help prevent regressions, etc, etc. I can't help but feel that you've missed the point somewhat.
Rob Levine
-1 for attempting to turn developers off a valuable technique with which you clearly have little experience.
TrueWill
I have ended up with better APIs in production code as a result of writing unit tests, since the code needed to be restructured to allow better testability.
Matthew Wilson
I agree that this is controversial - but can't agree with the statement (which is why I agree it is controversial). TDD can really help if you do it right. The problem with most methodologies, though, is that people DON'T ever do them right and then dismiss them. Hands up if you've been on a Scrum project where the business and testers weren't actually involved!!!
Sohnee
If you don't know the edge cases before you write the code (to put them into the tests) how the *** are you going to write the code????
amischiefr
Unit Testing is about quality assurance. It's there to make sure your code works and fails as you would expect it to. And then when you modify it later. And then when someone else modifies it later. Used correctly it does improve the quality of the code in the development phase which is the cheapest place to catch and fix bugs.
Swanny
+1. "Unit tests as documentation" is poor man's DbC. Other uses are very much overrated. And the unfortunately common mentality of "design to limitations of my favorite testing framework" (like interfaces and factories for everything, or all members virtual, just so they can be mocked) results in ugly APIs and overcomplicated code.
Pavel Minaev
"good developers will keep cohesion low" don't you mean good developers will keep cohesion high? High cohesion means you likely have small single purpose classes.
ceretullis
I think good developers are the ones who consider writing unit tests, and try it out, and then find out that (a) it helps, or (b) it doesn't, and are capable of finding some other way to help their process stay under control if unit tests are not working for them. "They are always worth it" is a lie. "They are never worth it" is also a lie. They are almost always worth it, in my opinion. But that's not always true. I would say "good developers" should show me an equivalent way of finding regressions that is automatic and can be used in a smoke-testing environment. And then I'll listen to them.
Warren P
Disagree, by writing unit tests you are forced into evaluating the way that a given method will actually work and therefore have to consider ways in which it could be broken by a programmer. This in turn may make you re-evaluate the method signature or return type etc, leading to better code, more sustainable and thought out code. It is then, of course, available for making sure that you don't introduce bugs later etc!
Gary Paluk
unit tests makes you see your interfaces from the consumer's perspective, which will help you make it simpler and more coherent for others to use. Unit tests are also awesome ways to experiment with new features and ease them into your codebase. Not just for regression testing and debugging.
burkestar
I wish I could upvote this a million times.
benjy
I would generalise your rule even further: **Most rules in life are designed to stop stupid people from doing stupid things** -> The "very good" people follow the rules to the tiniest detail, the brilliant people throw away the rule book, and do things their own way. note: I am not saying that rules are bad, but they are by their nature way too rigid
Nico Burns
+9  A: 

My one:

Long switch statements are your friends. Really. At least in C#.

People tend to avoid and discourage others to use long switch statements because they are "unmanageable" and "have bad performance characteristics".

Well, the thing is that in C#, switch statements with many cases are compiled to jump tables (or, for strings, hashtable lookups), so actually using them is the Best Thing To Do™ in terms of performance if you need simple branching to multiple branches. Also, if the case statements are organized and grouped intelligently (for example in alphabetical order), they are not unmanageable at all.

DrJokepu
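Python has no `switch` (as a comment below notes), but the hashtable dispatch the answer attributes to the C# compiler can be sketched by hand with a dict of handlers; lookup cost stays constant no matter how many cases you add (handler names are hypothetical):

```python
# Hand-rolled "switch": a dict maps case labels to handler functions,
# so dispatch is a single O(1) hash lookup regardless of case count.
def handle_start(arg): return f"starting {arg}"
def handle_stop(arg): return f"stopping {arg}"
def handle_status(arg): return f"status of {arg}"

HANDLERS = {
    "start": handle_start,
    "stop": handle_stop,
    "status": handle_status,
}

def dispatch(command, arg):
    # .get() plays the role of the "default:" branch
    handler = HANDLERS.get(command)
    if handler is None:
        return f"unknown command: {command}"
    return handler(arg)

print(dispatch("start", "server"))  # starting server
```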
Define long. I've seen a 13,000 line switch statement (admittedly it was C++ but still...)
Cameron MacFarland
Well, (in c#) if the switch statement is generated (as opposed to manually edited), I see nothing wrong with a 13K line switch statement to be honest. It's going to end up as a hashtable anyway.
DrJokepu
Of course, if it has 13K lines because there is loads of code in each "case" clause, that's totally different. It should be refactored then.
DrJokepu
Ever wondered why there is no "switch" statement in python?
Christopher Mahan
Actually, I do. Was it either that or if, and replacing all if's with switch's would have been a bit too verbose, even for python?
JB
What I want a compiler to do is generate good assembly code for me, and switch is how I tell it I want a jump table. That said, it's easy to think you're doing things for "performance" reasons when in fact you'll never notice the difference.
Mike Dunlavey
@Mike: if you have a switch statement with thousands of cases, you _will_ notice the performance difference between a jump table and a series of if-else statements.
DrJokepu
How can you have thousands of cases? I can't imagine it, do you have an example?
tuinstoel
@tuinstoel: It's not that hard to imagine it if you try. Before the rise of floating point units, it was a common practice to keep trigonometric functions in lookup tables. I think that keeping the results of complex math functions in premade lookup tables still makes sense today.
DrJokepu
Great answer. Agree completely.
Jonathan C Dickinson
+180  A: 

Software development is just a job

Don't get me wrong, I enjoy software development a lot. I've written a blog for the last few years on the subject. I've spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting.

But in the grand scheme of things, it is just a job.

It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I'd rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding.

I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

Greg Beech
I would say that about money: money is just a means to enhance your life; don't let it get in the way of enjoying your life. This is getting way off topic ...
hasen j
It can be a passion for some people too!
SDX2000
I wish I could vote this up a million times. Moderators, is there a way I could transfer all my reputation points to this chap? Is that allowed, Jeff Atwood?
Vulcan Eager
Tell musicians their music is just a job.
icelava
you are great. Best answer! I've lost my girlfriend to "deep programming" and I will never forgive myself.
ugasoft
I agree, but I believe it can be more than just a job and still rank below family, friends, and happiness. I think that programming is just a job for you, but not for all.
brad
Well, "just a job" is for clock-watchers. Since we spend so much time on it, might as well like it, eh?
Andrei Taranchenko
@icelava: I know some musicians (classical music, bass and violin) to whom it is exactly that: Just a job.
Treb
Programmers who overvalue programming overvalue themselves.
Seventh Element
I disagree... I would still be doing software development, if I didn't have to work for a living... Though, it would be on projects *I* want to work on.
Tracker1
@Seventh Element, I totally agree!!
xoxo
This is something that applies to you. You assume that it automatically applies to everyone. I consider friends and family very important indeed, but I consider doing what I was born to do just as important. I cannot under any circumstance neglect either of them.
Lucas Lindström
I didn't assume anything. It's my opinion. You're free to disagree.
Greg Beech
While a nice blanket statement that has some controversial "pop" to it, your assertion leaves no room for passion. How can *any* work done for money be more than just a job according to your statement? Your opinion devalues the passion of millions and diminishes the efforts of hobbyists who don't even get paid. You are certainly entitled to your opinion, but come on, the world is more complex than that.
Bryan Watts
@Bryan: If I disliked cheese and you liked cheese, saying that because I don't really love cheese my opinion devalues your love of that particular bovine product would seem absurd. Telling people that I don't want to eat cheese for every meal does not diminish your cheese blog, nor your love for cheese, nor the millions of other cheese aficionados. If I want to have crackers with only ham on them, that doesn't stop you at all from making nachos, quesadillas, or cheesecake, and then blogging about your cheese experience, whether it be cheddar, cream, provolone, American, or pepperjack. (Mmmm...)
Robert P
@Robert P what a silly thing to say! If you dislike cheese then why do you eat it on a daily basis? (This is what you are effectively saying...think about it.)
SDX2000
@icelava It's even more true for musicians, since many musicians write their own music rather than just playing other people's.
Brian Ortiz
This is a great controversial topic. Probably should be #1. Personally, I come here to make money, so that I can pay for my kids, and vacation, and food, and beer. I chose a profession that I am passionate about, but at the end of the day, I would rather be teaching HS Math and coaching HS Football. Now, if I could only get the same kind of salary doing those...
amischiefr
icelava, I'm both a performing musician and a software developer. While I enjoy software development and do take an interest in it outside work, it's very obvious to me that it's much closer to "just a job" than music, which is a passion, and which I do gladly for free. I think my comment, your comment, and the parent answer are all a matter of personal values—with consequences for our respective jobs, but otherwise without much room for any kind of controversy.
eyelidlessness
No, you're just a job.
freedrull
Software development is an *art*. At least to some. Which kind of programmer would you rather have on your team?
Loadmaster
It depends how much time you want to put into it outside of work. I like coming to StackOverflow to solve problems while learning something new, and I like reading up on the latest and greatest tools/technologies/techniques. I occasionally write code outside of work, but it's usually on one-off projects that I rarely finish.
baultista
Software development resulted in the internet and many other things that changed the world forever. Is that not important in the grand scheme of things?
Bart van Heukelom
It's not. I'd rather see software developing as an art. Make it a passion and you will become the best.
Exa
+35  A: 

Singletons are not evil

There is a place for singletons in the real world, and methods to get around them (i.e. monostate pattern) are simply singletons in disguise. For instance, a Logger is a perfect candidate for a singleton. Additionally, so is a message pump. My current app uses distributed computing, and different objects need to be able to send appropriate messages. There should only be one message pump, and everyone should be able to access it. The alternative is passing an object to my message pump everywhere it might be needed and hoping that a new developer doesn't new one up without thinking and wonder why his messages are going nowhere. The uniqueness of the singleton is the most important part, not its availability. The singleton has its place in the world.

Steve
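A minimal Python sketch of the message-pump singleton the answer describes (class and method names are hypothetical): the class enforces uniqueness, and the accessor gives global availability.

```python
# Hypothetical singleton message pump: one shared instance, lazily created.
class MessagePump:
    _instance = None

    def __init__(self):
        self.queue = []

    @classmethod
    def instance(cls):
        # All callers get the same pump; nobody can accidentally
        # "new one up" and post messages into the void.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def post(self, message):
        self.queue.append(message)

MessagePump.instance().post("hello")
assert MessagePump.instance() is MessagePump.instance()  # one pump only
print(MessagePump.instance().queue)  # ['hello']
```

As the first comment notes, the same uniqueness can be kept while passing the single instance in via dependency injection, which is friendlier to testing.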
+1 because I disagree so strongly. Singletons (the design pattern) make testing such a nightmare they should never be used. Note that singletons (an object only instantiated once) are fine, but they should be passed in through dependency injection.
Craig P. Motlin
A logger is certainly not a perfect candidate for a singleton. You may want to have two loggers. I've been in that exact situation before. It may be a good candidate for being *global*, but certainly not for being forced into "one instance only". Very few things require that constraint.
jalf
The way I figure it, I've used some singletons in one project, and I might well do so again before I retire. Not the most widely useable patterns, but valuable for some things.
David Thornley
I really recommend reading http://misko.hevery.com/2008/08/25/root-cause-of-singletons/ to you.
codethief
I would like to add that in C++, the singleton pattern is extremely important due to the static initialization fiasco.
rlbond
Logging is the only common use of the singleton pattern, all others uses are mostly bad.
Emmanuel Caradec
I have never found a case of singleton that could not be replaced with a static, besides in languages that do not have a proper static initialization time, bringing on the static initialization fiasco.
kurast
+9  A: 

Rob Pike wrote: "Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming."

And since these days any serious data is in the millions of records, I contend that good data modeling is the most important programming skill (whether using an RDBMS or something like SQLite or Amazon SimpleDB or Google App Engine data storage.)

Fancy search and sorting algorithms aren't needed any more when the data, all the data, is stored in such a data storage system.

Christopher Mahan
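Rob Pike's point in miniature, as a small illustrative sketch: once you choose a dict as the structure, the word-frequency "algorithm" is self-evident.

```python
# Counting word frequencies: the dict is the design decision,
# and the loop that remains is almost too obvious to call an algorithm.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the cat and the hat"))  # {'the': 2, 'cat': 1, 'and': 1, 'hat': 1}
```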
It depends on the rawness of your original data. If the data is accumulated by data entry in a UI it is true. But if you do something like Text Mining you need to process your data first, and algos become more important.
tuinstoel
tuinstoel: ok, but text mining is eminently parallelisable, so the algo should be ultra simple and then be run by a few hundred or a few thousand processes. Image processing needs solid algos though.
Christopher Mahan
I would agree if you also mean that data should be kept as minimal and normalized as reasonable. I see far too much data structure whose ostensible purpose is "better performance" that causes the opposite.
Mike Dunlavey
+1 If I was speaking to an assembly of CS Freshmen my first advice would be to "Know Thou Data_Structures" Amen Brother.
WolfmanDragon
Brooks, in "The Mythical Man-Month", had a comment that he'd be confused if you hid your tables and showed him your flow charts, but if you showed him your tables he wouldn't need to see your flow charts. This should give you an idea of how old this idea is.
David Thornley
+11  A: 

Junior programmers should be assigned to doing object/module design and design maintenance for several months before they are allowed to actually write or modify code.

Too many programmers/developers make it to the 5 and 10 year marks without understanding the elements of good design. It can be crippling later when they want to advance beyond just writing and maintaining code.

kloucks
I will tell you from having dealt with entry-level and junior developers that they learn precisely nothing by performing "maintenance and bug fixes"; they never develop any skills. Letting juniors build something from scratch teaches them an incredible amount in a short period of time.
Juliet
Quite so. Aptitude has very little to do with experience, which often just entrenches bad habits.
ChrisA
I would say the exact opposite. Let them write implementations of existing interfaces, that must pass existing unit tests. They will pick up some design skills just by working with the senior developer's designs for a few months.
finnw
Have to agree with finnw.
Software Monkey
@Juliet, absolute rubbish. When I was an entry-level developer I did maintenance and bug-fix work and learnt directly why consistency and separation of concerns are so essential in software. Maintaining code with "issues" is THE best way to improve your own designs.
Ash
i agree this is very controversial lol
Egg
Nothing teaches you the value of doing things the right way like the pain of doing things the wrong way and then having to live with the results.
Jeremy Friesner
+1  A: 

(Unnamed) tuples are evil

  • If you're using tuples as a container for several objects with unique meanings, use a class instead.
  • If you're using them to hold several objects that should be accessible by index, use a list.
  • If you're using them to return multiple values from a method, use Out parameters instead (this does require that your language supports pass-by-reference)

  • If it's part of a code obfuscation strategy, keep using them!

I see people using tuples just because they're too lazy to bother giving NAMES to their objects. Users of the API are then forced to access items in the tuple based on a meaningless index instead of a useful name.

Roy Peled
I'm glad you qualified this. Thank goodness for Python 2.6 adding [named tuples](http://docs.python.org/library/collections.html#collections.namedtuple).
bignose
Hey that's cool. I didn't know there was a such thing as a named tuple. I think for a tuple-perfect-storm you should design a GUI library in python that expects 2-tuples in x,y and y,x order in various places. :-)
Warren P
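To illustrate the answer and the `namedtuple` comments above, a short sketch contrasting the two styles (the bounding-box functions are hypothetical):

```python
from collections import namedtuple

# The anonymous-tuple style the answer criticises:
def bounding_box_raw():
    return (10, 20, 110, 220)   # caller must remember what each index means

# The named alternative mentioned in the comments (Python 2.6+):
Box = namedtuple("Box", ["left", "top", "right", "bottom"])

def bounding_box():
    return Box(left=10, top=20, right=110, bottom=220)

box = bounding_box()
print(box.right - box.left)  # 100 -- no magic indices
```

A `namedtuple` is still a real tuple, so existing index-based callers keep working while new code gets meaningful names.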
+5  A: 

Goto is OK! (is that controversial enough)
Sometimes... so give us the choice! For example, BASH doesn't have goto. Maybe there is some internal reason for this but still.
Also, goto is the building block of Assembly language. No if statements for you! :)

Lucas Jones
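For contrast with the answer's point, the classic goto use case, escaping nested loops, is usually expressed in goto-less languages as an early return; a Python sketch (function name hypothetical):

```python
# "goto done" expressed as an early return from a helper function:
# the only structured way Python offers to jump out of nested loops.
def find_pair(grid, target):
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            if value == target:
                return i, j   # jumps straight out of both loops
    return None

print(find_pair([[1, 2], [3, 4]], 3))  # (1, 0)
```

Whether this is cleaner than a single labelled jump is exactly the kind of thing the answer is arguing about.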
bash has break n; and continue n; instead. imho the only reason to use goto is when you don't have those (or don't have labelled break/continue)
Johannes Schaub - litb
In assembly everything is implemented as goto (jump/branch). Most languages have if and some form of loop, but many are lacking try/catch or break/continue all of which can be implemented by the goto. Admittedly it can be used really badly so be careful :)
Cervo
I see headaches in making gotos in a language that is parsed while running.
Joshua
@Joshua, you mean interpreted languages? A language like Basic used to be an interpreted language and it certainly had the goto statement. How old are you?
tuinstoel
@Joshua, I'd say it was simpler. I wrote a simple interpreted language (by "simple", I mean "didn't really do anything at all" :D) which had goto. No conditions though.
Lucas Jones
and there are `cmp` statements (`if` statements) in Assembly - otherwise you'd never know when to `jmp`
warren
I suppose.... :)
Lucas Jones
+342  A: 

Readability is the most important aspect of your code.

Even more so than correctness. If it's readable, it's easy to fix. It's also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

Craig P. Motlin
I would temper this statement by replacing "readability" with "modifiability". I've seen entirely too much code that was made "readable" just by puffing it up with whitespace so you could see less of it, and being wordy instead of precise.
Mike Dunlavey
They certainly go hand in hand. And readability is subjective. Sounds like you think that whitespace made the code less readable. That's why a group standard is so important.
Craig P. Motlin
Agreed, this goes along with well factored code and the earlier answer re: comments not being all that useful. In C# 3 I suspect that lots of clever one line LINQ/Lambda expressions are being written which are almost inscrutable and would be more readable in C# 2.
AnthonyWJones
Agreed. One line statements that do 16 things are a horror in the 80% of the life of the code spent in maintenance (especially when the maintenance is the duty of 'lesser' programmers). Write code that can be read by humans not just compilers.
duncan
I wouldn't say that this is overly controversial. Although "readability is more important than correctness" is extremely controversial :-) Your customers may have a different view than you on this :-)
billybob
I would vote this up if I didn't suspect that you are thinking of some One True Brace Style.
Svante
Why do people associate readability so strongly with whitespace? It's a part of it, but a small part.
Craig P. Motlin
If it doesn't run, it doesn't matter.
Lance Roberts
http://www.expatsoftware.com/articles/2007/06/getting-your-priorities-straight.html
Jason Kester
Maintainability > Readability. I can auto-reformat code to make it readable anytime.
thenonhacker
again, readability is not white-space. readability includes level-of-nesting, function length, cyclomatic complexity, variable names, and a bunch of other things.
Jimmy
I agree 100%. Unreadable code puts unnecessary strain on my gray cells.
moffdub
I would say code that works is more valuable than code that looks pretty.
Steve918
If the code is not correct, it is invalid. Code that is unreadable but works is always better than code that is readable but fails to do what it is supposed to do. That said, readable working code is much better than unreadable, non-working code.
Callum Rogers
Assuming you're working on a reasonably big team with typical code-flexibility needs, I agree. Code that's broken but *easy to change* is better than code that works but nobody understands. I'd say "maintainability" rather than "readability" though.
Iain Galloway
A: 

System.Data.DataSet Rocks!

Strongly-typed DataSets are better, in my opinion, than custom DDD objects for most business applications.

Reasoning: We're bending over backwards to figure out Unit of Work on custom objects, LINQ to SQL, Entity Framework and it's adding complexity. Use a nice code generator from somewhere to generate the data layer and the Unit of Work sits on the object collections (DataTable and DataSet)--no mystery.

Mark A Johnson
You've obviously never used a DataSet then :P
Cameron MacFarland
I have to disagree. IMO the DataSet is overkill for the vast majority of operations. And before it's asked, yes, I have used it.
Mike Hofer
By the same reasoning, LINQ to SQL, Entity Framework, NHibernate, etc. are also overkill for the "vast majority" of operations. BTW, did you mean the "vast majority" of all operations or the "vast majority" of places where I'd use DDD?
Mark A Johnson
+10  A: 

Using Stored Procedures

Unless you are writing a large procedural function composed of non-reusable SQL queries, please move your stored procedures out of the database and into version control.

Shawn Simon
I concur: you can't version stored procedures, and having 200+ stored procedures in a large project becomes a maintenance nightmare. Embedded SQL is ok for small projects, but I'd rather use an ORM to write my queries for me.
Juliet
Princess: I must disagree with your statement that you can't version stored procedures. I version them myself by keeping the SQL for them in source code control. If you make a change to the database, re-export the script for it and check it into the repository.
Mike Hofer
I agree about versioning stored procedures. If you are writing SP, you need to take it upon yourself to version them in source control.
casperOne
Out of *your* database? There speaks a 1970s DBA
ChrisA
We can version SPs. The build process moves them from source control into the database.
Joshua
In DB2/400 stored procedures are an interface to native code on the system... In other words, hard to move over to the calling system.
Thorbjørn Ravn Andersen
+76  A: 

I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.

Juliet
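A sketch of the first bullet in practice (function names hypothetical): both versions work, but only the second is type-stable in the sense the answer describes.

```python
# Re-using one name across several types: legal, but the kind of
# style the answer calls out as bad even in a dynamic language.
def parse_sizes_bad(raw):
    data = raw.split(",")          # data is a list of str
    data = [int(x) for x in data]  # now a list of int
    data = sum(data)               # now an int!
    return data

# Type-stable version: each value gets its own, consistently typed name.
def parse_sizes(raw):
    fields = raw.split(",")
    sizes = [int(x) for x in fields]
    total = sum(sizes)
    return total

print(parse_sizes("1,2,3"))  # 6
```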
100% right. If only the Python developers would finally acknowledge this and change their otherwise exceptional language accordingly. Thanks for posting this.
Konrad Rudolph
But there is already one statically-typed Python-like language. Tt's called C# ;-)
zuber
C# is python-like? Maybe you meant Boo ;)
Juliet
If anyone says dynamic typing is more terse, just point them to Haskell =). I agree with all but your 3rd bullet point. Dynamic code often accepts parameters that can be one of two types. For example, Prototype functions accept either HTMLElements, or strings which you can use $() to look up to get HTMLElements. A good static typing system will allow you to do this =).
Claudiu
#2 is only true if you follow #1, which in my opinion is unnecessary. If it's clear what the code does, then it is correct. I have a code I use a lot that reads in data from a tab delimited file, and parses that into an array of floats. Why do I need a different variable for each step of the process? The data(as the variable is called) is still the data in each step.
notJim
+70  A: 

Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but it doesn't mean that all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely: "be consistent", sound as it is, is used as a crutch by many to never try to see if their default style can be improved on - and that, furthermore, it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.
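To make the style concrete, here is a small, hypothetical Java sketch in the spirit described: each argument on its own line, identifiers grouped into an aligned column so the declaration reads as one rectangular block the eye can take in at once:

```java
public class Layout {
    // Each argument on its own line, names aligned into a column --
    // one "rectangular island" rather than a single long line.
    public static double monthlyPayment(
            double principal,
            double annualRate,
            int    months) {
        double r = annualRate / 12.0;
        // Standard amortized-loan payment formula.
        return principal * r / (1.0 - Math.pow(1.0 + r, -months));
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", monthlyPayment(10000.0, 0.06, 24));
    }
}
```

The specifics (two columns, alignment on type and name) are one possible reading of the approach, not a transcription of the author's exact rules.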

Phil Nash
Generally when things are aligned in a columnar way it creates a maintenance burden for a developer. Ie aligning the data type and identifier in a method declaration... Line1(int id,) line 2(char id,) ... making sure the data type, variable name, and even commas all are in a column is a MESS
Cervo
it usually just takes a couple of extra keypresses, if that. I didn't go into too many specifics, but I usually only break it into two columns for alignment purposes (usually type - id). I have some other rules to ease the burden where parentheses are concerned. The biggest obstacle I have [cont...]
Phil Nash
[...cont] is fighting against auto-formatting editors. In fact, unless it's easy to disable I usually give up in those circumstances and "go with the flow". But with especially verbose languages like C++ I still prefer it.
Phil Nash
Interesting. I would like to see some examples. Do you have a blog?
Jay Bazuzi
Well, I have: http://www.levelofindirection.com (yes, it forwards to blogspot - the pun *was* intended), and also http://organic-programming.blogspot.com . However, you'll notice neither have been updated for quite a while - due in large part to http://www.vconqr.com ;-) [cont...]
Phil Nash
[...cont] - and I don't mention the layout stuff on either. I'll consider myself prodded - again!
Phil Nash
Code formatting matters so much, it doesn't matter at all. By that I mean that editors should always reformat code when you load it, and SCM systems should reformat to a canonical style on checkin. Then everyone sees the code the way that works best for them.
Kendall Helmstetter Gelner
@Kendall: Sounds nice. It's hard, though, because you have to be able to specify the exact formatting of every possible bit of code, including code that isn't legal in the language!
Jay Bazuzi
This is a pretty much standard opinion. Or, at least, it should be. If this is controversial, then there is a problem.
Eduardo León
+10  A: 

The ability to create UML diagrams similar to pretzels with mad cow disease is not actually a useful software development skill.

The whole point of diagramming code is to visualise connections, to see the shape of a design. But once you pass a certain rather low level of complexity, the visualisation is too much to process mentally. Making connections pictorially is only simple if you stick to straight lines, which typically makes the diagram much harder to read than if the connections were cleverly grouped and routed along the cardinal directions.

Use diagrams only for broad communication purposes, and only when they're understood to be lies.

RoadWarrior
+9  A: 

How about this one:

Garbage collectors actually hurt programmers' productivity and make resource leaks harder to find and fix

Note that I am talking about resources in general, and not only memory.
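The distinction can be sketched in Java (hypothetical names): the garbage collector will eventually reclaim a stream *object*, but only an explicit close releases the underlying file handle at a predictable time. That is exactly the class of leak GC makes easy to miss:

```java
import java.io.*;
import java.nio.file.*;

public class Resources {
    // Helper: create a throwaway file with known content (unchecked for brevity).
    public static Path sample(String content) {
        try {
            Path p = Files.createTempFile("demo", ".txt");
            Files.writeString(p, content);
            return p;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The GC reclaims the reader object eventually, but the OS file handle
    // stays open until close() runs -- and GC gives no timing guarantee.
    public static String readLeaky(Path p) {
        try {
            BufferedReader r = new BufferedReader(new FileReader(p.toFile()));
            return r.readLine(); // handle leaked: nobody ever calls close()
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deterministic release: try-with-resources closes on scope exit --
    // the discipline RAII applies to every resource kind, not just memory.
    public static String readSafe(Path p) {
        try (BufferedReader r = new BufferedReader(new FileReader(p.toFile()))) {
            return r.readLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readSafe(sample("hello\n")));
    }
}
```

Run `readLeaky` in a tight loop and the process exhausts file descriptors long before the GC feels any memory pressure, which is the answer's point about non-memory resources.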

Nemanja Trifunovic
Would you mind justifying that?
Juliet
I've seen 50mb leaked bescause some library programmer hooked an event and didn't make absolutely sure to unhook it.
Joshua
now imagine you have 8gb ram
01
8gb RAM is nothing to a repetitive leak on a server under high load.
Kendall Helmstetter Gelner
I guess it refers to the RAII idiom. In that case I must adhere to the proposal. RAII is a solution for all resources, GC is a partial solution for memory resources only.
David Rodríguez - dribeas
+1 to that. Before GC, programmers took care of leaks before deployment. These days, applications are deployed and then when 100 users are using the application, we discover that we've run out of database connections.
Vulcan Eager
Anyone who expects garbage collection to handle all resource management has desperately misunderstood garbage collection. GC is only for managing *memory*
benjismith
I'd give a +1 if you had said: "GC, because it's not available for all resources; only memory. So you can leak DB connections." GC has solved 100 issues and introduced 20 new ones, so it's still an advantage.
Aaron Digulla
Which "100 issues"? It has solved only one - memory management, and IMHO even that poorly.
Nemanja Trifunovic
Wait, memory management needed to be solved?
GMan
+12  A: 

SQL could and should have been done better. Because its original spec was limited, various vendors have been extending the language in different directions for years. SQL written for MS-SQL is different from SQL for Oracle, IBM, MySQL, Sybase, etc. Other serious languages (take C++ for example) were carefully standardized so that C++ written under one compiler will generally compile unmodified under another. Why couldn't SQL have been designed and standardized better?

HTML was a seriously broken choice as a browser display language. We've spent years extending it through CSS, XHTML, Javascript, Ajax, Flash, etc. in order to make a usable UI, and the result is still not as good as your basic thick-client Windows app. Plus, a competent web programmer now needs to know three or four languages in order to make a decent UI.

Oh yeah. Hungarian notation is an abomination.

Kluge
+1 for the abomination. Anything that's harder to read than write has got to be wrong.
ChrisA
This is a statement that two things that had been around for a long time, and have been heavily used, would be much better done if they'd known then what we know now. That is much closer to being a tautology than a controversy.
David Thornley
html layout is a lot easier than assembling widgets in C++
hasen j
+6  A: 

Globals and/or Singletons are not inherently evil

I come from more of a sysadmin, shell, Perl (and my "real" programming), PHP type background; last year I was thrown into a Java development gig.

Singletons are evil. Globals are so evil they are not even allowed. Yet, Java has things like AOP, and now various "Dependency Injection" frameworks (we used Google Guice). AOP less so, but DI things for sure give you what? Globals. Uhh, thanks.
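A minimal Java sketch of the point (all names hypothetical): a `public static final` global and constructor injection both end up with one shared value; what DI adds is making the wiring explicit and swappable in tests, which is exactly the trade-off this answer is disputing:

```java
public class Config {
    // A plain global: one shared, immutable value, visible everywhere.
    public static final String DB_URL = "jdbc:example://localhost/app";

    // The DI alternative: the same value threaded through a constructor.
    public static class Repository {
        private final String dbUrl;
        public Repository(String dbUrl) { this.dbUrl = dbUrl; } // injected
        public String describe() { return "repository -> " + dbUrl; }
    }

    public static void main(String[] args) {
        // Both paths reach one shared value; DI just makes the dependency
        // visible in the constructor signature.
        Repository viaGlobal = new Repository(Config.DB_URL);
        Repository viaInjection = new Repository("jdbc:example://testdb");
        System.out.println(viaGlobal.describe());
        System.out.println(viaInjection.describe());
    }
}
```

Whether that explicitness is worth the framework machinery is the controversial part; the mechanics themselves are this simple.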

Jeff Warnica
I think you have some misconceptions about DI. You should watch Misko Hevery's Clean Code talks.
Craig P. Motlin
I agree about globals. The problem is not the concept of a global itself, but what type of thing is made global. Used correctly, globals are very powerful.
PhoenixRedeemer
Perhaps I am. But if you had globals, you wouldn't need DI. I'm entirely prepared to believe that I'm mis-understanding a technology that solves a self-imposed problem.
Jeff Warnica
We use Globals all the time in java, every time we use a final public static in place of a Constant (C, C++, C#). I think the thought is that if it needs to be global then it should be a static. I can (Mostly) agree with this.
WolfmanDragon
+4  A: 

The class library guidelines for implementing IDisposable are wrong.

I don't share this too often, but I believe that the guidance for the default implementation for IDisposable is completely wrong.

My issue isn't with the overload of Dispose and then removing the item from finalization, but rather, I despise how there is a call to release the managed resources in the finalizer. I personally believe that an exception should be thrown (and yes, with all the nastiness that comes from throwing it on the finalizer thread).

The reasoning behind it is that if you are a client or server of IDisposable, there is an understanding that you can't simply leave the object lying around to be finalized. If you do, this is a design/implementation flaw (depending on how it is left lying around and/or how it is exposed), as you are not aware of the lifetime of instances that you should be aware of.

I think that this type of bug/error is on the level of race conditions/synchronization to resources. Unfortunately, with calling the overload of Dispose, that error is never materialized.

Edit: I've written a blog post on the subject if anyone is interested:

http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx
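A rough Java analogue of the proposal (hypothetical names; Java has no IDisposable, so this only approximates the idea): instead of quietly cleaning up a resource that was never disposed, surface the lifetime bug loudly when the leak is detected:

```java
public class Handle implements AutoCloseable {
    private boolean closed = false;

    @Override
    public void close() { closed = true; }

    // Analogue of throwing from the finalizer: if the object is being
    // discarded without close(), report the design flaw loudly rather
    // than silently releasing the resource on the owner's behalf.
    public void verifyDisposed() {
        if (!closed) {
            throw new IllegalStateException(
                "Handle leaked: close() was never called");
        }
    }

    public static void main(String[] args) {
        Handle good = new Handle();
        good.close();
        good.verifyDisposed(); // fine: lifetime was managed correctly

        Handle leaked = new Handle();
        try {
            leaked.verifyDisposed(); // throws: the leak becomes visible
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The C# guidance the answer criticizes instead calls Dispose-style cleanup from the finalizer, so the bug never materializes; this sketch shows the fail-loud alternative being argued for.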

casperOne
I like it! Now I wish that all the IDisposable objects in the framework would do this.
Jay Bazuzi
On a related note, MemoryStream is disposable but safe to leak. Think about it.
Joshua
Joshua: The fact that MemoryStream is disposable is an implementation detail, and as we all know, it's not good practice to rely on implementation details if you don't have to. It could very easily be changed to use an unmanaged memory pointer for its buffer in the future. Think about that. =)
casperOne
I would prefer that all types that implement IDisposable were forced to be stack allocated, or some similar concept.
Daniel Paull
+95  A: 

SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.
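Searching is another common case where the early return wins (a point several comments below make about loops). A sketch comparing the two styles, in the same vein as the example above:

```java
public class Search {
    // Early exit: return the moment the answer is known.
    public static int indexOf(int[] xs, int target) {
        for (int i = 0; i < xs.length; i++) {
            if (xs[i] == target) {
                return i; // a second exit point -- and clearer for it
            }
        }
        return -1;
    }

    // Strict SESE version: a result variable carried all the way
    // down to the single return, plus an extra loop condition.
    public static int indexOfSese(int[] xs, int target) {
        int result = -1;
        for (int i = 0; i < xs.length && result == -1; i++) {
            if (xs[i] == target) {
                result = i;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] xs = {5, 8, 13};
        System.out.println(indexOf(xs, 8));      // 1
        System.out.println(indexOfSese(xs, 99)); // -1
    }
}
```

Both behave identically; the SESE version simply pays for the single exit with extra state and a more complicated loop condition.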

javamonkey79
If only I knew what SESE is?
tuinstoel
I found it: Single Entry Single Exit !!
tuinstoel
what the hell is SESE?
hasen j
I guess, that in other words it is "function should have only one return statement" - never agreed with that one.
Rene Saarsoo
Moreover, an exception is just another exit point. When functions are short and error-safe (-> finally, RAII), there is no need to follow SESE.
Luc Hermitte
Agreed. I cringe at the 100+ loc methods I've seen that carry a return value from the first line all the way to the bottom just to adhere to SESE. There is something to be said for exiting when you find the answer.
Rontologist
wow .. whoever came up with SESE must be a world class idiot
hasen j
Totally agree on that one, I was about to add it onto this post, you beat me to it ;)
dbones
Wait people actually do this? Why can't you just search for "return"?
nosatalian
SESE is law in unmanaged code, but in managed code it isn't, some post somewhere here in SO explains it better
Jader Dias
I'd like to see that post, but admittedly, my opinion comes from a strict managed code domain.
javamonkey79
This might be useful when your debugger only has a maximum of two breakpoints. Very common in embedded hardware environments.
Casey
I think SESE is a great example of a solution in search of a problem
Kevin Laity
SESE dates back to 1960s and structured programming. it made a lot of sense then. single entry is pretty much guaranteed today, clinging to single exit just betrays low iq.
just somebody
It only makes sense if it's SESRP: Single Entry, Single Return Point. This was important in languages like BASIC where you could GOTO here, there, and everywhere. Better practice was to always return where you came from, using GOSUB instead of GOTO. With modern programming languages this isn't so much of an issue...which seems to be how the sensible "return where you came from" morphed into the awful "exit from only one point of the method".
Kyralessa
+34  A: 

Null references should be removed from OO languages

Coming from a Java and C# background, where it's normal to return null from a method to indicate a failure, I've come to conclude that nulls cause a lot of avoidable problems. Language designers can remove a whole class of errors related to NullReferenceExceptions if they simply eliminate null references from code.

Additionally, when I call a method, I have no way of knowing whether that method can return null references unless I actually dig in the implementation. I'd like to see more languages follow F#'s model for handling nulls: F# doesn't allow programmers to return null references (at least for classes compiled in F#), instead it requires programmers to represent empty objects using option types. The nice thing about this design is how useful information, such as whether a function can return null references, is propagated through the type system: functions which return a type 'a have a different return type than functions which return 'a option.
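Java later grew an approximation of this idea in `java.util.Optional` (which postdates this discussion). A minimal sketch of how the absence case becomes visible in the return type itself, with hypothetical names:

```java
import java.util.Map;
import java.util.Optional;

public class Lookup {
    static final Map<String, String> USERS = Map.of("alice", "Alice A.");

    // The return type advertises that the value may be absent -- exactly
    // the information a bare, possibly-null reference hides.
    public static Optional<String> findUser(String id) {
        return Optional.ofNullable(USERS.get(id));
    }

    public static void main(String[] args) {
        // The caller must handle the empty case explicitly.
        System.out.println(findUser("bob").orElse("<unknown>"));
        System.out.println(findUser("alice").map(String::toUpperCase).orElse("?"));
    }
}
```

Unlike F#'s option type, nothing in Java stops a method from returning a bare null anyway, so the guarantee is by convention only, but the type-level signal is the same.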

Juliet
An interesting link to confirm your point of view: http://sadekdrobi.com/2008/12/22/null-references-the-billion-dollar-mistake/
Nemanja Trifunovic
Nemanja: Fascinating find, too bad I can't upvote comments :)
Juliet
I would rather have "non-nullable reference types" (with compiler checking) than completely remove null.
Jon Skeet
I have to agree with Jon; "null" is frequently a valid state and indicates something completely different from zero or empty. Eliminating it would be a mistake IMO; but for those cases where it's not appropriate, a non-nullable object type would be nice.
Mike Hofer
Correction: a non-nullable reference.
Mike Hofer
I disagree, but then I use Objective-C where nil is quite a handy concept.
Graham Lee
This is like prohibiting zero to prevent divide-by-zero errors. Nulls happen in real-world situations and forbidding them would force everyone to hand roll their own ad hoc implementations.
Dour High Arch
I really like Scala's approach to this: there is no null, and if you want the same effect you have to wrap it in an Option[T] object (either Some[T] or None) which forces you to notice and check it. No more accidental nulls.
Marcus Downing
I don't necessarily agree that they should be removed, but I do think the Null Object Pattern should be preferred over checking for null every four lines in your code.
moffdub
Princess, if you like Nemanja's link you can edit your answer and include it
MarkJ
Agree with Jon. It should be possible to have the language enforce that a given variable can never be assigned null.
Thorbjørn Ravn Andersen
The problem is your strongly typed language, not null. In a language where null is a valid value and calling any method on null returns null is great.
drawnonward
+5  A: 

Opinion: Data driven design puts the cart before the horse. It should be eliminated from our thinking forthwith.

The vast majority of software isn't about the data, it's about the business problem we're trying to solve for our customers. It's about a problem domain, which involves objects, rules, flows, cases, and relationships.

When we start our design with the data, and model the rest of the system after the data and the relationships between the data (tables, foreign keys, and x-to-x relationships), we constrain the entire application to how the data is stored in and retrieved from the database. Further, we expose the database architecture to the software.

The database schema is an implementation detail. We should be free to change it without having to significantly alter the design of our software at all. The business layer should never have to know how the tables are set up, or if it's pulling from a view or a table, or getting the table from dynamic SQL or a stored procedure. And that type of code should never appear in the presentation layer.

Software is about solving business problems. We deal with users, cars, accounts, balances, averages, summaries, transfers, animals, messsages, packages, carts, orders, and all sorts of other real tangible objects, and the actions we can perform on them. We need to save, load, update, find, and delete those items as needed. Sometimes, we have to do those things in special ways.

But there's no real compelling reason that we should take the work that should be done in the database and move it away from the data and put it in the source code, potentially on a separate machine (introducing network traffic and degrading performance). Doing so means turning our backs on the decades of work that has already been done to improve the performance of stored procedures and functions built into databases. The argument that stored procedures introduce "yet another API" to be managed is specious: of course it does; that API is a facade that shields you from the database schema, including the intricate details of primary and foreign keys, transactions, cursors, and so on, and it prevents you from having to splice SQL together in your source code.

Put the horse back in front of the cart. Think about the problem domain, and design the solution around it. Then, derive the data from the problem domain.
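A minimal Java sketch of the facade idea (all names hypothetical): the business rule depends only on a domain-level interface, so whether the implementation behind it uses tables, views, or stored procedures can change without touching the business layer:

```java
public class Accounts {
    // The business layer sees only this facade; whether it is backed by a
    // table, a view, or a stored procedure is invisible here.
    public interface AccountRepository {
        long balanceOf(String accountId);
        void transfer(String from, String to, long amount);
    }

    // A business rule expressed purely in domain terms.
    public static void settle(AccountRepository repo,
                              String from, String to, long amount) {
        if (repo.balanceOf(from) < amount) {
            throw new IllegalArgumentException("insufficient funds");
        }
        repo.transfer(from, to, amount);
    }

    // An in-memory stand-in; a JDBC or stored-procedure implementation
    // could replace it without settle() changing at all.
    public static class InMemoryRepository implements AccountRepository {
        public final java.util.Map<String, Long> balances = new java.util.HashMap<>();
        public long balanceOf(String id) { return balances.getOrDefault(id, 0L); }
        public void transfer(String from, String to, long amount) {
            balances.merge(from, -amount, Long::sum);
            balances.merge(to, amount, Long::sum);
        }
    }

    public static void main(String[] args) {
        InMemoryRepository repo = new InMemoryRepository();
        repo.balances.put("a", 100L);
        settle(repo, "a", "b", 40L);
        System.out.println(repo.balanceOf("a") + " " + repo.balanceOf("b"));
    }
}
```

This is one reading of "derive the data from the problem domain": the schema sits behind the interface, not in front of it.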

Mike Hofer
I agree with the principle, but the problem is in real-world IT development you often have existing data stores that you must make use of - while total constraint to existing code might be bad, you can save a ton of development effort if you conform to data standards that exist when you can.
Kendall Helmstetter Gelner
Hey, someone who understands the real purpose of stored procedures!
Lurker Indeed
Hmmm. Take the data out of a system and what do you have? A system that computes nothing. Put bad data into your system and what happens? Crash. Analogy: Bake your bricks (create strong data types) and mix your cement (enforce the constraints), then design/build your system with perfect blocks.
Triynko
+259  A: 

PHP sucks ;-)

The proof is in the pudding.

php sucks! justification? just use it for a while. it SUCKS!!!!1111 +10 (I wish hehe)
hasen j
So true - just try it, after having used a "normal" language that actually has rules that it follows.
Evgeny
Justification? How about the complete inability to find out that you typoed a variable name at compile time (well, syntax-check time, with PHP) instead of runtime? Even Perl has 'use strict', and Perl catches so much flak it's barely funny.
chaos
I could post "Perl sucks!" but that would start a flame-war. :-)
staticsan
How is that controversial? Anyone who uses PHP will agree with you!
comingstorm
it is controversial ... lots of people defend PHP! it's crazy, I know! what the hell are they thinking?
hasen j
It _can_ suck, especially in the hands of the inexperienced, where it spends most of its time. But, really, PHP 5 with the right framework can be fantastically productive. You can shoot yourself in the foot with it, but you can do that in any language.
postfuturist
Set error_reporting to E_ALL, and you will get a warning on using an uninitialised variable. I assume that's what you meant by typoed variable name?
troelskn
Upvoted! Couldn't agree more, I've been saying this for 7 years now and finally people are starting to agree with me!
nerdabilly
@troelskn: Yes, you get this, as you say, ON USING the variable, i.e. at runtime. I quite specifically described the ability to find out that the variable was typoed prior to runtime, which even as maligned a language as Perl gives me.
chaos
@chaos, why would you even want to do that? Nothing happens before runtime. If your code screws everything up in, e.g., a database because you typoed a variable, then it's bad code; that's your fault, not PHP's.
Pim Jager
dont really see the reason. How many languages did you try before php?
Quamis
So what if I can't see that I typoed a variable name until runtime? 'Runtime' happens for me at the same step that compile time happens for you. If a good developer is writing PHP then a) (s)he'll use a good IDE that won't let them make that kind of mistake b) the code won't actually touch...
Unkwntech
(continued) anything mission critical until it has been verified to be in working order, BAD CODE CAN BE WRITTEN IN ANY LANGUAGE! ffs
Unkwntech
Does "function blah() { return array(1,2,3); }; print blah()[1];" work already? If not: SUCKS SUCKS SUCKS. :-)
pi
Programmers that still spend their time putting down languages are wasting away precious moments they could be using to increase their skills. "Men have become the tools of their tools." -Henry David Thoreau
Lusid
Jeff on Coding Horror: PHP sucks, but it doesn't matter http://www.codinghorror.com/blog/archives/001119.html
MarkJ
It sucks because it's not Micro$oft?
Brock Woolf
You people just boggle my mind.
chaos
Henry David Thoreau sucked too. He mooched off his family while suggesting that the government should raise children instead of the family. PHP is the worst language ever.
WolfmanDragon
@Brock VB sucks too! If I want to use basic, give me back my spaghetti bowl and let me write my GOTO's.
WolfmanDragon
The cliche you're looking for is "The proof of the pudding is in the eating."
Daniel Earwicker
Other than the module notation being very flat and not OO, which makes it difficult to use, PHP as a language isn't bad... though the APIs are fairly inconsistent...
Tracker1
I don't see the problem here. I find PHP easy to use... variable types have never been an issue for me.
Mark
I thought these opinions were supposed to be controversial? PHP sucks seems more like a statement of fact :-).
Travis
PHP sucks, but it's still a good language. If you don't understand that statement, or don't agree with it, you haven't been writing PHP long enough.
notJim
I use PHP! You can be as productive as you want and write great code in PHP. Its possible. Really. However, it lacks cohesiveness and elegance for a language that I would _enjoy_ on day to day use. So to generalize, I use it every day, and IT SUCKS!
Nick
Please someone down-vote this answer. PHP's simplicity outweighs its non-object-orientedness. So what that it uses global functions? Even object-oriented approaches are forced to use global singletons.
AareP
Badly written PHP sucks... it's just a shame that there are so many examples of it.
HorusKol
I worked on a Web project where the back end was written with PHP. As a result, whenever I'm asked about PHP, I describe it as "Perl with a lobotomy."
BlairHippo
It sucks generally speaking, but using it doesn't have to.
Brian Ortiz
All languages suck when they're used by apes.
Sohnee
*ahem* ...I love PHP. Really.
Pedro Ladaria
It may suck but you can't ignore its use everywhere on the net :-)
Hannes de Jager
It's not *pudding*, it's [web] *soup*!
Chris
Given an option, I'll always take ASP.NET
baultista
Hahaha well said OP, I've used PHP once like 7 years ago because my boss didn't know sh*t about languages and it truly sucked. Sure you can always use the donkey wisely if you really really want to but who does seriously? All you'll find on the web is lousy code that looks indented by a retard. I think the fact the language acronym means Pretty Home Page tells all the story.
I love this, how people can hate a language because it doesn't hold their hand and tell them the second they have done something wrong. Learn how to use a standard variable naming convention and learn to spell correctly. Problem solved.
pondpad
just because you're allowed to do a code mess in php, it doesn't mean that you should do it or that you're going to do it, so it's not PHP fault, it's developers that give php a bad fame.. people usually think that php sucks because the think that all php developpers mix html code with php code on the same file
pleasedontbelong
PHP sucked less than ASP. But that was decades ago...
burkestar
+32  A: 

You need to watch out for Object-Obsessed Programmers.

e.g. if you write a class that models built-in types such as ints or floats, you may be an object-obsessed programmer.

Ferruccio
If your language claims to be OO but has built-in types that are syntactically and semantically different from objects, and you think this is just fine, you may be a Java or C++ programmer.
Barry Brown
@Barry! What about us Objective-C programmers! That might be us too!
Kendall Helmstetter Gelner
C++ is multiparadigm, and as such it can decide to use whatever types it wants :P
David Rodríguez - dribeas
Object orientation is a means to a goal and not a goal in and of itself.
Seventh Element
A: 

I know everything there is to know about everything.

jTresidder
why do they downvote this? i often hear ppl say that and i find it indeed controversial. +1 of course... oh wait.. no you should add some text explaining your point. ill take my vote back haha :)
Johannes Schaub - litb
I got the point, but I'm not sure I've got the point.
andyk
What do you know about me?
Seventh Element
I know you don't get self-deprecating humour, for a start. I think I must have got lost... I could have sworn this was SO, not YouTube, but the commentary around here recently has got me wondering. Heads go on the top guys, where you've got 'em is bad for your neck.
jTresidder
hehe jTresidder not that bad your quote, it's the first time I see something so down voted, I up vote ;p
Nicolas Dorier
"the more you know you know, the more you know you don't know" - so anyone claiming he knows everything actually knows nothing.
alexanderpas
+456  A: 

Print statements are a valid way to debug code

I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through in a debugger, and you can compare printed outputs against other runs of the app.

Just make sure to remove the print statements when you go to production (or better, turn them into logging statements)
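One low-effort middle ground, sketched in Java (a toy stand-in for a real framework such as log4j, with hypothetical names): route the print statements through a single gated helper, so they can be switched off, or redirected, in one place instead of hunted down before release:

```java
import java.util.ArrayList;
import java.util.List;

public class Debug {
    // One switch instead of hunting for stray printlns before release.
    public static boolean enabled = true;
    // Captured copies, handy for comparing against other runs of the app.
    public static final List<String> captured = new ArrayList<>();

    public static void log(String msg) {
        if (enabled) {
            captured.add(msg);
            System.out.println("[debug] " + msg);
        }
    }

    public static void main(String[] args) {
        int total = 0;
        for (int i = 1; i <= 3; i++) {
            total += i;
            log("after i=" + i + " total=" + total); // the 'print statement'
        }
        System.out.println(total);
    }
}
```

A real logging framework adds levels, per-class filtering, and file output on top of exactly this gating idea.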

David
Absolutely, Make them into logging statements to begin with, and make them output to screen during dev.
Christopher Mahan
Definitely. I did this for years out of necessity and often by preference now--having it all sitting there in a logfile often gives a lot more information than stepping through it.
Loren Pechtel
Yes, thus there are logging frameworks that make this process more organized.
thenonhacker
I agree, just remove them prior to checkin. People who leave Debug and even Console code in production deserve beatings.
Quibblesome
Depending on your language / platform / application style, it is often your only choice.
postfuturist
@Quarrelsome, I disagree. If you make them really useful and structure them properly then I say leave them in and create a way to toggle them on when needed. Sometimes problems only happen in the customers environment.
bruceatk
Until you forget to delete a debug statement and it goes to production, or delete an actual statement with a debug statement, when you are tired. Logging, dedicated debug output routines, and debuggers are your friends.
Andrei Taranchenko
SOMETIMES, it's the only way. Not all the time, but sometimes...
LarryF
Just remember to clean the bastards up, or at least include in the debug statement where it is being called in the code, otherwise you'l spend hours trying to find them to delete them later
johnc
Only very rarely do I use this technique to debug code. I personally see it as the cave-man method for debugging and for big applications large sections of logging files quickly become incomprehensible to anyone but the developer that included the logging statements.
Seventh Element
@Diego, you should look at log4j or log4net or whatever, their filtering of log comments (and levels) makes it trivial to dig into the appropriate area of even the largest app.
Si
This is called stubbing and it rocks! Debuggers suck. Period. Finding a problem in a large loop, say one with 1000 iterations? Yeah, right! With stubbing I can go directly to the problem. Of course, knowing the proper way to stub is a different subject altogether.
WolfmanDragon
@WolfmanDragon: a good debugger lets you break on the nth iteration of a loop. Eclipse does this for Java and I use it all the time
Laplie
I sometimes work on a platform so archaic (and yet a cash cow, which is usefully in this economy) that the debugger takes around about 5 hours to set up (no joke!). Debugging via printfs and similar is essential.
Kaz Dragon
Or create a unit test !
Yassir
not in GLSL... ..
shoosh
Every time you consider writing a debug printout, consider writing a unit-test instead. I've found I use far less time that way.
Markus Koivisto
You mean there is another way besides printing messages ????
RN
I concur with Andrei. In Java at least there's no excuse for writing 'System.out.println("foo")' versus 'LOG.debug("foo")'
Jherico
As far as this point is concerned, one might forget removing such print statements when going to release. Can't we just enclose them in " #ifdef TEST <newline> printf(something); <newline> #endif "? Later we can pass TEST to the compiler, like " gcc -o exe -DTEST exe.c ". It will print all statements enclosed with "#ifdef TEST". When going to production we simply have to remove "-DTEST" from the compile command so those print statements will not make it into the released version.
Andrew-Dufresne
You're off target here--don't write to the screen unless you're looking at some sort of interaction or timing issue. Write to a log file set up for the purpose. Get rid of the file itself before release and there's no chance you'll leave behind any writes and by having it in a file you can go forward and backwards.
Loren Pechtel
Even when I was writing C code, I used this technique for 90% of my debugging. Back then I had a define macro that made the printf statements disappear when compiling the production code. It's amazing how many bugs just jump out at you when your printf statements say that it had to go wrong in one of these two or three lines.
Michael Dillon
println()s give you the *history* of the execution, which is something you don't get with interactive debuggers.
Loadmaster
I had a case once where I wished I could do printf() debugging. All I could do was twiddle bits that were wired to four LEDs.
Joshua
"When in doubt, print more out!"
Garen
Aaaah.. threading!
Partial
This is completely wrong! So I had to gave it a +1... intricate...
Danvil
Logging can't find segfaults the way debuggers can.
Ken Bloom
As a program grows, these can be difficult to track. I once encountered a crash in Firefox on Linux, and was presented with an alert message displaying the call stack. That's never a good thing.
baultista
no System.out please, the Logger is there for a purpose !!!
Dapeng
Horrible opinion...+1. :) I dislike working on code littered with print statements and logging code.
dgnorton
+6  A: 

I think that using regions in C# is totally acceptable to collapse your code while in VS. Too many people try to say it hides your code and makes it hard to find things. But if you use them properly they can be very helpful to identify sections of code.

Jeremy Reagan
IMHO Regions are great for one thing... visualizing code rot.
Gavin Miller
Hah, LFSR. Jeremy, your code is too big.
Jay Bazuzi
Never gotten used to them, don't use them, but it may just be me.
Seventh Element
Regions are the thing I miss most about VS (I use Eclipse). Instead of using regions, we make methods that have calls to methods that have calls to methods... just so we can read the darned things. Regions are GOOD! +1
WolfmanDragon
+7  A: 

Relational databases are awful for web applications.

For example:

  • threaded comments
  • tag clouds
  • user search
  • maintaining record view counts
  • providing undo / revision tracking
  • multi-step wizards
+1 always surprised that OODBs didn't take off for web apps
Graham Lee
The reason OODBs didn't take off for web apps is because web apps are the single area where scalability and speed matter most - and OODBs fall flat when load gets high. That's why MySQL took off instead of something more robust like Postgres, because of sheer read speed and scalability.
Kendall Helmstetter Gelner
Kendall, that's just trash. The biggest databases in the world have traditionally been OODBs. They handle all kinds of workload.
nes1983
Only deep ignorance can prevent someone from implementing such things even in SQL, which is a badly designed language and not faithful to the relational data model.
MaD70
+2  A: 

To Be A Good Programmer really requires working in multiple aspects of the field: Application development, Systems (Kernel) work, User Interface Design, Database, and so on. There are certain approaches common to all, and certain approaches that are specific to one aspect of the job. You need to learn how to program Java like a Java coder, not like a C++ coder and vice versa. User Interface design is really hard, and uses a different part of your brain than coding, but implementing that UI in code is yet another skill as well. It is not just that there is no "one" approach to coding, but there is not just one type of coding.

+5  A: 

Not very controversial AFAIK but... AJAX was around way before the term was coined and everyone needs to 'let it go'. People were using it for all sorts of things. No one really cared about it though.

Then suddenly POW! Someone coined the term and everyone jumped on the AJAX bandwagon. Suddenly people are now experts in AJAX, as if 'experts' in dynamically loading data weren't around before. I think it's one of the biggest contributing factors leading to the brutal destruction of the internet. That and "Web 2.0".

Dalin Seivewright
Couldn't agree with this more! It shows just how fashion conscious our industry really is. When I looked into what all the AJAX fuss was about I discovered I had already been doing it for 2 years. But it takes a marketing style buzzword to make stuff happen.
AnthonyWJones
A vision on the history of AJAX: http://www.theregister.co.uk/2008/11/27/microsoft_ignored_ajax/
tuinstoel
I remember when it was called DHTML :P
Kronikarz
A: 

Not everything needs to be encapsulated into its own method. Sometimes it is OK to have a method do more than one thing.

Jeremy Reagan
reminds me of an old manager of mine who abstracted himself out of a job. He spent months abstracting an app to make it "perfect" but in the end got nothing done.
Neil N
+93  A: 

You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret

Kyralessa
I know how to type (was an army teleprinterist) but I insist it makes no difference whatsoever.
Nemanja Trifunovic
Nemanja -> "no difference whatsoever"?! I just got 70 wpm on an online test. I could see how someone could scrape by at 20-30 wpm, but if they are using two fingers, plugging away at 5 wpm (yes, I've worked with people like that), it's holding them back.
KeyserSoze
No difference whatsoever. I don't even know what my current wpm level is, because I completely lost interest in it. Surely it is useful to type quickly when you are writing documentation or answering e-mails, but for coding? Nah. Thinking takes time; typing is insignificant.
Nemanja Trifunovic
Well, if your typing is so bad that you are thinking about typing, that's time you could have spent thinking about the problem you are working on. And if your typing speed is a bottleneck in recording ideas, you may have to throttle your thinking until your output buffer is flushed.
KeyserSoze
@Nemanja Trifunovic - I hear what you are saying but, respectfully, I think you are dead wrong. Being able to type makes a huge difference.
duncan
@keysersoze: I have never worked on a project when typing speed made any difference. Even when I write code from scratch and not fighting some crazy frameworks, a good editor makes typing skill almost worthless. With vim I usually just type a couple of letters before pressing Ctrl+P.
Nemanja Trifunovic
@duncan: No hard feelings, but you are dead wrong - it makes no difference :)
Nemanja Trifunovic
Even though I never learned to touch type, my typing is very quick and optimized towards writing code - not English. I always felt touch typists must be at a bit of a disadvantage, considering the heavy use of symbols in coding, which touch typing is not optimized for.
Kendall Helmstetter Gelner
I know how to type. After twenty years of typing my index and middle fingers know where all the keys are, so I don't have to look down at keyboard all that often. But I had this argument in a different context long back: a colleague argued that camel case is [contd...]
Hemal Pandya
[...contd] better than underscores because it is easier to type. My argument is that you are not supposed to write code at the speed of typing.
Hemal Pandya
I don't mind looking at the keyboard once in a while to relieve eye strain. You HAVE to change your focus at times. If you are a good typist, chances are you either have glasses or contacts.
Andrei Taranchenko
While I can't touch type and confirm this I do suspect that it helps. I have encountered many situations where slow typing speed gets in the way. Sadly learning is mind-numbingly dull. Yes, I know there are all kinds of fun games to help you, but it's still dull for me. Still trying though...
Manos Dilaverakis
+1. I repeatedly see people make tons of mistakes because they are watching their keyboard instead of watching the code on their screen. Most common are syntax and code-formatting issues, but also real bugs that aren't caught by the compiler.
flodin
You must be using some ridiculously verbose language like Java. Thinking is the bottleneck when programming, not typing.
nosatalian
I agree here. Though thinking is important, watching the screen is key.
Chet
I agree that thought is the limiting factor behind programming, but who codes from the hip so much that they design the software as they type it? While I'm coding/typing, I have largely already designed the software... as a result, my thinking easily keeps up with my 80wpm+ typing speed.
SnOrfus
I can't think faster than I type. I hunt and peck, using six fingers and the thumbs. The problem is not that I wouldn't benefit from ten fingers, but that trying to train it slows me down too much.
peterchen
The strange thing is that hunters and peckers are just a hair's breadth away from full-blown ten-finger typing. After using a keyboard for years you know exactly where the keys are - you just don't know where your hands are without looking. And that's only a little bit of technique. BTW: using a Kinesis contoured keyboard helps a LOT. And using an English keyboard instead of a localized one.
hstoerr
Yeah, Steve Yegge surely DOES know how to type...
Headcrab
@hstoerr: When I first took a typing course, in sixth grade, I cheated and looked at my fingers. I was the fastest one in the class, the star pupil. Only I didn't really know how to type. Luckily, in seventh grade, I took typing again and this time did it right. It's the only useful thing I learned in junior high. (Well, that and "Always carry your books in a backpack so they can't get knocked out of your hands and scattered down the hall.")
Kyralessa
The way I look at it, if you don't know how to type, how much programming experience could you really have? So yeah, I think a good programmer is one who knows how to type.
Renesis
I disagree. I never took any typing lessons, but spending most of my life behind a computer has made me remember where all the keys are so I can quickly type without looking at the keyboard. Maybe my hands aren't placed in the optimal position as you would learn in a typing lesson, or I don't use a DVORAK keyboard, but my typing is fine. And I sure don't want to type faster than I can think.
Dennis
I generally type with 4 fingers or so and I've tested my typing speed - 90 wpm.
Jake Petroules
Since when does wpm matter when programming? Programming requires thought, not just mindless typing.
pondpad
Typing is mindless by definition. If you're not typing, but hunt-and-pecking, you're using up brain cells to type that you could otherwise be using to think about your program.
Kyralessa
-1 for dead wrong: you don't need to type at all to be a programmer. Then, +2 for what it really means: you must know how to type to be a ***good*** programmer. When I interview people I'd pass immediately if they can't touch type.
Geoffrey Zheng
+12  A: 

Every developer should spend several weeks, or even months, developing paper-based systems before they start building electronic ones. They should also then be forced to use their systems.

Developing a good paper-based system is hard work. It forces you to take into account human nature (cumbersome processes get ignored, ones that are too complex tend to break down), and teaches you to appreciate the value of simplicity (new work goes in this tray, work for QA goes in this tray, archiving goes in this box).

Once you've worked out how to build a system on paper, it's often a lot easier to build an effective computer system - one that people will actually want to (and be able to) use.

The systems we develop are not manned by an army of perfectly-trained automata; real people use them, real people who are trained by managers who are also real people and have far too little time to waste training them how to jump through your hoops.

In fact, for my second point:

Every developer should be required to run an interactive training course to show users how to use their software.

Keith Williams
Programming has a lot in common with cleaning your room. The same principles of organization apply.
Alex Baranosky
Maybe... rather than dealing with your accounts as bits of paper you abstract them into folders, and encapsulate them in a filing cabinet or box. If you find a way to unit test laundry, let me know!
Keith Williams
Generally having a plan before building a web site/desktop app/house/nuclear sub is always a good idea! Mapping things out, either with sketches on a pad of paper, a wireframe, Visio, work flow, mind map, whatever. And the training of users... I see this missed by even the most brilliant programmers. User acceptance in the long run determines your app's success. If they don't understand it, no matter what it does or how well it is done, your app will fail.
infocyde
+667  A: 

"Googling it" is okay!

Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn't hold that against people that use it.

Too often I hear googling answers to problems become the target of criticism, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don't know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer.

What is important is that you understand the material, use it as the means to an end of a successful programming solution, and the client/your employer is happy with the results.

(although if you are getting answers from hallucinatory talking frogs, you should probably get some help all the same)

PhoenixRedeemer
Google groups is one of the greatest gifts to even the most nerdy of developers along with stackoverflow and porn :)
Jeremy
Sorry PR, but I disagree... there are no right answers ;-) I do agree with your overall point, although you need to be careful of the people who Google for an answer, take it verbatim, and then have to hack away (usually introducing numerous bugs) until it works for "most" cases.
billybob
Google will provide *knowledge*, but it cannot provide *skill*. Poor developers will not be aware of the difference.
Tom
Thorsten79
@Tom - That's true, but I'm just saying I don't think that should be held against Google. If we're going to judge whether someone is a good or bad developer, Google usage isn't going to be the indicator.
PhoenixRedeemer
By mentioning Google, it also means getting references to billions of programming references on the Internet. Like what we find in books, except it's free and fast.
thenonhacker
the talking frog thing is sometimes valid; just describing the problem to someone (imaginary or otherwise) can help you get a better handle on it yourself.
Colin Pickard
I taught myself C# from scratch with no real knowledge of computers. I was thrown straight in the deep end at my job and had no help, so I turned to Google. I have hundreds of bookmarked pages full of interesting code and information that no person could have taught me!
xoxo
How did people do it before the internet?
asp316
I agree-- it's far better to Google than to go bother someone else (i.e. me) for a simple question. Learning isn't always about knowing all the answers but about knowing the best places to find the answers.
carolclarinet
What do you mean, before the internet? =P
Erik Forbes
I got a job and lost a job with the "Google is OK" claim. I had an on-the-spot question, and my answer was "I'd Google it and be done with it". Immediate disqualification. In another interview, same situation, I explained the means and said "I'd Google the syntax"... and here I am.
Don't 'google it'; 'StackOverflow it' - you'll get community feedback!
Think Before Coding
My GP (medical doctor if you're not from the UK) Googles my symptoms.
Pete Kirkham
The problem is not the people that Google as a reference; it's the subset of people that Google blocks of code, paste them into the project and then monkey with the variables/flow until it compiles. It compiles?! Ship it!
joshperry
Never remember anything that you can google
Nat
"Life is an open-book test" and "ethical theft is good practice" rolled into one.
Will
>"does it really matter where you got the information?" - Yes it does. The proofreading and research that goes into most (reputable) books is worth it. I just can't say the same for joe schmoe's website.
SnOrfus
@Tom: I'd go a step further, and say that Google only provides facts, and not knowledge. Knowledge implies vast amounts of relationship between facts, and Google results only barely scratches the surface of that (which is the whole point of the semantic web, which is still vaporware). Having said that, I agree with your basic point. There's a big difference between having knowledge and supplementing with references and simply relying on references as a substitute for knowledge.
Ben Collins
@snorfus: How does a new developer tell a good book from a bad one? Many books about PHP programming contain horrible practices, consistently repeated in every code example (for example, concatenating $_GET variables straight into a query). A person is better off with google in those cases, because at least they'll get a mix of good and bad code. If you're new to a field you should always look at a variety of sources, and google can be one.
Joeri Sebrechts
Good Googling is a skill. I'm surprised more people haven't figured this out. A small variation in search terms can make a big difference in the quality of results.
Mark Ransom
@Tom neither will reading a book.
Stuart
Finding the answer efficiently is just as important as being able to apply it. I would always prefer my developers to Google something that they don't know, find out what and why, and learn how to apply it. Used correctly, it isn't just a search application, it is a learning tool. If people mindlessly look something up and copy and paste code snippets without understanding what they do, it is more likely a problem with the developer than the tool.
joseph.ferris
I'd be a little concerned if you got your information from a hallucinated talking frog.
Mark
+1 for the talking frogs
Chris Needham
**Googling *doesn't* provide knowledge**. *It provides information*. How well it is used is another issue
WmasterJ
@Wmaster That's exactly what I was going to type. Well said!
Ben McCormack
+39  A: 

If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily, I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.

AnthonyWJones
Excellent point. I re-learn this point the hard way every time I try to teach my parents (in their early 70s) how to use something on the computer or their cell phones.
MusiGenesis
I disagree. I don't think they are mutually exclusive. To take the opposite, people who have never used a computer before are the best interface designers.
James McMahon
I disagree, but only in the sense that most interface design decisions seem to be made by management.
Dave
I'd say they're definitely not mutually exclusive. I would more likely say that management should never decide where to put the button. I've had some of the most complicated interfaces ever created that way.
Sam Erwin
I wish I could upvote this twice. Yes, it's not universally true, but programmers tend to have the completely wrong mindset to design UI. We are too forgiving of interface flaws when it gives power and flexibility that end users don't need.
Robert J. Walker
I hope you don't let Doris take the wheel when you start up your IDE...
James Jones
That's one of my favorite books. Should be a must read - particularly for programmers who think they are web designers...
CMPalmer
This is like saying "If you know anything about how a car works, you should not be allowed to design the interior." There is an entire discipline around UI design, and if you are doing things just based on your mental model of some imaginary elderly user, then you are not doing it correctly. No one can account for everyone's mental model. Applying extensive research, best practices, statistical analysis, and user testing are the ways to get to your desired result. Programmers can learn this discipline too.
Ben Reierson
@Ben: no, you can't account for "everyone's" mental model, but it's a sure thing that the developer's mental model is entirely different from everyone else's. That's why an interaction design professional will invent a persona that best represents the typical user. If a system has users with very different personas (e.g., in addition to Doris we may invent Jeff the IT admin guy) then good interaction design will use Jeff as the target audience for the tasks he is likely to engage in.
AnthonyWJones
Interaction Design by users is what gave MySpace its reputation for vomit-inducing pages.
Kelly French
+4  A: 

QA should know the code (indirectly) better than development. QA gets paid to find things development didn't intend to happen, and they often do. :) (Btw, I'm a developer who just values good QA guys a whole bunch -- far to few of them... far to few).

sam
(to -> too)^2
Christopher Mahan
+819  A: 

Programmers who don't code in their spare time for fun will never become as good as those that do.

I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

(Note: I'm not saying good programmers do nothing else than programming, but they do more than program from 9 to 5)

rustyshelf
Well, yeah, everybody knows that.
chaos
I never code for fun. I code and get paid for it, that's it. I do think of it as just a job, but I do the best job I can, that's the difference. It's not coding for fun that makes a programmer good, it's the research, learning and training that does. And I do that on work time.
Jeremy
Jeremy, if you never code for fun, then why are you trolling SO with the rest of us geeks who get off on programming?
postfuturist
Maybe he is at work?
BlackWasp
Yeah, I don't code for fun really, but I come on here at work. Just because you don't do it in your spare time (if I ever had any), doesn't mean you don't have a passion for it!
xoxo
One of my fellow new hires said she hadn't coded in two years and was anxious to get back into it. Back into it? And you're a developer? She should spare everyone the show and quit now.
moffdub
I partially agree; we don't really need to do 'coding' in our spare time. Just going through SO/blogs/other technical podcasts etc. as part of the hobby will work very well. The point is to spend time in the community and get a feeling for what others are doing out there.
Jobi Joy
1/ There is a difference between enthusiasm and ability. 2/ Imagine if they said that about doctors. Or demolition experts. Or soldiers, or....
kpollock
I think that whilst the people who code in their spare time may become better at pure coding, that isn't the be-all and end-all of the job. Yes, it is a large part, but not everything. Just my opinion.
chillysapien
Maybe they will never be as good as the ones who do it in their spare time, but they may have more fulfilling lives. I mean, come on, you spend at least 40 hours a week typing away at the stuff; do you really want to go home and do it some more? Play some tennis or something :P
Ace
I don't code for fun in my spare time. But I reverse engineer. Does that count?
Treb
People whose sole interest is programming, both on and off the job, may very well be excellent programmers. But I don't think I would want to "hang out" with this type of person. You don't need to become autistic about it. There is more to being a human being than writing code.
Seventh Element
Just because you code for fun off the job doesn't mean you're a certain autistic "type of person". You can lead a balanced life *and* code for fun. For me being parent to a toddler is much more of a social life killer than my at-home-for-fun-coding ever was, even during my late teens.
Andreas Magnusson
@Diego: 'But I don't think I would want to "hang out" with this type of person. You don't need to become autistic about it.' I don't think I'd like to "hang out" with a judgemental 9-5'er troll either. Feel sorry for you that you can't understand having a real passion for something. Your loss.
kronoz
@kronoz: having a passion for something is great. But I feel sorry for those who have a passion for only one thing and nothing else. Their loss.
Treb
HUMBLY: Amen. I love coding in my spare time, and I have surpassed most around me.
Ronnie Overby
I don't see how coding for fun makes you an 'autistic' with no life. Would you think the same if I told you that I like to watch TV 2 hours a day after work? And if I told you that instead of watching TV I like to code for fun for 2 hours after work?
Sergio Acosta
I always code for fun in my spare time, and I try to have fun at work, though unfortunately work coding is often not fun. But then, hearing nice music and getting off anyway seems to outweigh the non-fun :P About autism: I totally get off when I code, but I also get off when my gf is around, yay.
phresnel
@Sergio Acosta: Agreed, totally... Personally I do more reading about coding on my own time than a lot do. I do some personal projects, but it evens out.
Tracker1
A 'little' spare time coding, a lot of fun doing other things
Daz
"People who's sole interest is programming, both on and of the job, may very well be execellent programmers. But I don't think I would want to "hang out" with this type of person."Programming in your spare time doesn't mean you ONLY program in your spare time. I don't bore people to death about programming if they're not interested in it.
Chad Okere
I used to code for fun -- before I had a job coding. Now I code for work and do other things for fun. My job is now my previous hobby and my hobbies are other things. I can't think of a better setup or a more well-rounded life.
Nemi
joshlrogers
re: doctors, etc. Think about how one *becomes* a top cardio surgeon for example. You aren't even allowed to "solo" until you've been doing 80-120 hours weeks for most of your 20's and many keep up that schedule long after residency. They spend their spare time in residency stitching up fruit, etc. In other words, MANY fields have their best and brightest putting in FAR more than 40 hours a week and much of it unpaid.
J Wynia
@kpollock: it's unreasonable to expect doctors, demolitionists or soldiers to practice their technique in their spare time because of the nature of what they do. However, I think it's inevitable that were they able to do so, the ones who did would be better at their respective jobs than the ones who did not.
Jherico
I fully agree that one has to continue learning outside of work if they wish to improve their abilities. That is not to say that it won't happen for those who only code while they are at work. However, Captain Obvious would state that programming outside of work allows a person to improve their skills much more quickly in a broader area of topics. I find it admirable and highly beneficial career-wise for those that do. Lastly, it is highly important to find a subtle balance between coding and life outside of computers.
transmogrify
@ace "you spend at least 40 hours a week typing away at the stuff" HA! how many programmers do you know that work <=40hrs? agreed tho.
sequoia mcdowell
@sequoia: uh, a lot? Including me. I've found that programmers that put in less hours have clearer minds and are able to be fresh enough to look at a problem and solve it rather quickly.
temp2290
@moffdub, I hadn't coded in two years, and I was eager to get back into it. I was pregnant when I stopped; my brain didn't work right for programming. Perl might have died and Web 2.0 came around while I was away, but I remember good practice and pseudocode. Should I "spare everyone the show and quit now"?
Elizabeth Buckwalter
Apparently, yes.
Ed Swangren
Definitely the most controversial topic, just based on the comments :)
amischiefr
I didn't say people who code for fun make great drinking buddies, nor did I say you had to spend your life married to a computer. But it still stands that those who have a passion for programming, and take it home and tinker with it, will quickly surpass those who don't. For the rest it is a job. Sure, it can be optimised, but it's a job. To a real programmer it's a passionate obsession, not a 9-5 job ;)
rustyshelf
Desire to code something good at home usually means that the day job does not allow it, which means that nearly 40 hours per week are wasted. Those who happen not to waste those 40 hours will be the best. But for the others, the desire to code something good is what matters.
alpav
In all fields of endeavour, those ahead of the curve have always put in more time than those that haven't.
Gary Willoughby
There is so much elitism in programming; I don't get it. Who cares how much better you are than someone else at coding? For 99.9% of coders it is a JOB. When you are 80 and on your deathbed, you most certainly won't be thinking "Man, I wish I would have written a few more LOC." IMO those who try to knock others' abilities are, in general, insecure about themselves and their own abilities.
DevDevDev
I can see two sides to this. On the one hand, yes, when you do something a lot you can get very good at it. Still, if your vocation is also your main avocation, you can forget what people less single-minded need. You can sometimes become *less* capable of documenting your work in a way that works for others, or participating well in the "soft" early portion of projects than someone whose life experience is broader.
Joe Mabel
Programming in your spare time doesn't mean that you don't do anything but programming, it just means you work on projects you enjoy as well as projects that pay the bills. I enjoy programming, and I program in my spare time. That doesn't mean I have no friends and don't do anything but programming. I do think however that it makes me a better programmer, as I discover new tricks and technologies more often this way.
wvdschel
@DevDevDev: you say that just because, for you, 99.9% of coders treat it as a JOB. I think that's not the reality. Or maybe there are more and more people who do it only as a job (and that's why it's sooooo hard to find a really good developer nowadays (remember what Joel Spolsky said: "a good developer can do a better job than 10 average ones"... do you really think a "good" developer works 8 hours a day and goes home and stops? It seems you still live in cloud-cuckoo-land ;) ))
Olivier Pons
I'd be a little less 'aggressive': I'd say 'a programmer who does NOT code in his/her spare time will never be as good as he/she would be if they did'. I say that because I know some people who actually like coding for fun but, on the other hand, don't enjoy 'pushing it' to the 'next level'.
Rafael Almeida
Very true. I hate being compared to professional developers even if everything I bring to the company is from experience done at home.
The Elite Gentleman
@DevDevDev, the only time that being better than others doesn't matter is when you're in kindergarten. The rest of your life you're constantly compared to others. If you're not better, then they'll take your job, simple as that. Myself, I love programming in my free time. The best summer of my life was between 10th and 11th grade, when I spent 19 hours a day programming. Consequently, that sort of dedication is why I'm making money programming today. Why in the world would I want to program only at work?
Peter
@wvdschel, I enjoy the projects that pay my bills. What am I doing wrong? Am I a bad programmer?
Pavel Shved
Great doctors become bad doctors when they think they're done learning. If you'd ever had anything unusual wrong with you, you'd know this. "If it's not in a book I read cover to cover 5 years ago, then it doesn't exist." -> bad doctor
Jason
-1 if I could :) So you become a better programmer, and? What's the point if all you do is program 24/7? You become another workaholic. If life were 100% programming it would be OK, otherwise not. To me it is a bit too much. I have many other hobbies: bicycling, reading (non-programming-related books), learning to play musical instruments, or anything that is actually more fun and challenging than just the narrow field of programming. When do you have time to enjoy life if all you do is program?
kudor gyozo
Great doctors read specialized magazines, go to refresher courses, and even blog about House MD (http://www.politedissent.com/house_pd.html) in their spare time. Of course they don't do surgery in their spare time, but maybe they spend their vacation with MSF...
Lorenzo
Great athletes do not become good in their field by merely doing the bare minimum; they train hard and keep in shape! But moreover, they are balanced, not obsessive. Then, at some point, they slow down and do something else; perhaps coaching... I believe programming can be the same; you can't be quite as productive if you just do the bare minimum. But you keep in shape in your spare time (because you can't always keep yourself up-to-date at work). ...It's all a question of balance and priorities. (And, yes, I do code in my spare time, and I do have a broad knowledge/experience of things too!)
Yanick Rochon
@Yanick, the best athletes are obsessive though. Which explains awkwardness in some other aspects of their lives. There's a reason people like Jordan and Bryant come off as callous
b8b8j
@b8b8j, yes, this is why I mentioned "great" and not "best". :) I think a great role model I could cite as an example (regarding programming) would be Linus Torvalds; maybe a little obsessive (after all, he rewrote a UNIX-style kernel from scratch... how obsessive is that?) But he is well balanced, and one can say that he is living quite a successful life. I believe that he fits quite well the description in my last comment. I don't necessarily aspire to be like him, but I tend to admire his life's achievements.
Yanick Rochon
IMHO, every developer should be part of some open-source projects, to which they contribute in their spare time just for the feeling of _making-the-world-a-better-place_.
Vikrant Chaudhary
From my experience, we employ developers who can interact well with people; this is a major asset. It's good to program outside work time, but socialising is important. Still, it seems the cleverer the programmer, the more socially inept they are, and that doesn't make a good programmer.
wonea
Okay... I'll vote for this one being controversial. You see, the thing is, I have a **life** and I don't like being told that unless I'm a total nerd then I'm no good. If that's the price of entry, then I'll just be "mediocre" I suppose. I happen to think that my other hobbies help me be a better programmer -- especially the ones that involve those other carbon-based lifeforms known as **people**. Build the most *efficient* software you want, but then put an interface that only a closet geek would use (think Unix) and I'll guarantee you that you'll never win the fight for market share.
Brad
I would have to agree with this comment. I have personally lived on both sides of the argument. At one time I did it 8-5 (what's this 9-5 stuff??) and not a second more. My peers who spent time on the side, personally or professionally, accelerated fast past me. That was because I spent a lot of the 8-5 hours putting out fires and fixing bugs, without the ability to move forward... at least as fast as the others who took extra time to do additional programming outside the bounds of work. Now that software development is as much a hobby for me as a job, I have moved past those around me.
atconway
I never said it was healthy, or made them better people, I just said that's how you get the best programmers.
rustyshelf
I love coding and do it more than 9-5 as I'm passionate about it. That doesn't mean I don't play hockey, play tennis, socialise and do loads of other things. I love coding and constantly try to come up with new ideas and learn new things. Everyone prioritises things differently, and if you're happy coding 9-5, fine. I prefer to go a bit further and learn more in the process. It really depends on what you want to get from it, I suppose! :)
Andi
Many people don't get exposed to all the technologies that they'd like to at their current job. If you don't stay focused and disciplined with your knowledge, your skills will be out of date. Make sure you have the skills for the next phase of your career; if you're happy and secure, then you don't need to. IMHO.
wcpro
+1  A: 

Exceptions considered harmful.

Jim In Texas
Checked exceptions. Unchecked exceptions are fantastic and do a great job of stabilizing your app.
Bill K
+162  A: 

Software Architects/Designers are Overrated

As a developer, I hate the idea of Software Architects. They are basically people who no longer code full time, read magazines and articles, and then tell you how to design software. Only people who actually write software full time for a living should be doing that. I don't care if you were the world's best coder 5 years ago before you became an Architect; your opinion is useless to me.

How's that for controversial?

Edit (to clarify): I think most Software Architects make great Business Analysts (talking with customers, writing requirements, tests, etc), I simply think they have no place in designing software, high level or otherwise.

rustyshelf
Amen! Smart and gets stuff done: Knows how to code, and actually produces production-ready code. It is better to say you don't know than pretend you do when you don't.
Christopher Mahan
I think there's a big necessary difference between architecting software and coding. What you say might apply to simple applications, but there are many scenarios, involving multiple components spread across several servers, that require architecting, THEN coding.
Jeremy
@Jeremy I don't deny that things need designing, just that the design should be done by programmers, not Software Architects.
rustyshelf
A friend of mine once went to a lecture by Jim Coplien, where he said that architect is a noun, not a verb. So what does an architect do? They create plans to be used in making something (WordNet). You have to be good at building to be an architect.
Hemal Pandya
I consider the term as a role... a role better played by real programmers :)
Alex. S.
I don't completely agree with you. I believe software architects are important. The thing is, architects should be developers first. I agree that many people who worked a couple decades ago carry the fancy architect name and don't do anything. I hate them too.
Mehrdad Afshari
Real Software Architects should be very experienced developers. Employing a non-technical person into the role of a Software Architect is nonsensical IMO.
weiran
Just because an architect doesn't code full time doesn't mean he doesn't know how. Architects are best in the proof-of-concept area, assuming their talents are as wide and deep as they should be.
Jas Panesar
A good software architect stays elbow-deep in code and works with the development team. Anyone who thinks architects are useless either (a) has never worked with a good architect (they do exist), or (b) never worked on a project big enough to need one, and just can't imagine needing one.
Rex M
They should be forced to implement their Architecture and or Design. Then let's see further....
Friedrich
Software architects who have never coded are truly evil. No one who has never done maintenance should be allowed near a design project.
HLGEM
I think that software architecture is just one of the responsibilities of the Software Developer. If you want to have a person with the title 'Software Architect' fine. But he is just the software developer that happens to be officially accountable for the architecture quality.
Sergio Acosta
In agreement with many other comments here, I'll say this: **to be a REAL Architect, you must be an excellent coder (among other things).**
Charlie Flowers
If you ever have to spend a year rewriting an entire application that was written by programmers with no architecture, you'd likely change your tune.
ctacke
I'm impressed. The top two answers that I disagree with most for this question were both posted by you (rustyshelf). I'm not sure how controversial your opinions are to most people, but I for one disagree with them completely.
Beska
The need for an architect usually is a symptom of substandard developers kicking about. I've been assigned the duty of an architect in the project I'm working on currently, but since all the developers I work with are excellent and up to the task, I can pretty much concentrate on programming myself. It hasn't always been like that, though. I've worked with people who need constant supervision or they'll copy-paste the living sh*t out of your DRY and other important design principles.
theiterator
The question did ask for controversial. In reality neither is to me. Architects make rubbish architects. Some of you assume that the inverse is automatically true (that programmers make great architects) and it's not. What I'm saying is that Architects will always be lousy architects who need to stick to BA work and forget about the fancy notion that they know how to design something they don't work with day in and day out. Good programmers on the other hand can make great architects as long as they stay programmers (confused yet?) :)
rustyshelf
+4  A: 

Although I'm in full favor of Test-Driven Development (TDD), I think there's a vital step before developers even start the full development cycle of prototyping a solution to the problem.

We too often get caught up trying to follow our TDD practices for a solution that may be misdirected because we don't know the domain well enough. Simple prototypes can often elucidate these problems.

Prototypes are great because you can quickly churn through and throw away more code than when you're writing tests first (sometimes). You can then begin the development process with a blank slate but a better understanding.

I don't know how controversial that opinion is. What you describe seems to be the well-known “Spike Solution” pattern http://c2.com/xp/SpikeSolution.html and is a good pattern to have.
bignose
+50  A: 

"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces and so on.

G-Man

GeoffreyF67
I think what you're trying to say is Swing sucks (as in JAVA UIs). Java back ends don't suck at all...unless that's the controversial bit ;)
rustyshelf
You don't have to be a Java partisan to appreciate an application like JEdit. Java has some serious crushing deficiencies, but so does every other language. Those of Java are just easier to recognize.
dreftymac
+1 cause I agree, java sucks.
Unkwntech
I a C# fanboy, but I admire quite a few Java apps as being very well done.
Neil N
I think what you are trying to say is that the barrier for Java coding is so low that there are many sucky Java "programmers" out there writing complete crap.
Software Monkey
I agree that most Java desktop apps I've seen suck. But I wouldn't say the same of server apps.
Sergio Acosta
You're going to blame a programming language for 'horrible user interfaces'? Surely that is the fault of the UI designer. And while I'm sure Java has its share of poorly coded software that runs slowly and consumes too much memory, it is not at all hard to write Java programs that run efficiently and use memory only as needed. Having worked on a Java-based web crawler capable of crawling 100s of millions of URIs, I can attest to this.
Kris
A: 

Two lines of code is too many.

If a method has a second line of code, it is a code smell. Refactor.
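As a sketch of what this style looks like when taken literally (Python used for illustration; the `Order` class and its numbers are invented), every method body becomes a single line that either delegates or computes one thing:

```python
# Hypothetical example: an order total, decomposed until every method
# body is a single line ("Compose Method" taken to the extreme).
class Order:
    def __init__(self, prices, tax_rate):
        self.prices = prices
        self.tax_rate = tax_rate

    def total(self):
        return self.subtotal() + self.tax()

    def subtotal(self):
        return sum(self.prices)

    def tax(self):
        return self.subtotal() * self.tax_rate
```

`Order([10.0, 5.0], 0.1).total()` then reads like a sentence; whether that is clarity or ceremony is exactly the controversy.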

Jay Bazuzi
Or you could make your entire program one (reaaaly long) line of code. That's always fun.
Kiv
BAKA!! Even in a functional language, like Haskell, you can have several lines in a function!
hasen j
When one combines the rule that a class should fit on the screen and every method has only one line a class can contain only approximately 7 lines of code.
tuinstoel
I'm amused that this is currently the lowest-ranked answer; I think I've succeeded at the "controversial" part.
Jay Bazuzi
It is indeed controversial, so I up.
tuinstoel
I agree completely, when will people see the light? I use Perl so I don't know how to write a function with more than one line of code, also, what is this "Refactor" thing you speak of? :-O
Robert Gamble
You must be a functional programmer... but one line per function is still a little extreme ;)
ceretullis
I'm sorry this is nonsense. -1 from me
Friedrich
It's not controversial - it's inane.
Software Monkey
That depends on your definition of "line". For some methods even a single line is too much.
G B
No method I've ever written (as far as I recall) has just one line of code =)
Jader Dias
void screwYou() { printf("This is balls...\n"); }
Jasarien
Typically, when I write a *void* dummy method, just due to formatting conventions, it takes at least two lines. Non-void functions typically take three lines. Of course, like Kiv said, you can have 10,000 characters in a single line - so "lines" might not be the best metric for program size.
luiscubal
This is controversial because I do not think you can apply this type of statement to all languages.
atconway
@atconway: C++ fails, because you can't do anything useful in one statement. Perl fails because even one line is confusing. (To all: there is sanity behind this, but I was going for shock value.)
Jay Bazuzi
+16  A: 

Classes should fit on the screen.

If you have to use the scroll bar to see all of your class, your class is too big.

Code folding and miniature fonts are cheating.

Jay Bazuzi
You must have a really large screen then. Do you also think that a class can have no more than 3 or 4 methods, because no more clearly fit in the 41 lines that fit on my screen? Voting up, because this is really controversial.
Rene Saarsoo
Rene: thanks for disagreeing with me without dismissing my answer out of hand. I sense an open mind.
Jay Bazuzi
I have to disagree as well. I write a lot of Python classes and not many of them fit on my screen. Of course, I'm not counting my netbook's screen because that would just be unfair to me. =P
sli
Screen size varies widely depending on your visual acuity. I keep my screens running at 1680×1250, and use Consolas 8pt. What I can see on one screen is likely *much* more than a guy running at 640×480 using Courier New 10pt.
Mike Hofer
Make that, "Screen capacity varies widely depending on your visual acuity and display settings." :-) Not enough coffee yet. :-)
Mike Hofer
@Mike: it's true, screen capacity varies. To follow my guideline, you have to decide which screen you want to fit on. On a team, you have to make that decision together. Still, the principle is sound: I want to be able to look at a whole class and comprehend it in its entirety, without scrolling.
Jay Bazuzi
This might be quite challenging to implement in some languages that are more verbose (require more plumbing), but I admire the general sentiment.
Rob Williams
@Rob: thanks, and you're right. In some languages you can Extract Class and get some compactness, hopefully for the benefit of your code. In others (C++ I'm looking at you!) even simple classes have to work very hard to function.
Jay Bazuzi
Do you have any other rules to go with this? The list of classes in an API should fit on one screen? What is it in the class that you need to see anyway? Surely the name tells you all about what it can do! What need is there to look at the methods in a list?
Greg Domjan
Some other rules that may fit: "Methods should have one statement" and "blocks should have only one statement" and "switch cases must be trivial" and "each 'enum' type should be mentioned in a conditional only once". :-)
Jay Bazuzi
Ouch. It can be hard enough to make a method fit on the screen, never mind an entire class (my main language is Java BTW)
finnw
For some of my classes, I can barely fit the member list on the screen. If an object is to represent something, it should do so in its entirety. Breaking it up into many smaller classes just adds visual complexity (right click > go to definition - ad nauseam) where it need not exist.
SnOrfus
@SnOrfus: I bet that there are bits of self-contained, general-purpose, reusable bits of functionality in those big classes, that would make COMPLETE SENSE as a new class. You wouldn't be confused when looking at a reference to one, because the name and its functionality would be obvious.
Jay Bazuzi
I think this is baiting. The implication is that a class should have a limit to the number of attributes it can have because their declaration eats into the space for method bodies. This sounds like a language troll as in, any language that can't fit a class onto one screen isn't fit to use. Try coding something complex like the contact details for a person which includes an international address including phone numbers, email, fax, etc.
Kelly French
Are you talking about classes in C++, where the function body is declared outside the class? Then maybe you're right...
Amarghosh
@Amarghosh No, that's not what I'm talking about. It's not possible to do this in C++ because the language is too complex and unwieldy. Also, I wish you would write English.
Jay Bazuzi
Not if you're programming for a mobile phone.
Daniel Daranas
+10  A: 

Explicit self in Python's method declarations is a poor design choice.

Method calls got syntactic sugar, but declarations didn't. It's a leaky abstraction (by design!) that causes annoying errors, including runtime errors with apparent off-by-one error in reported number of arguments.
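A minimal sketch of the off-by-one error described above (the class and method names are invented for illustration):

```python
class Greeter:
    def greet(name):           # 'self' forgotten: 'name' receives the instance
        return "hi " + name

try:
    Greeter().greet("Bob")     # caller passes 1 argument, Python passes 2
except TypeError as e:
    # Reports that greet() takes 1 positional argument but 2 were given -
    # the apparent off-by-one in the argument count.
    print(e)
```

The caller wrote one argument, yet the error insists two were given, because the instance is silently prepended at the call site but not at the declaration.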

porneL
I've certainly forgotten to type "self" many times myself, but what would you have done instead? You can't just imply self in all method declarations because of classmethods and staticmethods.
Kiv
I often mistype it as `slef` and I get errors because `self` is undeclared
hasen j
I think that `def` in `class` should imply `self`, and other types of methods could use different/additional keyword, like `defstatic`/`static def`.
porneL
It's actually due to an implementation problem early on in the language design -- apparently Guido and team could not figure out how to bind the implicit self parameter to its enclosing environment, short of just passing it explicitly. Hope I got that right, not a compiler/translator guru.
Please read around and reconsider your opinion: http://effbot.org/pyfaq/why-must-self-be-used-explicitly-in-method-definitions-and-calls.htm and http://www.artima.com/weblogs/viewpost.jsp?thread=214325 are two good places to start.
Daz
@Daz: links you've given talk about either body of a function (but I'm talking about declaration of arguments) or semantics of functions being 1st class (which is completely orthogonal issue to the syntax).
porneL
+5  A: 

Primitive data types are premature optimization.

There are languages that get by with just one data type, the scalar, and they do just fine. Other languages are not so fortunate. Developers just throw "int" and "double" in because they have to write in something.

What's important is not how big the data types are, but what the data is used for. If you have a day of the month variable, it doesn't matter much if it's signed or unsigned, or whether it's char, short, int, long, long long, float, double, or long double. It does matter that it's a day of the month, and not a month, or day of week, or whatever. See Joel's column on making things that are wrong look wrong; Hungarian notation as originally proposed was a Good Idea. As used in practice, it's mostly useless, because it says the wrong thing.
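One way to put that point into code is to encode what a value *means* rather than how wide it is. This is only a sketch; the class names are invented:

```python
class DayOfMonth:
    """A day of the month (1-31); the machine representation is irrelevant."""
    def __init__(self, value):
        if not 1 <= value <= 31:
            raise ValueError("day of month must be between 1 and 31")
        self.value = value

class Month:
    """A month (1-12) - a different concept, so a different type."""
    def __init__(self, value):
        if not 1 <= value <= 12:
            raise ValueError("month must be between 1 and 12")
        self.value = value
```

Now a function that expects a `Month` cannot silently be handed a day of the week, regardless of how many bits either value occupies.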

David Thornley
It makes programs quite a bit slower. Compare Python to C or C++ and you'll see a huge performance difference when working with integers. It will avoid overflows at the expense of full checking all the time. That is a source of premature pessimization in many cases.
David Rodríguez - dribeas
In at least Common Lisp, you can specify data types later, once you get the program working correctly. That's how CMU Common Lisp beat out a Fortran compiler in a number-crunching contest once.
David Thornley
That's basically Alan Perlis: "Functions delay binding: data structures induce binding. Moral: Structure data late in the programming process."
just somebody
+8  A: 

We do a lot of development here using a Model-View-Controller framework we built. I'm often telling my developers that we need to violate the rules of the MVC design pattern to make the site run faster. This is a hard sell for developers, who are usually unwilling to sacrifice well-designed code for anything. But performance is our top priority in building web applications, so sometimes we have to make concessions in the framework.

For example, the view layer should never talk directly to the database, right? But if you are generating large reports, the app will use a lot of memory to pass that data up through the model and controller layers. If you have a database that supports cursors, it can make the app a lot faster to hit the database directly from the view layer.

Performance trumps development standards, that's my controversial view.
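A minimal sketch of the trade-off, using Python's built-in sqlite3 (the table and column names are invented): instead of the model materializing every report row in memory and handing the full list up through the layers, the view iterates the database cursor directly, holding one row at a time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100.0), ("south", 250.0)])

def render_report(cursor):
    # "View" layer: streams rows straight from the database cursor,
    # bypassing the model/controller layers for this one report.
    return [f"{region}: {amount:.2f}" for region, amount in cursor]

lines = render_report(conn.execute("SELECT region, amount FROM sales"))
```

The layering violation is deliberate and local: one report path trades purity for memory.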

jjriv
An excellent example of how sometimes rules are made to be broken. Do everything right but be prepared to do some things wrong from necessity.
Kendall Helmstetter Gelner
Interesting point!
Seventh Element
Performance trumps development standards -- if it is too poor to stand. As long as performance is not a problem, there is no need to fix it.
Aaron Digulla
Don't forget, what is considered "right" in terms of development standards was just somebody's common-sense temporary opinion that happened to get picked up by a lot of people. It is not a commandment from "on high" - common sense can change but is always useful. Good work.
Mike Dunlavey
+7  A: 

I believe the use of try/catch exception handling is worse than the use of simple return codes and associated common messaging structures to ferry useful error messages.

Littering code with try/catch blocks is not a solution.

Just passing exceptions up the stack, hoping that what's above you will do the right thing or generate an informative error, is not a solution.

Thinking you have any chance of systematically verifying that the proper exception handlers are available to address anything that could go wrong, in either transparent or opaque objects, is not realistic. (Think also in terms of late bindings/external libraries and unnecessary dependencies between unrelated functions in a call stack as the system evolves.)

Use of return codes is simple, can be easily and systematically verified for coverage and, if handled properly, forces developers to generate useful error messages rather than the all-too-common stack dumps and obscure I/O exceptions that are "exceptionally" meaningless to even the most clueful of end users.
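A sketch of the return-code style being advocated (Python used for illustration; the codes, messages, and function are invented): every failure maps to a code, and a common messaging structure turns codes into useful user-facing text.

```python
OK, ERR_NOT_FOUND, ERR_PERMISSION = 0, 1, 2

# Common messaging structure: one place mapping codes to useful messages.
MESSAGES = {
    ERR_NOT_FOUND: "The requested record does not exist.",
    ERR_PERMISSION: "You are not allowed to access this record.",
}

def load_record(records, key, user_is_admin):
    if key not in records:
        return ERR_NOT_FOUND, None
    if not user_is_admin:
        return ERR_PERMISSION, None
    return OK, records[key]

code, record = load_record({"a": 42}, "b", True)
if code != OK:
    print(MESSAGES[code])   # the caller must handle the code explicitly
```

Coverage is easy to verify mechanically: every call site either checks the code or visibly fails to.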

--

My final objection is the use of garbage-collected languages. Don't get me wrong... I love them in some circumstances, but in general, for server/MC systems, they have no place in my view.

GC is not infallible - even extremely well designed GC algorithms can hang on to objects too long, or even forever, based on non-obvious circular references in their dependency graphs.

Non-GC systems, following a few simple patterns and with the use of memory-accounting tools, don't have this problem, but they do require more work in design and test upfront than GC environments. The trade-off here is that memory leaks are extremely easy to spot during testing in non-GC systems, while finding GC-related problem conditions is a much more difficult proposition.

Memory is cheap, but what happens when you leak expensive objects such as transaction handles, synchronization objects, socket connections, etc.? In my environment, the very thought that you can just sit back and let the language worry about this for you is unthinkable without significant fundamental changes in how the software is designed.
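For what it's worth, GC'd languages can still release expensive objects deterministically; in Python that is what context managers are for. `Resource` below is an invented stand-in for a transaction handle or socket:

```python
class Resource:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()        # released at block exit, not whenever the GC runs
        return False        # do not swallow exceptions

with Resource() as r:
    pass                    # ... use the resource ...
# r.closed is True here, even if the block had raised.
```

The handle's lifetime is tied to the block, so the GC's timing never matters for the expensive part.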

Einstein
Return codes have the problem of coupling too many elements of a chain of calls to understand what they mean. That is to say, that everything between a called function and something that might handle an error has to understand the return codes, at least to pass them along - that can be a mess.
Kendall Helmstetter Gelner
My general advice is to follow a convention and not fall into the trap of attempting to have them indicate specific error conditions. At each level you should take steps to ensure meaning is normalized. (Which usually isn't hard/necessary if you follow a convention.)
Einstein
Good error-code handling beats bad exception code. But then again, there is good exception-handling code, where exceptions are thrown and caught only where it makes sense... good exception code separates error handling from the error, and need not be replicated in each function on the stack.
David Rodríguez - dribeas
If a GC platform is not right for your particular situation, use good judgment and don't use it. It's as simple as that.
Seventh Element
+3  A: 

Excessive HTML in PHP files: sometimes necessary

Excessive Javascript in PHP files: trigger the raptor attack

While I have a hard time figuring out all your switching between echoing and `?> <?php`-ing HTML (after all, PHP is just a processor for HTML), lines and lines of JavaScript added in make it a completely unmaintainable mess.

People have to grasp this: they are two separate programming languages. Pick one to be your primary language. Then go on and find a quick, clean and easily maintainable way to make your primary language include the secondary one.
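A sketch of that separation, expressed with Python's string.Template for illustration (the filenames and page content are invented): logic stays in one primary language, HTML is a template, and the JavaScript is referenced as an external file rather than inlined.

```python
from string import Template

# HTML lives in a template; JavaScript lives in its own file (app.js),
# referenced rather than embedded line by line.
PAGE = Template("""<html>
<head><script src="app.js"></script></head>
<body><h1>$title</h1></body>
</html>""")

html = PAGE.substitute(title="Reports")
```

The same shape works in PHP with a template engine: the point is that each language lives in its own file.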

The reason you jump between PHP, JavaScript and HTML all the time is that you are bad at all three of them.

Ok, maybe it's not exactly controversial. I had the impression this was a general frustration-venting topic :)

What? To build a dynamic, server-side generated website you'll need all three (unless you use another system). For PHP, you've got your templating, server power, etc. For HTML you have the basis of the actual site. JS: dynamically loaded content, special features (syntax highlighting).
Dalin Seivewright
+63  A: 

Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.

Kevin Davis
+1. This a matter of ownership, we tend to care better for things we own than the things we don't. Want proof? Take a look at your company vehicles.
AnthonyWJones
It also comes with the onus that people reporting bugs can report in sufficient detail that the bug can be reproduced and tested to be proven fixed. It sucks to be so maligned when you reproduce a defect according to the description, fix it, and find that the tester still has issues you didn't.
Greg Domjan
I think testing and developing are different skills; they should be done by those who are good at them. Isolating testers from developers and making it hard for testers to get their bugs fixed: no excuse.
Benjamin Confino
But they shouldn't be the only ones to test their code.
peterchen
Sounds like bad developers to me. I'd file this under not all lazy developers are good developers.
gradbot
+1 for controversy: I'm only going to test the things I think to test for, and if I design the particular method... I've already thought of everything that can go wrong (from my point of view). A good tester will see another point of view -> like your users.
SnOrfus
if the developer could not write bug-free code, he should also not test it.
Orentet
+9  A: 

Web applications suck

My Internet connection is veeery slow. My experience with almost every Web site that is not Google is, at least, frustrating. Why doesn't anybody write desktop apps anymore? Oh, I see. Nobody wants to be bothered with learning how operating systems work. At least, not Windows. The last time you had to handle WM_PAINT, your head exploded. Creating a worker thread to perform a long task (I mean, doing it the Windows way) was totally beyond you. What the hell was a callback? Oh, my God!


Garbage collection sucks

No, it actually doesn't. But it makes the programmers suck like nothing else. In college, the first language they taught us was Visual Basic (the original one). After that, there was another course where the teachers pretended they taught us C++. But the damage was done. Nobody actually knew what this esoteric keyword delete did. After testing our programs, we either got invalid address exceptions or memory leaks. Sometimes, we got both. Among the 1% of my faculty who can actually program, only one can manage his memory by himself (at least, he pretends to), and he's writing this rant. The rest write their programs in VB.NET, which, by definition, is a bad language.


Dynamic typing sucks

Unless you're using assembler, of course (that's the kind of dynamic typing that actually deserves praise). What I mean is that the overhead imposed by dynamic, interpreted languages makes them suck. And don't give me that silly argument that different tools are good for different jobs. C is the right language for almost everything (it's fast, powerful and portable), and, when it isn't (when it's not fast enough), there's always inline assembly.


I might come up with more rants, but that will be later, not now.

Eduardo León
C may be fast to execute, but dynamic, interpreted languages are faster to develop in. I think you're being a little close-minded here.
Kiv
C is NOT the right tool for everything! it's not the tool for web development! there's _that_ at least!
hasen j
What are dynamic, interpreted languages good for, besides Web development? Note, I happen to hate Web apps.
Eduardo León
Sure, dynamic languages should be burned. From now on I shall always compile my shell scripts to machine code.
Rene Saarsoo
Dynamic languages are good for different jobs. They tend to be ideal for quick and dirty throwaway scripts for admin stuff, and they tend to be better geared for applications that require a lot of string manipulation and need to be developed quickly.
Rontologist
That's 3 opinions in one answer, and they're all dupes
finnw
What do you mean by dupes?
Eduardo León
+1  A: 

Never make up your mind on an issue before thoroughly considering said issue. No programming standard EVER justifies approaching an issue in a poor manner. If the standard demands a class to be written, but after careful thought, you deem a static method to be more appropriate, always go with the static method. Your own discretion is always better than even the best forward thinking of whoever wrote the standard. Standards are great if you're working in a team, but rules are meant to be broken (in good taste, of course).
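For example (Python used for illustration; the names are invented), a standard may demand a class where, after careful thought, a plain function is the more appropriate tool:

```python
# Standard-mandated shape: a class that exists only to hold one method.
class TaxCalculator:
    @staticmethod
    def tax(amount, rate):
        return amount * rate

# After careful thought: a plain function says the same thing with less ceremony.
def tax(amount, rate):
    return amount * rate
```

Both compute the same result; the judgment call is which shape the surrounding code deserves, not which one the standard prescribes.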

Davis Gallinghouse
+99  A: 

C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try to cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to put these events in their proper context.

-R

Huntrods
I would upvote if it wasn't for the "it's a horrible first language", I think it sucks but it's a good first language, particularly because it does suck, then one can appreciate the need for better languages!
hasen j
It's very difficult to create usable classes in C++, but once you create them, life is very easy. Way easier than using plain C. What I do is the following: I implement the functionality in C, then wrap it using C++ classes.
Eduardo León
The way I see it, a lot of misgivings about C++ stem from the fact that C++ is generally taught wrong. One typically needs to unlearn a lot of C before one can grok C++ well. Learning C++ after C never seems a good idea to me.
Nocturne
And I think that C++ is superior to C in every way, except that it unfortunately was designed to be “backwards” compatible to C.
Konrad Rudolph
I think C++ is a good example of "design by committee" done *RIGHT*. It's a mess in many ways, and for many purposes, it's a lousy language. But if you bother to really learn it, there's a remarkably expressive and elegant language hidden within. It's just a shame that few people discover it.
jalf
Yea - that "elegant language, hidden within" ... IS C!!! ;-)
Huntrods
I've got another bone to pick with you: “You can teach C++ in two ways” – this is wrong. Apparently you have only ever used C++ in two ways, without unlocking its true potential. This also explains your microcontroller related experience: C is *no* faster than (well-written) C++.
Konrad Rudolph
+1: Of all the languages I've ever played with, C++ is the only one which has made me sick every time I've approached it. I've had a book on C++ for years, I pick it up every once in a while and tell myself "it really can't be that bad" and read until my eyes bleed, I've made it to page 47.
Robert Gamble
There is a third approach to learning C++: Accelerated C++ takes it. It builds from the very beginning (variables, functions) but using real C++ elements (STL). I recommend it for anyone who wants another view into C++.
David Rodríguez - dribeas
@dribeas: I appreciate the recommendation, it looks like a good book. I doubt I'll ever be able to "appreciate" what C++ has to offer but if I ever recover from my previous experiences I will take you up on your recommendation.
Robert Gamble
Okay, if C++ code was ten times slower than C code, what sort of Mickey Mouse compilers were you using? Or what idiotic code conventions were you required to use? Were you asked to do exception specifications, for example (almost always a bad idea)?
David Thornley
Just throwing this out there, but the Programming Language benchmark game has quite a few examples of C++ being faster than C.
James McMahon
"When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code." - who says you *have* to use classes, rtti and whatnot?
Johannes Schaub - litb
you don't *have* to use those features. if you only use the C subset, then C++ is equally fast as C. then, you can selectively pick those C++ features *you* like. some vector sugar here, some other stuff there. isn't that nice?
Johannes Schaub - litb
and i agree it's anything but a nice first language. it's not wise to teach it first IMHO. and it's good that it's compatible with C. nuff said :)
Johannes Schaub - litb
Well said. Also read 'Worse is Better'
Vardhan Varma
I agree that it's got a whole raft of problems. but worst ever? Ever seen intercal? BFUNGE? assembly language?
Brian Postow
Regarding your anecdote about C++ being an order of magnitude slower, keep in mind that C++ compilers of the '80s are not the same as C++ compilers of today.
notJim
I agree that it's the worst language ever. Except for all the others.
Kaz Dragon
I don't agree that it's the worst language; I do agree that it's a bad language; I also agree that it's a bad first language. C++ is powerful and has a lot of features that are very useful. This makes C++ a good choice - sometimes. C++ also has a lot of hidden evil (lots of undefined behavior that looks perfectly fine..) which makes it a bad language and definitely a bad first language.
Dan
@david-basarab - C++ compilers are now much better! I use c++ not only for MIDI but for audio DSP algorithms - utilizing C++ templates makes it very powerful to make tunable compile time parameters such as buffer size and layout which allows for automatic SSE/altivec optimizations. The benefit of C++ now is not the language which is always a template-puzzle nowadays, but because the compilers available are better at optimizing real time functions than Haskell, Ada, Scheme and Scala are
jdkoftinoff
-1. C++ is still the most powerful multi-paradigm widely available language there is. It's the most adaptable of them all, therefore it can solve many different problems, which in some applications is _very_ useful. It might not be best at each specific thing, but overall, it's seldom a really bad choice.
Marcus Lindblom
C++ is like Democracy, "Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time." -Sir Winston Churchill
gradbot
C++ is massive, and massively popular. Like all languages, it has applications for which is it well suited, and applications for which it is poorly suited.
baultista
+1 For second language. I learned Java first and a bit of C one year later. I'm glad I learned the low-level C stuff because it makes me a better high-level programmer, but I'm also glad I didn't have to start with C.
Bart van Heukelom
A: 

Higher-level languages should be one-based instead of zero-based. This would eliminate "off-by-one" errors when dealing with arrays/collections.
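A minimal sketch of the point (Python, illustrative only; the function name is mine): most off-by-one errors vanish when you iterate over the collection itself or use its own bounds, whatever base the language chose.

```python
# Illustrative only: iterating over the collection itself involves no
# index arithmetic, so there is no base to get wrong.
def total(prices):
    s = 0
    for p in prices:
        s += p
    return s

prices = [3, 5, 7]
assert total(prices) == 15
assert prices[0] == 3                # zero-based: first element
assert prices[len(prices) - 1] == 7  # the classic off-by-one hazard
```

Languages with explicit bounds (Ada's 'First/'Last, for example) achieve the same effect for indexed access.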

I think arrays should not be based at all. If you want to refer to the first item one should use l_array[l_array.first], to the last item l_array[l_array.last]. If you want to loop: for i in l_array.first..l_array.last loop ..do your stuff..end loop;
tuinstoel
@tuinstoel, that's what lists are for. Sometimes you need random access to elements. For that, you need an index. By the way, I don't agree that arrays should be one based. Zero is more convenient most of the time IMHO.
Matthew Crumley
One-based indexing can get pretty awkward... I like this article by DIjkstra: http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html
Kiv
@Matthew Crumley, if you want to access the second element, do l_array[l_array.first+1].
tuinstoel
@Kiv, it's Dijkstra, not Dljkstra.
tuinstoel
Wrong. Zero-based arrays are the most natural ones. When you use zero-based arrays, the array's length is the set of valid indices, according to Peano arithmetic.
Eduardo León
@Eduardo León, According the en:wikipedia 0 or 1 doesn't matter. "For example, the natural numbers starting with one also satisfy the axioms." See: http://en.wikipedia.org/wiki/Natural_number#Peano_axioms
tuinstoel
I find one-based indexing leads to even more off-by-one errors.
Matthias Wandel
Changing from 0 to 1 would just change the OBOEs not eliminate them. I have to use languages that use both and the errors are just as common in both (just different). A non-idea.
duncan
I stick with my idea that arrays shouldn't be based at all.
tuinstoel
One-based is the source of huge errors in Delphi where Strings are 1 based and everything else is zero-based. VB variant arrays can be initialized to have any lower and upper bound you like. And that's a perfect hell.
Warren P
By the way, this one should be upvoted instead of down-voted. It's definitely OUT THERE and controversial. Also, crazy.
Warren P
+9  A: 

Preconditions for arguments to methods/functions should be part of the language rather than programmers checking it always.
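To make the claim concrete, here is a hedged sketch (Python; the decorator and all names are mine, not a real library): without language-level support, preconditions end up as hand-rolled checks like this, reinvented in every codebase.

```python
import math

def precondition(check):
    """Emulate a language-level precondition with a decorator."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError("precondition failed for " + fn.__name__)
            return fn(*args, **kwargs)
        return inner
    return wrap

@precondition(lambda n: n >= 0)
def sqrt_floor(n):
    return math.isqrt(n)  # math.isqrt requires Python 3.8+

assert sqrt_floor(9) == 3
```

A language with built-in contracts (Eiffel's require clauses, or D's in blocks) would let the compiler check this declaratively instead.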

kal
I like it, but it is controversial?
erikkallen
+33  A: 

There's an awful lot of bad teaching out there.

We developers like to feel smugly superior when Joel says there's a part of the brain for understanding pointers that some people are just born without. The topics many of us discuss here and are passionate about are esoteric, but sometimes that's only because we make them so.

Brian
Amen ------------
Dario
Those who can't do, teach. By that logic, the people who can't program are the ones teaching us how to program. I've experienced it myself where the professors I've had have admitted to being unable to do the problems and exercises they assign. Protip: take the classes with the teachers contracted by the university, not tenured (or tenure-track) professors.
baultista
+15  A: 

The best code is often the code you don't write. As programmers we want to solve every problem by writing some cool method. Anytime we can solve a problem and still give the users 80% of what they want without introducing more code to maintain and test we have provided waaaay more value.

Todd Friedlich
It reminds me of a quote (I can't remember who said it though) - "Measuring a program by lines of code is like measuring a plane by weight."
Cristián Romo
@Cristián: It was Bill Gates who said that.
Dan Dyer
+18  A: 

C (or C++) should be the first programming language

The first language should NOT be the easy one, it should be one that sets up the student's mind and prepares it for serious computer science.
C is perfect for that, it forces students to think about memory and all the low level stuff, and at the same time they can learn how to structure their code (it has functions!)

C++ has the added advantage that it really sucks :) thus the students will understand why people had to come up with Java and C#

hasen j
so everybody should suffer, because you have suffered? it's always nice to learn useless things, but come on.
01
Not really, I really loved C++ back in the day, I was in denial when I heard from a prof that it's the worst language he's ever seen.
hasen j
+1: Everyone should learn C first because programming isn't for everyone and it isn't for anyone that can't grasp C.
Robert Gamble
Blast them with raw machine code. Suffer!!! The assembler course was the most fun in had (during class time) in university.
Jonathan C Dickinson
Mythology. Before encountering C I learned the assembly of two or three CPUs and familiarized myself with others. Some CPUs are a pleasure to program because of their orthogonal instruction sets, others are a pain but less idiosyncratic than C. C fails for its intended use, i.e. as a portable assembly.
MaD70
.. and I find pathetic the elitism that too many programmers show.
MaD70
My university taught programming almost exclusively in Java. I felt simultaneously aroused and cheated when I finally got around to learning C and C++.
iandisme
I disagree. Its hard to get first-timers excited about memory allocation.. Start with a language where you can get near instant gratification. The web languages are good for this.
Matt
@Matt: you're not supposed to agree ;)
hasen j
I did a lot of teaching introductory CS. What I found was most useful was first a few weeks on a decimal machine simulator, to set up the basic mental framework of addresses, memory, instructions, and stepwise execution. Then we did Basic (sorry), then Pascal. I like C (and C++) but those are hell to teach to newbies, because there are too many subtle ways for students to get confused, like the difference between pointers and array referencing, and nested types. It's not acceptable to say "sink or swim" - they pay tuition.
Mike Dunlavey
+11  A: 

A random collection of Cook's aphorisms...

  • The hardest language to learn is your second.

  • The hardest OS to learn is your second one - especially if your first was an IBM mainframe.

  • Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax.

  • Although one can be quite productive and marketable without having learned any assembly, no one will ever have a visceral understanding of computing without it.

  • Debuggers are the final refuge for programmers who don't really know what they're doing in the first place.

  • No OS will ever be stable if it doesn't make use of hardware memory management.

  • Low level systems programming is much, much easier than applications programming.

  • The programmer who has a favorite language is just playing.

  • Write the User's Guide FIRST!

  • Policy and procedure are intended for those who lack the initiative to perform otherwise.

  • (The Contractor's Creed): Tell 'em what they need. Give 'em what they want. Make sure the check clears.

  • If you don't find programming fun, get out of it or accept that although you may make a living at it, you'll never be more than average.

  • Just as the old farts have to learn the .NET method names, you'll have to learn the library calls. But there's nothing new there.
    The life of a programmer is one of constantly adapting to different environments, and the more tools you have hung on your belt, the more versatile and marketable you'll be.

  • You may piddle around a bit with little code chunks near the beginning to try out some ideas, but, in general, one doesn't start coding in earnest until you KNOW how the whole program or app is going to be laid out, and you KNOW that the whole thing is going to work EXACTLY as advertised. For most projects with at least some degree of complexity, I generally end up spending 60 to 70 percent of the time up front just percolating ideas.

  • Understand that programming has little to do with language and everything to do with algorithm. All of those nifty geegaws with memorable acronyms that folks have come up with over the years are just different ways of skinning the implementation cat. When you strip away all the OOPiness, RADology, Development Methodology 37, and Best Practice 42, you still have to deal with the basic building blocks of:

    • assignments
    • conditionals
    • iterations
    • control flow
    • I/O

Once you can truly wrap yourself around that, you'll eventually get to the point where you see (from a programming standpoint) little difference between writing an inventory app for an auto parts company, a graphical real-time TCP performance analyzer, a mathematical model of a stellar core, or an appointments calendar.

  • Beginning programmers work with small chunks of code. As they gain experience, they work with ever increasingly large chunks of code.
    As they gain even more experience, they work with small chunks of code.
cookre
"Once you've learned several seemingly different languages, you finally realize that all programming languages are the same - just minor differences in syntax." - you just broke many hearts, some people learn a new language every year.
01
And it gets easier and easier, doesn't it?
cookre
"you finally realize that all programming languages are the same" -- you hear that a lot from people who have only programmed in C#, C++, flavors of VB, Java, and maybe Python. Then you finally learn Haskell, Ocaml, Erlang, Prolog, and Lisp, and you feel like an idiot for having missed so much.
Juliet
It's always nice to have lots of toys, but we know they all serve the same purpose - to entertain us in some way. Likewise with every programming language I've seen over the past forty some odd years. As mentioned above, it's all about algorithm - not syntax.
cookre
@cookre: try to use algorithms designed to be expressed in an imperative programming language (PL) with a pure lazy functional PL like Haskell or in a (constraint) logic PL like Prolog (and derivatives) or in a PL designed for fault tolerance and massive concurrency, like Erlang and you will discover that semantics differences are all that really counts.
MaD70
+6  A: 

According to the amount of feedback I've gotten, my most controversial opinion, apparently, is that programmers don't always read the books they claim to have read. This is followed closely by my opinion that a programmer with a formal education is better than the same programmer who is self-taught (but not necessarily better than a different programmer who is self-taught).

Bill the Lizard
I'm proud to say I've read all the programming books I own. Even the monsterous Programming Python and Programming Perl.
sli
I have a B.A. in English. It is likely that I'm a better programmer for it. Is that what you mean?
postfuturist
You overestimate the value of education. I've been a full-time programmer for 15 years and am self-taught. When I meet developers who are fresh out of school, I sometimes wonder if their whole education wasn't a big waste of time. They know next to nothing about "the real world", can seldom work independently and their skills are average at best.
Seventh Element
@Seventh Element: I would expect someone fresh out of school with no work experience to have average skills. Comparing a fresh graduate to someone with 15 years of work experience is comparing apples to oranges. I worked as a programmer for 8 years before going back to school to get my degree. I think I have a pretty strong grasp of the value of my education *to me*. You get out of it what you put into it.
Bill the Lizard
+10  A: 

Jon Skeet is not all that special!

hasen j
Almost a +1 because of the controversy, but I can't since you don't back it up
martiert
I did back it up! don't you see the exclamation mark??
hasen j
+8  A: 

The best programmers trace all their code in the debugger and test all paths.

Well... the OP said controversial!

Enigme
Please justify your position. Note: test all paths requires that you only write paths you can test. Mindless error handlers go away.
Jay Bazuzi
Ever heard of unit tests? Using unit tests you don't need to "test all paths" after each change you made to the code. (Anyway, I think it's is impossible to test all paths except in a tiny little application)
Stefan Steinegger
A corollary: The fewer paths a piece of code has the better.
dangph
+24  A: 

The more process you put around programming, the worse the code becomes

I have noticed something in my 8 or so years of programming, and it seems ridiculous. It's that the only way to get quality is to employ quality developers, and remove as much process and formality from them as you can. Unit testing, coding standards, code/peer reviews, etc only reduce quality, not increase it. It sounds crazy, because the opposite should be true (more unit testing should lead to better code, great coding standards should lead to more readable code, code reviews should improve the quality of code) but it's not.

I think it boils down to the fact we call it "Software Engineering" when really it's design and not engineering at all.


Some numbers to substantiate this statement:

From the Editor

IEEE Software, November/December 2001

Quantifying Soft Factors

by Steve McConnell

...

Limited Importance of Process Maturity

... In comparing medium-size projects (100,000 lines of code), the one with the worst process will require 1.43 times as much effort as the one with the best process, all other things being equal. In other words, the maximum influence of process maturity on a project’s productivity is 1.43. ...

... What Clark doesn’t emphasize is that for a program of 100,000 lines of code, several human-oriented factors influence productivity more than process does. ...

... The seniority-oriented factors alone (AEXP, LTEX, PEXP) exert an influence of 3.02. The seven personnel-oriented factors collectively (ACAP, AEXP, LTEX, PCAP, PCON, PEXP, and SITE §) exert a staggering influence range of 25.8! This simple fact accounts for much of the reason that non-process-oriented organizations such as Microsoft, Amazon.com, and other entrepreneurial powerhouses can experience industry-leading productivity while seemingly shortchanging process. ...

The Bottom Line

... It turns out that trading process sophistication for staff continuity, business domain experience, private offices, and other human-oriented factors is a sound economic tradeoff. Of course, the best organizations achieve high motivation and process sophistication at the same time, and that is the key challenge for any leading software organization.

§ Read the article for an explanation of these acronyms.

rustyshelf
It sounds like you're seeing process being used to compensate for poor programmers, not to enhance great developers. This is why the Agile Manifesto says "Individuals and interactions over processes and tools". Instead of adding process for poor programmers, add it when # of programmers grows.
Jay Bazuzi
@jay not quite. I think that process even put around the best developers causes a decrease in code quality. I would liken it to meeting a famous painter, and then telling him the rules he needs to abide by to make a good painting. It might make sense to you, but it's ridiculous.
rustyshelf
I suspect great painters have their own processes.
Alex Baranosky
Process takes away energy that makes code better - that applies to coders good and bad. Some process is useful but process breeds process and you always end up with too much.
Kendall Helmstetter Gelner
@GordonG Perhaps I should have said the more 'external' process...
rustyshelf
I couldn't agree with you more! The arguments I've gotten into with other programmers over their strict adherence to processes could fill a book the size of War And Peace. That includes both "good" and bad processes, though.
sli
I've seen the opposite effect. I worked a company which used an Agile methodology, and the code quality was nightmarishly bad, beyond awful. I now work at a company with a very rigid process, lots of red tape around undocumented changes, and the resulting code is top notch.
Juliet
One size does not fit all. Small project, small team in one location, experienced developers, domain expert on site, software not absolutely critical? (some software, if you have a bug someone might die.) Then yes, just run wild. If not, you need more process.
MarkJ
If your processes make things harder, you're doing it wrong. It should be like an aircraft takeoff checklist: it helps you remember to do stuff in the right order. Automate things: you're a software developer, dammit. Make the easy thing the right thing.
Tim Williscroft
+1  A: 
dreftymac
Replace a markup language with YAML... you must be crazy. Voting up for good controversy.
Rene Saarsoo
+5  A: 

Inheritance is evil and should be deprecated.

The truth is aggregation is better in all cases. Statically typed OOP languages can't avoid inheritance; it's the only way to describe what a method wants from a type. But dynamic languages and duck typing can live without it. Ruby mixins are much more powerful than inheritance and a lot more controllable.
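A hedged sketch of the aggregation-plus-duck-typing claim (in Python rather than Ruby; class names are mine): the caller only cares that the object responds to start(), no inheritance hierarchy required.

```python
class Engine:
    def start(self):
        return "vroom"

class Siren:
    def start(self):  # duck typing: anything with start() fits
        return "wail"

class Car:
    def __init__(self, motor):
        self.motor = motor  # aggregation: a Car *has* a motor

    def start(self):
        return self.motor.start()

assert Car(Engine()).start() == "vroom"
assert Car(Siren()).start() == "wail"
```

Swapping behavior is just passing a different collaborator in; a subclass-based design would need a new class per variation.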

vava
When I teach this, I make a big point of telling people that I'm only teaching it because they have to know the syntax to do it. There are other things we have to teach because there is special syntax involved, and people take what they learn from special syntax and use it all the time.
brian d foy
My controversial opinion in this regard is anyone who describes a technology as "evil" is evil. Patterns don't kill people, people kill people.
dreftymac
I don't think I agree, but I found your post interesting: upvoted.
Jay Bazuzi
"Static typed OOP languages can't avoid inheritance," -- OCaml is a statically typed OOP language, but it also supports structural typing (http://en.wikipedia.org/wiki/Structural_type_system), which is more or less "duck typing for static languages". It also downplays the role of inheritance.
Juliet
Even in statically typed languages inheritance is overused. Prefer composition to inheritance in each and any language.
David Rodríguez - dribeas
"Static typed OOP languages can't avoid inheritance," Of course they can, with interfaces, delegations and programming by contract. Apart from that, and the "in all cases" part (I'd have said "in most cases"), I agree.
fbonnet
+8  A: 

Correct every defect when it's discovered. Not just "severity 1" defects; all defects.

Establish a deployment mechanism that makes application updates immediately available to users, but allows them to choose when to accept these updates. Establish a direct communication mechanism with users that enables them to report defects, relate their experience with updates, and suggest improvements.

With aggressive testing, many defects can be discovered during the iteration in which they are created; immediately correcting them reduces developer interrupts, a significant contributor to defect creation. Immediately correcting defects reported by users forges a constructive community, replacing product quality with product improvement as the main topic of conversation. Implementing user-suggested improvements that are consistent with your vision and strategy produces community of enthusiastic evangelists.

Dave
not really "controversial" - it's the standing practice everywhere I've worked
warren
+112  A: 

A degree in computer science does not---and is not supposed to---teach you to be a programmer.

Programming is a trade, computer science is a field of study. You can be a great programmer and a poor computer scientist and a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages, e.g. assembler, C, Lisp, Ruby, Smalltalk.

Starkii
The first one is not really controversial, at least not in the CS field.
wds
I disagree. I know many people studying computer science that think they are getting a degree in programming. Every time I hear whining about why CS programs don't teach everyone Java I offer up a pained sigh.
Starkii
Java doesn't really teach you how to be a real programmer, since there's so much you can't learn with it. It's like building a car with legos.
Lance Roberts
I may agree with the first point, but saying that knowing only Java could make a programmer ..... that's a crime, punishable with death!!!
hasen j
Can you move your second answer to another post so it can be rated separately.
Greg Domjan
@Greg: Done. Thanks for the suggestion.
Starkii
I agree with "does not", but not with "is not supposed to". Where else in academia are you supposed to learn to program? There is no analog in software to the Engineering disciplies (mechanical, electrical, civil etc.).
MusiGenesis
@MusiGenesis: My local community college (Washtenaw Community College) has an "Associate in Applied Science Degree" in "Computer Programming". That is where I would go to be a programmer. It is important not to confuse Computer Science with Computer Programming. They are _NOT_ the same thing
Starkii
@MusiGenesis: I've actually just completed my degree in Engineering (Software). I'm certainly not a computer scientist, and I don't want to be.
A J Lane
A CS degree is indeed not a programming degree. But then again, a programming degree doesn't make you a good programmer either. Both can introduce you to the basics and some special subfields, but it's up to you to use that as one of many sources of information as you develop your skills. Now, you may be able to solve any problem your work poses to you using a single language, like Java. But is it the best way? Learning several different languages and paradigms can help expand your perception of how problems can be solved using program code, and allow you to create better solutions.
Lucas Lindström
I disagree that CS does not teach you to be a programmer. It DOES and SHOULD do that - incidentally by teaching multiple languages, not one only - but that's not ALL it should do. CS degrees should also teach you about as many different areas of CS as possible, eg basic programming, functional languages, databases, cryptography, AI, language engineering (ie compilers/parsing), architecture and math-leaning areas like computer graphics and various algorithms.
DisgruntledGoat
Programming is easier in some fields than in others. Web development and most of the work you do in Information Systems is not hard. If you have a bit of a knack for programming, you can do this stuff very well without a CS or engineering degree. If you want to be a game programmer, write device drivers, work with embedded systems, or other things of the like, you'll need to know certain things from the degree.
baultista
+8  A: 

Web services absolutely suck, and are not the way of the future. They are ridiculously inefficient and they don't guarantee ordered delivery. Web services should NEVER be used within a system where both client and server are being written. They are mostly useful for Mickey Mouse mash-up type applications. They should definitely not be used for any kind of connection-oriented communication.

This stance has gotten my colleagues and me into some very heated discussions, since web services are such a buzzy topic. Any project that mandates the use of web services is doomed, because it clearly already has ridiculous demands pushed down from management.

Jesse Pepper
My company writes auto-insurance software, and we rely on several off-site web services to verify VIN numbers and run OFAC checks on people. We also make some of our APIs available through web services to third-party vendors. How would you suggest our software be written without web services?
Juliet
@Juliet: what in "Web services should NEVER be used within a system **where both client and server are being written**" do you not understand? It's clear that in your situation you don't control both parts of the system, so your rhetorical question is irrelevant.
MaD70
+17  A: 

My controversial opinion: Object Oriented Programming is absolutely the worst thing that's ever happened to the field of software engineering.

The primary problem with OOP is the total lack of a rigorous definition that everyone can agree on. This easily leads to implementations that have logical holes in them, or languages like Java that adhere to this bizarre religious dogma about what OOP means, while forcing the programmer into doing all these contortions and "design patterns" just to work around the limitations of a particular OOP system.

So, OOP tricks the programmer into thinking they're making these huge productivity gains, that OOP is somehow a "natural" way to think, while forcing the programmer to type boatloads of unnecessary boilerplate.

Then, since nobody knows what OOP actually means, we get vast amounts of time wasted on petty arguments about whether language X or Y is "truly OOP" or not, and about which bizarre, cargo-cultish language features are absolutely "essential" for a language to be considered "truly OOP".

Instead of demanding that this language or that language be "truly OOP", we should be looking at which language features are shown by experiment to actually increase productivity, rather than trying to force languages toward some imagined ideal, or forcing our programs to conform to some platonic ideal of a "truly object-oriented program".

Instead of insisting that our programs conform to some platonic ideal of "Truly object oriented", how about we focus on adhering to good engineering principles, making our code easy to read and understand, and using the features of a language that are productive and helpful, regardless of whether they are "OOP" enough or not.

Breton
It sounds like you're mixing programming methodologies and language design philosophies, while also recognizing the damage of zealotry. As a result, your potentially interesting thoughts are cluttered and unclear.
Jay Bazuzi
The "Truly XYZ" idiom is usually a case of the "No True Scotsman" fallacy. As far as the rest, have you read http://xahlee.org/Periodic_dosage_dir/t2/oop.html? Also, this seems very similar to a perlmonks post, have you written on this before?
dreftymac
A language is a user interface that can make a programming methodology easier. An OOP language, therefore, is a language designed to make OOP programming easier, making them closely related subjects. This position was argued better by Apocalisp, elsewhere in this question.
Breton
I've never heard anyone pontificate on the phrase "truly object oriented" in the past 10 years I've been programming. Never. Not even once. Are you actually quoting some obnoxious manager?
Juliet
Anyone who started with Java or C++ and then tried Lua, or JavaScript, or some other language that doesn't have some arbitrary Java feature. Anyone entrenched in the Java world who has the self-superior view that singletons are a terrible idea. Anyone who's read the GoF book and thought it was the future.
Breton
Almost, IMHO. I think OOP is the ideal way to deal with some aspects of programming, but it's not what it's made out to be: It's not a replacement for every methodology and/or piece of code you ever come across; It's not immune from being taken too far; It's not your master; It's not irreplaceable.
jTresidder
Do you come from a VB6 background and never embraced OOP?
Velika
Incorrect. There's nothing wrong with OOP; it's just a strategy. The problem is the attitude that I should have "embraced" it, or that the only alternative is that I'm some backwards beginner. It is not the end-all be-all, it is not a religion, and I don't have to be crucified in order to expunge me from the pool of programmers so that all "right"-thinking programmers can live free of sin. I posted my answer to this question because it is the most controversial opinion I have. That was the question.
Breton
The reason it's the worst thing to happen to programming is that it prevents programmers from looking at other solutions that may actually be better suited to the problem, and it prevents us from looking at or accepting new paradigms that might be better suited to most problems.
Breton
I hate it when newcomers lecture me about the greatness of OOP when I have been programming in OO languages since the mid '80s. They are totally blind to OOP's shortcomings, they don't know that "OOP" is an ill-defined concept and, worst of all, they ignore a whole world of options w.r.t. programming paradigms.
MaD70
+1 Wish I could upvote more. This field is rife with bandwagons, gurus, "right thinking", and occasionally good ideas made into religions. To a mechanical/electrical engineer (like me) this is so weird. I assume if something is true there's a scientific reason why. I also assume inventiveness is a good thing. Very little of that in this field.
Mike Dunlavey
+363  A: 

UML diagrams are highly overrated

Of course there are useful diagrams e.g. class diagram for the Composite Pattern, but many UML diagrams have absolutely no value.

Ludwig Wensauer
I usually need to sketch up classes when designing an object-oriented system. I may as well use a standardized syntax for sketching. I'm not even forced to use ALL of the syntax, just the parts that I like.
Lucas Lindström
The way I see it, using a "standardised" diagram notation forces you into using some unnecessary syntax much of the time. I do agree with what UML *does*, but I think a standard is pointless. Circles and arrows are perfectly fine for nearly every case.
DisgruntledGoat
Ok, let's say that UML is worthless. Do you have any diagram templates to use in place of UML? Do you think diagrams in general are a waste of time? Is this a personal preference, as in: do you use a list of directions (turn left, go one mile, turn right, etc.) to get somewhere you've never been? Do maps confuse you? I'm not trying to be snide, I truly believe that there is a personality difference between the visual and non-visual preferences of people. That could be what causes people to dislike UML: its usefulness depends on the visual nature of the individual, which is subjective.
Kelly French
I have seen all of the various diagrams that UML outline, and there are great times to use all of them. The problems start to occur when diagrams are made "for completeness" when no one is asking for them. In these cases, find the best UML diagram (or two) and make it fully qualified.
TahoeWolverine
The best use of UML is not to take it too seriously. Opening up a big software package just for UML? = You're doing too much big-design-up-front. Sketching on notepads? = Good.
Ollie Saunders
Blobby-Grams are my preferred derogatory term for UML diagrams.
Warren P
The best use of UML is for documentation, once you've finished the project.
Random
UML is not intended to be used as documentation, although it is often used as such. It can be nice to have class diagrams and object-interaction diagrams on record, but IMO they're much more useful when trying to conceptualize the inner workings of new features and illustrate them to other developers.
baultista
Kent Beck described his Galactic Modelling Language (GML) -- on an index card of course. It has three primitives: Box, Line, Label. I find it works for 90% of discussions.
Mike Clark
+6  A: 

You shouldn't settle on the first way you find to code something that "works."

I really don't think this should be controversial, but it is. People see an example from elsewhere in the code, from online, or from some old "Teach Yourself Advanced Power SQLJava#BeansServer in 3.14159 Minutes" book dated 1999, and they think they know something and they copy it into their code. They don't walk through the example to find out what each line does. They don't think about the design of their program and see if there might be a more organized or more natural way to do the same thing. They don't make any attempt at keeping their skill sets up to date to learn that they are using ideas and methods deprecated in the last year of the previous millennium. They don't seem to have the experience to learn that what they're copying has created specific, horrific maintenance burdens for programmers for years, and that those burdens can be avoided with a little more thought.

In fact, they don't even seem to recognize that there might be more than one way to do something.

I come from the Perl world, where one of the slogans is "There's More Than One Way To Do It" (TMTOWTDI). People who've taken a cursory look at Perl have written it off as "write-only" or "unreadable," largely because they've looked at crappy code written by people with the mindset I described above. Those people have given zero thought to design, maintainability, organization, reduction of duplication in code, coupling, cohesion, encapsulation, etc. They write crap. Those people exist in every language, and easy-to-learn languages with many ways to do things give them plenty of rope and guns to shoot and hang themselves with. Simultaneously.

But if you hang around the Perl world for longer than a cursory look, and watch what the long-timers in the community are doing, you see a remarkable thing: the good Perl programmers spend some time seeking the best way to do something. When they're naming a new module, they ask around for suggestions and bounce their ideas off of people. They hand their code out to get looked at, critiqued, and modified. If they have to do something nasty, they encapsulate it in the smallest way possible in a module for use in a more organized way. Several implementations of the same idea might hang around for a while, but they compete for mindshare and marketshare, and they compete by trying to do the best job, and a big part of that is by making themselves easily maintainable. Really good Perl programmers seem to think hard about what they are doing and look for the best way to do things, rather than just grabbing the first idea that flits through their brain.

Today I program primarily in the Java world. I've seen some really good Java code, but I see a lot of junk as well, and I see more of the mindset I described at the beginning: people settle on the first ugly lump of code that seems to work, without understanding it, without thinking if there's a better way.

You will see both mindsets in every language. I'm not trying to impugn Java specifically. (Actually I really like it in some ways ... maybe that should be my real controversial opinion!) But I'm coming to believe that every programmer needs to spend a good couple of years with a TMTOWTDI-style language, because even though conventional wisdom has it that this leads to chaos and crappy code, it actually seems to produce people who understand that you need to think about the repercussions of what you are doing instead of trusting your language to have been designed to make you do the right thing with no effort.

I do think you can err too far in the other direction: i.e., perfectionism that totally ignores your true needs and goals (often the true needs and goals of your business, which is usually profitability). But I don't think anyone can be a truly great programmer without learning to invest some greater-than-average effort in thinking about finding the best (or at least one of the best) way to code what they are doing.

skiphoppy
+5  A: 

Variable_Names_With_Bloody_Underscores

or even worse

CAPITALIZED_VARIABLE_NAMES_WITH_BLOODY_UNDERSCORES

should be globally expunged... with prejudice! CamelCapsAreJustFine. (Global constants notwithstanding.)

GOTO statements are for use by developers under the age of 11

Any language that does not support pointers is not worthy of the name

.Net = .Bloat. The flagship of Microsoft's efforts at web site development (Expressionless Web 2) is the finest example of slow, bloated cr@pw@re ever written. (Try Web Studio instead.)

Response: OK, well, let me address the underscore issue a little. From the link you provided:

-Global constants should be all caps with '_' separators. This I actually agree with because it is so BLOODY_OBVIOUS

-Take, for example, NetworkABCKey. Notice how the C from ABC and the K from Key are confused. Some people don't mind this and others just hate it, so you'll find different policies in different code, and you never know what to call something.

I fall into the former category. I choose names VERY carefully, and if you cannot figure out in one glance that the K belongs to Key, then English is probably not your first language.

  • C Function Names

    • In a C++ project there should be very few C functions.
    • For C functions use the GNU convention of all lower case letters with '_' as the word delimiter.

Justification

      • It makes C functions very different from any C++-related names.

Example

int some_bloody_function() { }

These "standards" and conventions are simply arbitrary decisions handed down through time. I think that while they make a certain amount of logical sense, they clutter up code and make something that should be short and sweet to read clumsy, long-winded and cluttered.

C has been adopted as the de-facto standard, not because it is friendly, but because it is pervasive. I can write 100 lines of C code in 20 with a syntactically friendly high-level language.

This makes the program flow easy to read, and as we all know, revisiting code after a year or more means following the breadcrumb trail all over the place.

I do use underscores, but for global variables only, as they are few and far between and they stick out clearly. Other than that, a well-thought-out CamelCaps() function/variable name has yet to let me down!

Mike Trader
Any justification for your positions?
Jay Bazuzi
So you see no value in using style (camelCase vs CamelCase vs ALL_CAPS) to indicate whether the reference is to a class, a variable, a const, or whatever? I can't agree. It seems you may not be aware of naming conventions as an idea, e.g. http://www.possibility.com/Cpp/CppCodingStandard.html#names
duncan
+9  A: 

Requirements analysis, specification, design, and documentation will almost never fit into a "template." You are 100% of the time better off by starting with a blank document and beginning to type with a view of "I will explain this in such a way that if I were dead and someone else read this document, they would know everything that I know and see and understand now" and then organizing from there, letting section headings and such develop naturally and fit the task you are specifying, rather than being constrained to some business or school's idea of what your document should look like. If you have to do a diagram, rather than using somebody's formal and incomprehensible system, you're often better off just drawing a diagram that makes sense, with a clear legend, which actually specifies the system you are trying to specify and communicates the information that the developer on the other end (often you, after a few years) needs to receive.

[If you have to, once you've written the real documentation, you can often shoehorn it into whatever template straitjacket your organization is imposing on you. You'll probably find yourself having to add section headings and duplicate material, though.]

The only time templates for these kinds of documents make sense is when you have a large number of tasks which are very similar in nature, differing only in details. "Write a program to allow single-use remote login access through this modem bank, driving the terminal connection nexus with C-Kermit," "Produce a historical trend and forecast report for capacity usage," "Use this library to give all reports the ability to be faxed," "Fix this code for the year 2000 problem," and "Add database triggers to this table to populate a software product provided for us by a third-party vendor" cannot all be described by the same template, no matter what people may think. And for the record, the requirements and design diagramming techniques that my college classes attempted to teach me and my classmates could not be used to specify a simple calculator program (and everyone knew it).

skiphoppy
+34  A: 

A picture is not worth a thousand words.

Some pictures might be worth a thousand words. Most of them are not. This trite old aphorism is mostly untrue, and it is a pathetic excuse used by many a lazy manager who did not want to read carefully created reports and documentation and instead said "I need you to show me in a diagram."

My wife studied for a linguistics major and saw several fascinating proofs against the conventional wisdom on pictures and logos: they do not break across language and cultural barriers, they usually do not communicate anywhere near as much information as correct text, they simply are no substitute for real communication.

In particular, labeled bubbles connected with lines are useless if the lines are unlabeled and unexplained, and/or if every line has a different meaning instead of signifying the same relationship (unless distinguished from each other in some way). If your lines sometimes signify relationships and sometimes indicate actions and sometimes indicate the passage of time, you're really hosed.

Every good programmer knows you use the tool suited for the job at hand, right? Not all systems are best specified and documented in pictures. Graphical specification languages that can be automatically turned into provably-correct, executable code or whatever are a spectacular idea, if such things exist. Use them when appropriate, not for everything under the sun. Entity-Relationship diagrams are great. But not everything can be summed up in a picture.

Note: a table may be worth its weight in gold. But a table is not the same thing as a picture. And again, a well-crafted short prose paragraph may be far more suitable for the job at hand.

skiphoppy
I don't agree that a picture is not worth a thousand words. I do agree with the sentiment in the answer. Perhaps it would be better to ask, "Would you use 1000 words when only a few (or even one) would do?" Using an image instead of well-chosen text may effectively be just that.
AnthonyWJones
Some words are worth thousands of pictures. (What about sounds, music, odours, etc.?)
moala
Yes but a 32,000 byte bitmap IS one thousand words. At least until you move to a 64-bit CPU.
Kelly French
+668  A: 

XML is highly overrated

I think too many people jump onto the XML bandwagon before using their brains... XML for web stuff is great, as it's designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

My 5 cents
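
As an illustration of the cost being argued about here (field names invented, using only Python's standard library), the same record serialized both ways; the XML version carries every field name twice:

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical record with deliberately descriptive field names.
record = {"vehicleIdentificationNumber": "1HGBH41JXMN109186", "state": "WA"}

# Serialize as XML: every field name appears in both an opening and a closing tag.
root = ET.Element("record")
for key, value in record.items():
    ET.SubElement(root, key).text = value
as_xml = ET.tostring(root, encoding="unicode")

# Serialize as JSON: each field name appears once.
as_json = json.dumps(record)

print(len(as_xml), len(as_json))  # the XML is noticeably larger
```

Neither format wins by default; the point is only that XML's verbosity is a cost to be weighed during design, not ignored.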

Can you give some specific examples of how you see XML being misused?
Jay Bazuzi
One specific example: Sitemaps (note capital “S”). What a f*cking waste of bandwidth, where a simple list would suffice. <http://sitemaps.org/>
Konrad Rudolph
It sucks even for the web.
hasen j
configuration files that aren't changed after building the project - just use your programming language (optionally an embedded DSL)
Thomas Danecker
I work at a college and do lots of data returns; I personally prefer to send in XML. The normal requests are usually for CSV or, even worse, fixed-length records. It's a pain finding bugs and having to get all the documentation. XML certainly simplifies things if I just need some example records.
PeteT
I store a lot of information to automatically generated XML files when an app is closed, and then reload it when it is started, so I would have to disagree!
xoxo
Data transmission. I've seen limited-bandwidth channels with things like <AVeryLongFieldName>A</AVeryLongFieldName>. In general, if you need concise, XML is probably not the solution.
David Thornley
You should only use XML for what it's designed for: transporting data between different applications. It's not a storage engine (definitely not a database, as some web developers seem to think) and it's also not for storing your app state on shutdown.
Pim Jager
@Pim Jager: I disagree. It is very useful for storing data that may be changed outside the app, without requiring a full GUI app (custom or otherwise) to make changes to that data.
Schmuli
I don't like XML either; I try to use JSON whenever possible.
Edin
I strongly agree as well. Most XML Parsers/Generators are also over-engineered to the point of hopelessness.
Ryan Delucchi
I use INI for config files and CSV for data transmission.
Unkwntech
Joel says " XML is a Dumb Format For Storing Data" http://www.joelonsoftware.com/articles/fog0000000296.html
MarkJ
Agreed. XML is being used where DSLs (domain specific languages) make much more sense. Dealing with XML when defining a build file is painful. Never trust anyone who says something is "just" XML.
Lee Harold
XML is like violence. If it isn't working for you, you're not using enough of it. :)
Mikeage
XML is luggage, not a closet. And even then, often you don't need a roll-around suitcase with the pull-out handle when a duffle (or a Walmart bag) will do.
n8wrl
I wish I could vote this one up twice. Also check out: http://xmlsucks.org/
grieve
I think Ant buildfiles are a perfect example of XML abuse. XML is for data, not scripts.
Daniel Straight
I really like XML for the stuff I do, as long as you use it for what it's good at: storing recursable hierarchies of information.
Kevin Laity
JSON is usually a better format for "web" stuff.. ;)
Tracker1
Unkwntech: I think it depends; INI doesn't work for nested data structures, and I think trying to combine the two is a monstrosity (e.g. Apache's configuration).
Tracker1
I think XML is only good where the format may change a bit (adding new fields, etc), or where there is a bunch of nesting and it needs to be human-readable. Otherwise, something like var:value list would suffice (like JSON)
Mark
<opinion><subject>god</subject><verb>bless</verb><object>you</object></opinion>
Steve B.
JSON is much better, but Lua is the best as a configuration language (it's designed that way).
majkinetor
I'd disagree if it weren't for the fact that SOO many people overstated XML's place in the world: hype, hype, hype. ... Nothing, not even good ol' XML, could live up to the original billing it got when it first entered the scene.
Gabriel
Good thing we have the DataSet.ReadXml() function. It translates any godforsaken XML hierarchy into a database-like list of tables. Relational-database people have been managing with simple 2D tables for ages now. But noooo... it's too technical for XML people. Once again programmers are focusing on Extensibility, reinventing the wheel and making it possible for a car to have an arbitrary number of arbitrarily sized wheels. "Nice job," I say! ;)
AareP
An example of XML being misused? I don't think I've ever seen an example of it being used in a way I WOULDN'T call misuse. The worst misuse is, as stated before, Ant build files... or XSLT. I don't think anyone on Earth could understand more than about 10 lines of XSLT without tools to help them.
Daniel Straight
If you absolutely MUST use XML-like markup, then at least use YAML. It's easier to read (less noise), smaller in size, and equal to XML in every other way. I usually prefer simple "key=value" or "command parameters" files for configuration and such, and in the rare, rare cases where that isn't enough, then YAML. Note that I've never had a case where I had to use YAML :-D but I imagine it will eventually happen.
Dan
Large XML is human-readable only if you *really* read hard :)
OnesimusUnbound
++ Yeah, IMHO XML is one of those popular things that get way overused. (But I'm jealous. Years ago I thought Lisp was a good syntax for exchanging information, and I think XML is just a bad Lisp.)
Mike Dunlavey
So, eight months at #5 and still not one Erik Naggum (RIP) quote. (P.S. and have you *seen* MathML?)
Cirno de Bergerac
XML is not efficient for either machine readability or human readability. It is however extensible. If THAT is what you need, you should use it.
DanO
One of the best uses of XML hasn't been mentioned - object serialization/deserialization. Sending stateful objects from one point to another has tremendous traction in distributed caching. In fact, whenever you need a disk-based storage of a hierarchy, I think XML is appropriate - and that includes configuration files.
joseph.ferris
XML is of no use for programmers, but very useful for smart people... and machines! :)
Max Toro
XML is a *document* format, not a random data serialization format. If you don't have like 10x more text than tags, you're doing it wrong :P
Nicolás
Amen! XML is not a "programming language," although many have tried to make it one. XML is a markup language. It's a poorly formatted style that could easily be replaced with CSV or delimited files. I have been anti-XML since day one.
Devtron
joseph.ferris, XML is bloated. Object serialization has nothing to do with XML. XML cannot serialize itself. It's not a language; it's bloated and overused by the Microsoft hype community.
Devtron
XML databases are probably the most pernicious data storage scheme I've ever seen, and I'm including MS Access.
BobMcGee
One of XML's misuses is its use to generate GUIs, following the fairytale idea that one does not need to be a programmer to write usable user interfaces.
Taisin
+1 XML was a horrible choice for Microsoft to base XAML on. Programming in XML is a nightmare.
chaiguy
"(any idiot can invent a data exchange format better than xml)" - Douglas Crockford
Dave
@Dave: Except for virtually all the idiots who have actually tried. YAML and JSON are only really suitable for simpler cases (e.g., where there aren't lots of namespaces) and ASN.1 is *horrible* if powerful. Mind you, some people should never have been let near a schema editor…
Donal Fellows
Text files usually do the job, with less parse time and complexity :)
Mohsen
Parsing XML is always a chore and inelegant. However, XML is useful because you don't need to roll your own parser or learn yacc/bison or BNF grammars.
burkestar
A: 

They say "Good Coders Code and Great Coders Reuse," and that is what is happening right now. But the "good coder" is the only one who enjoys that code, and the "great coders" are left only to find the bugs in it, because they don't have time to think and write code themselves; they only have time to find the bugs.

So don't criticize!

Write your own code the way YOU want.

Access Denied
In the working world it is not an option to rewrite code "the way you want it"; you have to deal with what is there, regardless of who wrote it. The rest of your post is incomprehensible.
duncan
I totally disagree with you: do not reinvent the wheel, they say!
Luis Filipe
+4  A: 

Reuse of code is inversely proportional to its "reusability". Simply because "reusable" code is more complex, whereas quick hacks are easy to understand, so they get reused.

Software failures should take down the system, so that it can be examined and fixed. Software attempting to handle failure conditions is often worse than crashing. I.e., is it better to have a system reset after crashing, or should it be indefinitely hung because the failure handler has a bug?
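
A toy sketch of the trade-off being described (hypothetical sensor-reading functions in Python, not from any real system): the fail-fast version stops at the point of the bug with a traceback, while the "defensive" version limps along with fabricated state:

```python
def read_sensor_failfast(raw):
    # Fail fast: a nonsensical reading stops the system right here,
    # with a traceback pointing at the actual problem.
    value = int(raw)  # raises ValueError on garbage input
    assert 0 <= value <= 100, f"impossible sensor reading: {value}"
    return value

def read_sensor_defensive(raw):
    # "Robust" version: swallow the error and substitute a default.
    # The system keeps running, but every downstream computation is now
    # silently working with a made-up value.
    try:
        value = int(raw)
    except ValueError:
        value = 0
    return min(max(value, 0), 100)
```

read_sensor_defensive("garbage") quietly returns 0, so the corrupted input is never noticed; whether that is better than a crash-and-reset is exactly the question this answer raises.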

Matthias Wandel
"failures should take down the system" - you're definitely on crack with this one! My entire system should ***NEVER*** die because **one** component hiccupped
warren
+18  A: 

It's okay to be Mort

Not everyone is a "rockstar" programmer; some of us do it because it's a good living, and we don't care about all the latest fads and trends; we just want to do our jobs.

Wayne M
I agree, with the caveat (and I'm turning and looking in the direction of several teams in Redmond, Washington) that Mort is often unfairly scoped and not always well understood.
Gabriel
I'm with you, Wayne, though to stay in the industry, I think we all need to go Elvis and Einstein at times. And we need to put in effort outside of work too. I rested on my laurels for a while (got married, moved, had other stuff going on) and I can see tech moving beyond me, and now I have to play catch-up. Tech is moving too fast for extra effort not to be put in. I'm learning and doing side projects again, and I'm having fun. But I do resent the 14-hour-a-day folks. They will blossom, wither, and then fade. Balance is the key, but the days of being exclusively a Mort are numbered.
infocyde
+4  A: 

Java is not the best thing out there. Just because it comes with an 'Enterprise' sticker does not make it good. Nor does it make it fast. Nor does it make it the answer to every question.

Also, RoR is not all it is cracked up to be by the blogosphere.

While I am at it, OOP is not always good. In fact, I think it is usually bad.

Alex UK
OOP is really bad for small software because it has so much overhead. But my prof said that it's super good for large-scale software, and I think you can tell by my wording that I don't know, so I will just believe my prof until proven wrong =P
hasen j
+4  A: 

Opinion: most code out there is crappy, because that's what the programmers WANT it to be.

Indirectly, we have been nurturing a culture of extreme creativeness. It's not that I don't think problem solving has creative elements -- it does -- it's just that it's not even remotely the same as something like painting (see Paul Graham's famous "Hackers and Painters" essay).

If we bend our industry towards that approach, ultimately it means letting every programmer go forth and whack out whatever highly creative, crazy stuff they want. Of course, for any sizable project, trying to put together dozens of unrelated, unstructured, unplanned bits into one final coherent bit won't work by definition. That's not a guess, or an estimate; it's the state of the industry that we face today. How many times have you seen sub-bits of functionality in a major program that were completely inconsistent with the rest of the code? It's so common now, it's a wonder anyone can use any of these messes.

Convoluted, complicated, ugly stuff that just keeps getting worse and more unstable. If we were building something physical, everyone on the planet would call us out on how horribly ugly and screwed up the stuff is, but because it is more or less hidden by being virtual, we are able to get away with some of the worst manufacturing processes our species will ever see. (Can you imagine a car where four different people designed the four different wheels, in four different ways?)

But the sad part, the controversial part of it all, is that there is absolutely NO reason for it to be this way, other than that historically the culture leaned towards more freedom and less organization, so we stayed that way (and probably got a lot worse). Software development is a joke, but it's a joke because that's what the programmers want it to be (though they would never in a million years admit that it was true; a "plot by management" is a better explanation for most people).

How long will we keep shooting ourselves in the foot before we wake up and realize that we are the ones holding the gun, pointing it, and pulling the trigger?

Paul.

Paul W Homer
That's just a lesson one has to learn through time and experience. Nevertheless, the "problem" won't get fixed because the "novices" don't realize or call it out, and too many "experienced" suffer from "not invented here" syndrome. By the way, this influences *every* profession to some extent.
dreftymac
You might want to check the original meaning of "shoot yourself in the foot" (as opposed to the 'new' meaning) and then think about whether creating a bit of pain and confusion in return for long-term survival is what is going on here. There is a survival strategy in hard-to-maintain code.
duncan
That type of survival strategy only works in a few large static corporate environments. If hard-to-maintain code causes the project to fail and be disbanded, it provides no long term gain. But even if it works, it's a miserable existence ...
Paul W Homer
Kudos for pointing this out. The truth is that sloppiness and heroism in software development are NOT self-evident. It's an effect of the (SW development) culture of the 60s/70s.
Thorsten79
"If we were building houses like we're building software, the first woodpecker would be the end of mankind." -- dunno who said that but he is still right ;)
Aaron Digulla
You sense the disease but the diagnosis is incorrect: writing software is **not** a manufacturing process, period. It is the wrong analogy. "Manufacturing" is reproducing a physical "thing" n times, starting from a blueprint. Now this process is not perfect, so you need to control this process of reproduction. Writing software is more akin to design, i.e. producing the blueprint. Given the blueprint (the program), a computer reproduces it perfectly, i.e. it accurately solves every instance of the problem for which it was designed (it "manufactures" each solution, given the blueprint).
MaD70
Now, designing something in engineering disciplines is certainly a creative process, but equally certainly it is **not** unconstrained or undisciplined. For example: structural engineers use math, science, and other disciplines. Their practice is founded on knowledge, theory, experience. What you correctly describe, with an unease that I share, is a field not even at the level of good craftsmanship, not engineering, and certainly not art.
MaD70
+43  A: 

A Clever Programmer Is Dangerous

I have spent far too much time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart one who wants to prove how clever he is by writing code that only he (or she) can interpret.

Tom Moseley
Really clever programmers are those who find a good answer while keeping it maintainable. Either that, or those who hide their names from the comments so users won't come back at them asking for changes.
David Rodríguez - dribeas
Real genius is seeing how really complex things can be solved in a really simple way. People who write needlessly complex code are just assholes who want to feel superior to the world around them.
Seventh Element
+1 Good programmers know their own limitations - if it's so clever you can only just understand it when you're writing it, well, it's probably wrong now, and you'll never understand it in six months' time when it needs changing.
MarkJ
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --unknown
Robert J. Walker
Robert, great quote: BTW it's from Brian Kernighan not "unknown"
MarkJ
+31  A: 

If a developer cannot write clear, concise and grammatically correct comments then they should have to go back and take English 101.

We have developers and (the horror) architects who cannot write coherently. When their documents are reviewed they say things like "oh, don't worry about grammatical errors or spelling - that's not important". Then they wonder why their convoluted garbage documents become convoluted buggy code.

I tell the interns that I mentor that if you can't communicate your great ideas verbally or in writing you may as well not have them.

I agree clear communication is important. But grammar is secondary. Some people have poor grammar but can communicate clearly (I'm thinking of some non-native English speakers) and some people have perfect grammar but can hardly communicate at all.
John D. Cook
Ironically, there are many developers that think this is beneath them. Comments and documentation that looks like it's written by a retard should somehow convey that they are truly great hackers.
Seventh Element
This isn't just about grammar and spelling either. It is possible to write something that has correct grammar and spelling yet is nearly impossible for others to understand (just as you can write a program that compiles and runs yet is impossible to understand the code). Being able to express yourself clearly in writing is very important. Having taught a comp-sci course that involves writing design documents for the last six years I've found it distressing how few of my students seem to possess this ability. And it seems to be getting worse each year.
Kris
@John D Cook: Poor grammar is most often detrimental to communication. These rules weren't invented for no reason (goes to check if there are no grammar mistaeks in this comment).
quant_dev
"If a developer cannot write **a** clear, concise and grammatical comment **s**..."Deliberate irony?
Mark Bannister
+52  A: 

Realizing that sometimes good enough is good enough is a major jump in your value as a programmer.

Note that when I say 'good enough', I mean 'good enough', not some crap that happens to work. But then again, when you are under a time crunch, 'some crap that happens to work' may be considered 'good enough'.

John MacIntyre
+12  A: 

Most consulting programmers suck and should not be allowed to write production code.

IMHO, probably about 60% or more.

John MacIntyre
That is not controversial; that is fact!
icelava
Most non-consulting programmers are stuck in a rut, living in a company bubble and maintaining dinosaur code while never being exposed to anything that challenges their assumptions - except for the occasional outside consultant. How's that for controversial? ;-)
Seventh Element
@Diego; true and consultants have an opportunity to become amazing programmers with everything they are exposed to. But in my experience, I've seen too much crap written by hacks who just picked up enough knowledge to make it work, knowing they'd never have to maintain it, and they just don't care.
John MacIntyre
I consulted for many years. There were cases where the company programmers were good but didn't understand how I was doing things, and so were inclined to criticize. Nevertheless, I'm inclined to agree with you - there are half-hearted programmers in contracting positions.
Mike Dunlavey
+5  A: 

Opinion: There should not be any compiler warnings, only errors. Or, formulated differently: you should always compile your code with -Werror.

Reason: Either the compiler thinks it is something which should be corrected, in which case it should be an error, or it is not necessary to fix, in which case the compiler should just shut up.
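As a concrete illustration of the trade-off being argued here (a sketch added for this discussion, not part of the original answer): the snippet below compiles and runs cleanly with a plain `javac`, but raises an "unchecked" warning under `javac -Xlint:all` - and adding `-Werror` turns that warning into a hard compile failure.

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static void main(String[] args) {
        // Raw type: `javac -Xlint:all Main.java` reports an unchecked warning here,
        // and `javac -Xlint:all -Werror Main.java` refuses to compile at all.
        List raw = new ArrayList();
        raw.add("hello");
        System.out.println(raw.get(0));
    }
}
```

Whether that warning deserves to stop the build is exactly the disagreement played out in the comments.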

JesperE
I have to disagree. A really good warning system will warn you about things that are probably bad code, but may not be depending on how you use them. If you have lint set to full, I believe there are even cases where you can't get rid of all the warnings.
Bill K
That would mean I would have to throw out my C# compiler. I have 2 (AFAIK, unfixable) warnings about environment references (that are indeed set correctly) that don't appear to break anything. Unless -Werror merely suppresses warnings and doesn't turn them into errors >_>
Dalin Seivewright
Finally, someone disagrees. It wouldn't really be a controversial opinion otherwise, now would it?
JesperE
Doesn't your C# compiler allow you to disable the warnings? If you know they are unfixable and "safe", why should the compiler keep warning? And yes, -Werror turns all warnings into errors.
JesperE
I try to get the warnings down to zero but some warnings are 50:50: They make sense in case A but not in case B. So I end up sprinkling my code with "ignore warning"... :(
Aaron Digulla
Well, as long as I'm the one writing the compiler, I agree with you. But if someone else wrote the compiler, I would want the ability to disagree with them when they claim perfectly valid constructs are warning-worthy.
nosatalian
That is why most compilers allow you to disable warnings. That's fine. What I mean is that you either disable the warning or fix it. Don't just leave it there.
JesperE
+148  A: 

Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...

petr k.
Hardly controversial, is it? It's hard not to notice that this is the case.
jalf
+1: Good programmers are hard to find, I wouldn't hire 98% of the programmers I've met, even if they offered to work for free.
Robert Gamble
That is not controversial; that is fact!
icelava
I'd +1 this but it's not controversial
annakata
It's probably controversial to the 98%.
Fabian Steeg
@fsteeg: you stole my line. :)
MusiGenesis
90% of everything is crap...
Brian Postow
+100 (if I could)
Frank V
Most people doing a job think that they do their job better than everybody else. Most of them are wrong.
Aaron
@BrianPostow, the scary thing is when you realize that this applies to medical doctors too!
jdkoftinoff
Very true, but not very controversial. Most doctors suck, most hairdressers suck, most mechanics suck, even most housekeepers suck. They suck compared to the best in the field, but to outsiders or the ignorant they are okay. They are just doing the minimum to get through the day.
Kirk Broadhurst
@Brian Postow: actually 99% of everything is space (void, nothing, whatever).
Random
+5  A: 

A majority of the 'user-friendly' Fourth Generation Languages (SQL included) are worthless overrated pieces of rubbish that should have never made it to common use.

4GLs in general have a wordy and ambiguous syntax. Though 4GLs are supposed to allow 'non technical people' to write programs, you still need the 'technical' people to write and maintain them anyway.

4GL programs in general are harder to write, harder to read and harder to optimize than.

4GLs should be avoided as far as possible.

Alterlife
'non technical people' never want to write code, but some people will never get it.
01
"harder to optimize than ..." what?
Mike Dunlavey
Interesting opinion, though a bit harsh, maybe.
Mike Dunlavey
All 4GLs suck. Not a majority. 100%.
Warren P
+19  A: 

Don't comment your code

Comments are not code and therefore when things change it's very easy to not change the comment that explained the code. Instead I prefer to refactor the crap out of code to a point that there is no reason for a comment. An example:

if(data == null)  // First time on the page

to:

bool firstTimeOnPage = data == null;
if(firstTimeOnPage)

The only time I really comment is when it's a TODO or explaining why

Widget.GetData(); // only way to grab data, TODO: extract interface or wrapper
rball
Your "explaining why" rationale is also subject to change if the API you are working with, for example, gets updated or improved.
dreftymac
In my small example I'm trying to show why I already did what I did. Like there's a better way to grab data, but this is the only way right now. Kind of like a note to refactor or why something happened. Also it's mainly related to my own code and not an external dependency.
rball
Icky. Don't declare a variable if you're only going to use it once. Your suggestion is not much better than, "int i,this_is_a_counter;". If you're forced to *add* extra code to get rid of comments, you've made things MORE complicated!
Brian
I have to agree with Brian; nothing is worse than having a bunch of one-time-use variables.
James McMahon
I'm sick of reading this crap. The reality is that the large majority of code out there is badly written, let alone reasonably refactored. If you can't write decent (understandable) code at least have the decency of adding comments.
Seventh Element
Why are one-time variables bad? They explain what you do, they don't cost anything (if you have a half decent compiler), and you can easily use them again for the same thing. Without the firstTimeOnPage, I would be very likely to put in the if (data == null) condition somewhere else as well.
erikkallen
-1: Comments are good. Comments are a cornerstone of code. I'd rather spend 10 seconds reading a one-line comment than spend two hours trying to figure out what some really complex code does.
tsilb
You might spend 10 seconds reading a one-line comment and then 3 hours finding out that the comment is outdated and led you down the wrong path. A well named variable or method is preferable, then I know what your intentions were and know that it hasn't changed. Also easily refactorable.
rball
@brian, one time variables can give names to faceless expressions, which is nice, especially in long parameter lists.
Thorbjørn Ravn Andersen
@rball: I agree and disagree, depending on how declarative or domain-specific the language is. You have a functional spec somewhere, if only in your head. If the language is declarative enough to directly encode the functional spec, then there's no need for comments. Usually, that is not the case, so IMO the purpose of comments is to express the mapping between implementation and functional spec, to the extent that the code itself is not able to. That way, when the spec changes, as it always does, you know what code to change.
Mike Dunlavey
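The "well named variable or method" point from these comments can be sketched in Java (the thread's example is C#); the `PageState`/`isFirstVisit` names here are invented for illustration:

```java
// Instead of `if (data == null) // first time on the page`,
// give the condition a name that cannot silently drift out of
// date the way a comment can.
class PageState {
    private final Object data;

    PageState(Object data) { this.data = data; }

    boolean isFirstVisit() {
        return data == null;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new PageState(null).isFirstVisit());     // true
        System.out.println(new PageState("cached").isFirstVisit()); // false
    }
}
```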
+16  A: 

Only write an abstraction if it's going to save 3X as much time later.

I see people write all these crazy abstractions sometimes and I think to myself, "Why?"

Unless an abstraction is really going to save you time later or it's going to save the person maintaining your code time, it seems people are just writing spaghetti code more and more.

Paul Mendoza
Yay! Also look up "YAGNI".
Bjarke Ebert
If you're writing abstraction using spaghetti code, then you're doing something very, very, wrong.
JesperE
+40  A: 

Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.

CMS
And good programmers just about the same.
David Rodríguez - dribeas
Yeah, that's why it's controversial :)
CMS
+10  A: 

My controversial opinion? Java doesn't suck, but Java APIs do. Why do Java libraries insist on making it hard to do simple tasks? And why, instead of fixing the APIs, do they create frameworks to help manage the boilerplate code? This opinion can apply to any language that requires 10 or more lines of code to read a line from a file.
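For context, here is roughly the kind of ceremony the answer complains about, next to the `java.nio.file` shortcut that was added later (Java 7) to address it. This is an illustrative sketch, not code from the answer:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Main {
    public static void main(String[] args) throws IOException {
        // Create a small file to read from.
        Files.write(Paths.get("demo.txt"), "first line\nsecond line\n".getBytes());

        // The classic boilerplate for reading one line from a file:
        BufferedReader reader = null;
        String line;
        try {
            reader = new BufferedReader(new FileReader("demo.txt"));
            line = reader.readLine();
        } finally {
            if (reader != null) {
                reader.close();
            }
        }
        System.out.println(line);

        // The same thing after the API was (eventually) fixed:
        System.out.println(Files.readAllLines(Paths.get("demo.txt")).get(0));
    }
}
```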

Jeremy Wall
+60  A: 

Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!

kstewart
I think this was already covered.
Jay Bazuzi
Architects that *do* code are worse than those that don't. i.e. their productivity is negative.
finnw
Can we paraphrase... Architects need to respect coders?
gbn
+81  A: 

Don't use inheritance unless you can explain why you need it.

theschmitzer
Inheritance is the second strongest relationship in C++ and the strongest relationship in most other languages. It strongly couples your code with that of your ascendant. If you can just use it through interfaces go for it. Prefer composition over inheritance always.
David Rodríguez - dribeas
Most use inheritance as a form of reuse, overriding whatever needs to change. They generally don't know or care whether they violate the LSP, and could achieve what they need with composition.
theschmitzer
I tend to think that delegation is cleaner in most cases where people use inheritance (esp. library development) because the abstraction is better, the coupling is looser, and maintenance is easier. Delegation defines a contract between the delegator and the delegate that is easier to enforce across versions.
fbonnet
He's not saying don't use inheritance at all, just don't use it if you can't explain why you need it. If you want to code an OO application and think throwing a little inheritance in here and there is just gonna make it OO, then you're dumb and shouldn't be allowed to program.
Wes P
Like many other programming constructs, the purpose of inheritance is to avoid duplicated code.
Kyralessa
interesting....
Frank V
Or as Sutter and Alexandrescu said in C++ Coding Standards: Inherit an interface, not the implementation.
blwy10
You should expand that to: "Don't ever code *anything* that you can't explain." Everything you do in code should have a reason.
Oorang
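The "prefer composition over inheritance" advice from these comments can be sketched as follows: instead of extending a collection class (the classic `java.util.Stack` extends `Vector` mistake), the stack delegates to one, so only the intended operations are exposed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Composition: the Deque is a private field, not a superclass, so callers
// cannot bypass the stack discipline through inherited methods.
class DelegatingStack<T> {
    private final Deque<T> items = new ArrayDeque<>();

    void push(T item) { items.addFirst(item); }
    T pop() { return items.removeFirst(); }
    boolean isEmpty() { return items.isEmpty(); }
}

public class Main {
    public static void main(String[] args) {
        DelegatingStack<String> stack = new DelegatingStack<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop());     // b - last in, first out
        System.out.println(stack.pop());     // a
        System.out.println(stack.isEmpty()); // true
    }
}
```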
A: 
  • Global variables are ok (there are times where it is a very good solution)
gotos have their place; both are missed (I rarely use either.)
  • defines/macros are wonderful but incredibly evil
  • Singletons should NEVER be used*1

    and my most controversial yet...

  • COMMENTS ARE EVIL AND A WASTE OF TIME

*1 Logging may be OK, but I don't even do that. What if you would like to output log data on a per-thread basis? You want to know which thread is outputting that line; chances are you need a non-static member unique to your own thread. So for logging I see benefits of NOT using a singleton.

acidzombie24
Why are comments evil and a waste of time?
Scotty Allen
Comments can become outdated very easily, and it may be hard to tell whether a comment is outdated. It wastes programming time to comment before the function is finished; it will be changed and more time will be spent. Functions shouldn't need comments and should be readable via variable names. For an API, there should be a manual.
acidzombie24
Code in itself can easily explain HOW it does what it does, but it can't explain WHY something is done - comments can explain that.
Rene Saarsoo
I agree with some of what you've said, but I don't think you did a good job of presenting your ideas and justifying them, so downvoting.
Jay Bazuzi
There are some really good reasons for comments, e.g. explanation of intent, clarification, warning of consequences, TODO comments. But 98% of the comments I've read are evil and a waste of time.
Ludwig Wensauer
Comments are evil if you're often bored and need a 10 minute job to take all day. I prefer to find something new to tinker with :)
jTresidder
Do you really believe this stuff or are you just trying to be provocative?
Seventh Element
I believe in this stuff.
acidzombie24
I was with you on gotos... but if you like globals so much, how can you hate the OO global singleton?
Software Monkey
Well, I didn't explain anything, which is possibly why I am downvoted; controversial indeed. I use globals only for quick debugging and test cases that are NOT meant for production code. They should be deleted as soon as the problem/test is solved. Singletons do not look like test/temp code to delete.
acidzombie24
... You believe in Globals but not Singletons? How is that consistent?
Christopher W. Allen-Poole
That's definitely controversial.
C. Ross
Christopher W. Allen-Poole: My comment before yours -> Singleton do not look like test/temp code to delete. I'm repeating to avoid confusion. I am kind of glad i got into the negatives w/o meaning to :)
acidzombie24
The only controversy here is how you qualify to answer questions on stack overflow.
Stefan Valianu
+22  A: 

If your text editor doesn't do good code completion, you're wasting everyone's time.

Quickly remembering thousands of argument lists, spellings, and return values (not to mention class structures and similarly complex organizational patterns) is a task computers are good at and people (comparatively) are not. I buy wholeheartedly that slowing yourself down a bit and avoiding the gadget/feature cult is a great way to increase efficiency and avoid bugs, but there is simply no benefit to spending 30 seconds hunting unnecessarily through source code or docs when you could spend nil... especially if you just need a spelling (which is more often than we like to admit).

Granted, if there isn't an editor that provides this functionality for your language, or the task is simple enough to knock out in the time it would take to load a heavier editor, nobody is going to tell you that Eclipse and 90 plugins is the right tool. But please don't tell me that the ability to H-J-K-L your way around like it's 1999 really saves you more time than hitting escape every time you need a method signature... even if you do feel less "hacker" doing it.

Thoughts?

IMHO, if you need so badly code completion, it's a code smell, or even a design smell : it indicates that the design has grown too complicated, too interdependent, too tightly coupled to other module's responsibilities. It's a bit controversial too: refactor it until it fits into your brain !
vincent
Code completion slows typing. Even set to zero delay, there's the tiniest pause while you wait for code completion. I agree that if you need code completion on your own code, that may well be a sign something needs simplification. But libraries are so large now, I think it helps more than hurts.
Kendall Helmstetter Gelner
@vincent: Do you never use massive libraries (.NET Framework / Windows API etc)?
erikkallen
I'm using Django, and RoR before. Both encourage cohesion and small files. At the same time I'm helping out a beginner with VB.NET, and I have to say VS is impressive, and it certainly influences the code style itself; but code completion has to be a double-edged sword. (BTW, I *HATE* Eclipse.)
vincent
VS has really fast completion @Kendall: it doesn't impede my typing. Half the time I write Con.Wr[Down]( for Console.WriteLine(. That's 10 keystrokes less. @vincent, I agree, Eclipse needs to improve their code completion.
Jonathan C Dickinson
Vim can do completion.
Benoit Myard
I work with only one other developer on a project with 240k lines of code and almost a thousand files. We couldn't live without code completion.
Matthew Iselin
+17  A: 

You don't always need a database.

If you need to store less than a few thousand "things" and you don't need locking, flat files can work and are better in a lot of ways. They are more portable, and you can hand edit them in a pinch. If you have proper separation between your data and business logic, you can easily replace the flat files with a database if your app ever needs it. And if you design it with this in mind, it reminds you to have proper separation between your data and business logic.
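A hypothetical sketch of the separation the answer describes: the business logic depends on a small storage interface, and the flat-file implementation can later be swapped for a database-backed one without touching callers. All names here (`NoteStore`, `FlatFileNoteStore`) are invented for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;

// The rest of the application only sees this interface.
interface NoteStore {
    void add(String note) throws IOException;
    List<String> all() throws IOException;
}

// Flat-file implementation: one note per line, hand-editable in a pinch.
// A database-backed store could replace it later behind the same interface.
class FlatFileNoteStore implements NoteStore {
    private final Path file;

    FlatFileNoteStore(Path file) { this.file = file; }

    public void add(String note) throws IOException {
        Files.write(file, (note + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public List<String> all() throws IOException {
        return Files.readAllLines(file);
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        NoteStore store = new FlatFileNoteStore(Paths.get("notes.txt"));
        store.add("remember the milk");
        List<String> notes = store.all();
        System.out.println(notes.get(notes.size() - 1));
    }
}
```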

--
bmb

bmb
True, but SQLite is very portable too. I'm not gonna start with flat files if there is a chance it should be moved to SQLite.
tuinstoel
There are other benefits of a DB. Shared access across a network for a client/server program. Easy access and manipulation of data (although technologies like LINQ help with that).
Cameron MacFarland
There are thousands of benefits of a database and reasons why we need them most of the time. But not *always*.
bmb
having a database from the start is easier than first having proper separation between data storage and biznis logic with flat files so that you can switch to a database later :)
hasen j
Are you saying it's easier to do it wrong with a database than it is to do it right without one?
bmb
I am 100% convinced that developers over use databases. The crutch that kills.
Stu Thompson
@Stu Thompson, I'm not. At work I'm refactoring an application so that it stores its data in a database instead of xml files. It is a lot of work and I hope it is the last time that I have to do this.
tuinstoel
tuinstoel, don't blame XML files for a missing or poorly designed data access layer.
bmb
@bmb, Even refactoring 'just' a data access layer can be a lot of work. And it is totally unnecessary work.
tuinstoel
+62  A: 

Pagination is never what the user wants

If you start having the discussion about where to do pagination, in the database, in the business logic, on the client, etc. then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrary sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb

bmb
Google does pagination, Google is very popular.
tuinstoel
Good point. I would argue that google is narrowing down what users need based on real criteria -- the criteria is "ten best results." I'm not saying that showing less than the full results is always bad, if you give the user what they want.
bmb
maybe you should give conrete example of a thing that's paginated but shouldn't. for example, how would you "narrow down" answers to this question?
hasen j
@bmb: Where does this put this thread? @tuinstoel: I claim that almost nobody (i.e. about 0.1% of all page views, probably much more for image search) uses more than the first page of results. Pagination done right.
Konrad Rudolph
@Konrad Rudolph, once or twice each year I search on my own name, and I use all the result pages (I'm not famous). That is probably the only time I use all the pages.
tuinstoel
Sometimes it's easier for the user to read if all the controls are visible at the same time (no scroll bars). But in any case, you have to ask: Should I use paging or scrollbars? Either way it's still a click to the user.
T Pops
@tuinstoel: Google does a lot of things, but it is not cooking fish. That Google does pagination has no bearing on its popularity. Pagination is an antiquated model from the age of books. It will disappear soon in favor of Ajax-like refreshes, as used by Google Reader for example.
Elzo Valugi
I really, really hate the default 10 results from Google. I turn it up to 100 on every browser I use. I'd probably turn it to 1000 if there were an option (and it still was speedy)
nos
You'll have much more trouble coming up with those query-based requirements than just implementing a simple pagination system. Sure, if you can suggest an alternative, go right ahead and reduce the number of items to return but not every problem will be as amenable.
Kelly French
In the end pagination isn't really interesting. What's more important is the question: do you count all the search results and show the exact count, or do you just provide an estimation? Google shows only an estimation; showing only an approximation has great performance benefits. Ajax-like refreshes don't change this.
tuinstoel
"Who are you helping by giving back 20 at a time? The server? Is that more important than your user?" If only 1% of users actually need this feature, then the server and thus the other 99% of users.
Brian Ortiz
Ortzinator, I would agree with you if I thought the number was really 99%. But since my (controversial) contention is that pagination is "never" what the user wants, then I think helping the server helps no one. However, users who don't want all the results don't have to get them. Then everyone is happy.
bmb
I came across this answer while paging through and searching every answer to this question to see if anyone had already posted about anonymous functions. Just sayin'
Larry Lustig
So what about resultsets that have thousands or millions of results? What if it's only hundreds but each one shows a bunch of detail? Returning over 100K violates web best practices and such result sets could result in *huge* server loads.
tsilb
tsilb, then "allow the user to narrow down what they need based on real criteria". The point here is not that subsets are always bad, it's that pagination is not a method of subsetting that helps anyone. And huge server loads? Boo hoo. Did you build your app to make your server happy? Or your users?
bmb
slashdot uses an approach where if you try to scroll below the last entry an extra set is added to the page. I love it!
Thorbjørn Ravn Andersen
Thorbjørn Ravn Andersen, that helps a little, but it would still be tedious if you want to use your browser's "find" function.
bmb
+1  A: 

I think it's fine to use goto statements, if you use them in a sane way (and a sane programming language). They can often make your code a lot easier to read and don't force you to use some twisted logic just to get one simple thing done.

woop
The key concept is "in a sane way". I would be shy of this idea if it were running for Grand Poo-Bah, but I understand Linus Torvalds agrees with it passionately :-)
Mike Dunlavey
+3  A: 

Use type inference anywhere and everywhere possible.

Edit:

Here is a link to a blog entry I wrote several months ago about why I feel this way.

http://blogs.msdn.com/jaredpar/archive/2008/09/09/when-to-use-type-inference.aspx
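The linked post is about C#'s `var`; for comparison, Java later (Java 10) added the same kind of local-variable type inference. A minimal sketch:

```java
import java.util.HashMap;

public class Main {
    public static void main(String[] args) {
        // The right-hand side already states the type; `var` avoids repeating it.
        var scores = new HashMap<String, Integer>(); // inferred as HashMap<String, Integer>
        scores.put("alice", 3);
        System.out.println(scores.get("alice"));
    }
}
```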

JaredPar
I'd love to see reasoning about this. Very controversial, and room for lots of good points from both sides.
Jon Skeet
@Jon, added a blog link to the reasons I feel this way.
JaredPar
Jared, your blog post is about local variable declaration with `var`, but your title is much more general. Please clarify.
Jay Bazuzi
@Jay, most of the problem with type inference is around "var" vs. overload resolution and generic method type inference. I really should have added a sample or two to the article though it was discussed in the comments.
JaredPar
+31  A: 

C++ is a good language

I practically got lynched in another question a week or two back for saying that C++ wasn't a very nice language. So now I'll try saying the opposite. ;)

No, seriously, the point I tried to make then, and will try again now, is that C++ has plenty of flaws. It's hard to deny that. It's so extremely complicated that learning it well is practically something you can dedicate your entire life to. It makes many common tasks needlessly hard, allows the user to plunge head-first into a sea of undefined behavior and unportable code, with no warnings given by the compiler.

But it's not the useless, decrepit, obsolete, hated language that many people try to make it. It shouldn't be swept under the carpet and ignored. The world wouldn't be a better place without it. It has some unique strengths that, unfortunately, are hidden behind quirky syntax, legacy cruft and not least, bad C++ teachers. But they're there.

C++ has many features that I desperately miss when programming in C# or other "modern" languages. There's a lot in it that C# and other modern languages could learn from.

It's not blindly focused on OOP, but has instead explored and pioneered generic programming. It allows surprisingly expressive compile-time metaprogramming producing extremely efficient, robust and clean code. It took in lessons from functional programming almost a decade before C# got LINQ or lambda expressions.

It allows you to catch a surprising number of errors at compile-time through static assertions and other metaprogramming tricks, which eases debugging vastly, and even beats unit tests in some ways. (I'd much rather catch an error at compile-time than afterwards, when I'm running my tests).

Deterministic destruction of variables allows RAII, an extremely powerful little trick that makes try/finally blocks and C#'s using blocks redundant.

And while some people accuse it of being "design by committee", I'd say yes, it is, and that's actually not a bad thing in this case. Look at Java's class library. How many classes have been deprecated again? How many should not be used? How many duplicate each others' functionality? How many are badly designed?

C++'s standard library is much smaller, but on the whole, it's remarkably well designed, and except for one or two minor warts (vector<bool>, for example), its design still holds up very well. When a feature is added to C++ or its standard library, it is subjected to heavy scrutiny. Couldn't Java have benefited from the same? .NET too, although it's younger and was somewhat better designed to begin with, is still accumulating a good handful of classes that are out of sync with reality, or were badly designed to begin with.

C++ has plenty of strengths that no other language can match. It's a good language

jalf
true dat. My main beef is that every 3rd party library has its own string class. I waste too much time converting between CString to std::string to WxString to char*. Can't everyone just use std::string or const char*.
Doug T.
Not true "C++ has plenty of strengths that no other language can match. It's a good language." EVERY language has strengths that no other language can match (even LOLCODE, hey it's a lot of fun).
Jonathan C Dickinson
Perhaps. But C++'s strengths are a bit more commonly useful. Let me know when your language of choice supports compile-time metaprogramming or RAII.
jalf
+88  A: 

A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science does not mean you are better than him. What you have, he can pick up in an instant, which is not the case the other way around.

Having a qualification shows your commitment - the fact that you would go above and beyond experience to make yourself a better developer. Developers who are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.

Maltrap
"degree in Computer Science or other IT area DOES make you more well rounded" ... "realize that it all doesn't matter at the end, as long as you can work well together" <- sounds a tiny bit inconsistent and self-contradictory.
dreftymac
IT referring to the fact that the other guy has a degree. It's strange, once you have a qualification, you might stop comparing yourself to others.
Maltrap
Agree - qualifications are indicators of commitment. They can be more, but even if that's all they are, they have value. It is only those without pieces of paper who decry them. Those with them know the limits of their value, but know their value too.
duncan
From past experience I'd generally rather work with someone that at least has an EE degree, than someone who came into the field after college.
Kendall Helmstetter Gelner
I would even say a good university degree. I met a programmer at my work who finished some small IT school I've never heard of and didn't know how many different numbers can be written in 8 bits!
agnieszka
A degree in ANY area (except maybe post-modern literary criticism) makes you a more well-rounded programmer, especially if it's in mathematics or science or engineering. Comp Sci and IT degrees tend to have incredibly narrow scope and focus.
MusiGenesis
In the spirit of healthy discussion I'll just say that I vehemently disagree (and I've got one). Past deliverables shows commitment, not that you lived somewhere for 4 years and read some books.
SnOrfus
I don't believe in degrees as measurements of value or skill, but studying at a university gives you the opportunity to learn the foundations of many different fields that can be useful to you in a work situation. I'm doubtful if being able to graduate is an acceptable proof that you've learned anything, but I know that you CAN learn a lot of useful skills, if you're ambitious enough.
Lucas Lindström
"What you have he can pick up in an instant" - Not necessarily. The ability to write good code is something that tends to come with experience, though some people pick it up quickly and some never seem to get there. The guy with the CS degree will certainly be able to pick up the languages and APIs you use in an instant, but there's no guarantee he'll ever be a good programmer. And he certainly won't become one overnight if he's not one now.
Mark Baker
I learned far more from my college library than the classes themselves.
gradbot
Disagree - Self-learning can be quite a bit better than university learning. As for university, they make you think the way they want (with better marks for thinking their way). A self-learner will think far better (for a given value of better) than a person taught to learn one way. I'm fascinated that you agree with me, btw: "You realize that it all doesn't matter at the end, as long as you can work well together."
Random
As someone about to complete a degree in Information Technology (with a specialization in Applications Development, no less), let me assure you that it is a small step above useless for someone interested in software development. You're more than likely to learn UML and object-orientedness which is supposedly good, but beyond that you're on your own.
baultista
+13  A: 

New web projects should consider not using Java.

I've been using Java to do web development for over 10 years now. At first, it was a step in the right direction compared to the available alternatives. Now, there are better alternatives than Java.

This is really just a specific case of the magic hammer approach to problem solving, but it's one that's really painful.

pansapien
Did you mean "New web projects should *not* consider" ?
dreftymac
That doesn't sound very controversial to me.
finnw
WOW! Some people in this thread really have extremist views! ;-)
Seventh Element
This is absolutely not controversial. Perhaps you want to say *New web projects **should not** consider using Java*
flybywire
+264  A: 

Less code is better than more!

If the users say "that's it?", and your work remains invisible, it's done right. Glory can be found elsewhere.

Jas Panesar
I disagree; readability is crucial, and if you take myMethod(myVar++) / myMethod(++myVar) vs. myVar++; myMethod(myVar), give me the latter - it's clearer and more readable. If less code is better, do you name all variables i, j, k etc...?
JoshBerke
Good point, and since your point is a variant of Hemingway's approach to writing, very appropriately written.
MusiGenesis
What I meant is coding things as simply and clearly as possible, but no simpler. Sometimes more lines of code are created in trying to break down a process. More lines of code = more bugs = more debugging. The cost of maintaining each line of code gets to be exponential.
Jas Panesar
"Perfection is attained not when you have nothing left to add, but when there is nothing left to take away" --Antoine de Saint-Exupery
TokenMacGuy
This does not seem very controversial to me.
Richard
I don't think people realize quite how true this is. I've noticed a gradual improvement in my code quality as I've got more extreme with my minimalism. I think I'm going to go take out some methods from a few classes now actually...
Ollie Saunders
@Ollie, little is more satisfying than getting the same amount done with less code. I find sometimes I'll write a bit extra while figuring out how to best do it and then get rid of some code. Our design meetings often are about finding the shortest and best path. Complexity increases the headaches with scaling and extensibility in the future. When we have a system we no longer want to work on as much... it's trouble.
Jas Panesar
For anyone who doesn't think this is controversial, I offer into evidence: Java.
Chuck
The belief that less code = simpler code is utterly false.
Stuart
Less code does not mean compressed/obfuscated code. I'm sure many have seen someone create a whole family tree of classes to solve a problem you could solve much more simply.
MAK
+13  A: 

Developers are all different, and should be treated as such.

Developers don't fit into a box, and shouldn't be treated as such. The best language or tool for solving a problem has just as much to do with the developers as it does with the details of the problem being solved.

commondream
And therefore the bozo bit must be flipped for some :-D
icelava
+7  A: 

Test Constantly

You have to write tests, and you have to write them FIRST. Writing tests changes the way you write your code. It makes you think about what you want it to actually do before you just jump in and write something that does everything except what you want it to do.

It also gives you goals. Watching your tests go green gives you that little extra bump of confidence that you're getting something accomplished.

It also gives you a basis for writing tests for your edge cases. Since you wrote the code against tests to begin with, you probably have some hooks in your code to test with.

There is no excuse not to test your code. If you don't, you're just lazy. I also think you should test first, as the benefits outweigh the extra time it takes to code this way.

PJ Davis
OMG how did anyone down vote this. Amazing, i'd + 1000 if i could
acidzombie24
Sometimes, watching all your test go green gives you a FALSE confidence, while your code fails somewhere your test didn't anticipate.
Cameron MacFarland
@acidzombie24, you should vote for it if you think it is controversial, not when you agree with it.
tuinstoel
@Cameron MacFarland there is no excuse for not doing user testing. The point of the test isn't to cover every edge case from the beginning, it's to make sure your code meets the requirements for what it's supposed to do. No matter how much you test, you'll never cover everything that could happen.
PJ Davis
@Cameron MacFarland, having a test suite helps you even when your code fails in the sense that you can easily add a new test case, correct the bug and remain sure that the bug will be detected if some dev introduce it again.
Petar Repac
You're accruing "offensive" votes... suggest you remove the profanity.
Marc Gravell
+435  A: 

Your job is to put yourself out of work.

When you're writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned.

If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment's notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can't do that, then you've failed miserably.

Interestingly, I've found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

Mike Hofer
Very nicely put...
AndyUK
This is the mantra that I've been living by for a long time. Not only is it our job to automate other people's job, but our job is to also automate our own job in the process. Crap Code != Job Security.
Lusid
I've been living by this for years now. It your employer knows that you have their best interest at heart they are more willing to keep you around.
mrdenny
or more extreme: replace yourself with a script you wrote
Vardhan Varma
I was going to disagree with you but then I realized you're right. If you do get hit by a bus, then the project that you work on should suffer greatly. But not because your code is unreadable, but because your were so valuable to the team.
Mark Beckwith
"If you follow all these rules religiously, you will even guarantee yourself a lifetime of employment." http://mindprod.com/jgloss/unmain.html
Nikhil Chelliah
Some unfortunate people write code only they can understand, for job security.
Wahnfrieden
If you can't be replaced then you can't be promoted!
rezzif
Would you rather be paid to write clean code for fun new projects, or maintain that big, ugly ball of mud which you wrote for your current project? (Sadly, I suspect that answers to this question will vary considerably).
Todd Owen
So true. I have always tried to work myself out of a job, and have always failed. How is that?
Peter
would "hit by a bus" be a bus error?
Thorbjørn Ravn Andersen
+1 rezzif: That hadn't occured to me! Nice one!
AndreasT
I did this once. I had a temp job at a nearby city business office. I streamlined a few of the things I was doing to the point that they let me go...
Cogwheel - Matthew Orlando
My corollary: If you're the only one who can maintain your code, then maintaining your code is all that you will do. Better projects, new technologies, new opportunities, and so forth will not be available to you since your boss "can't afford" to not have you available to fix your own code. You build your own prison.
Mike DeSimone
Nice! Thanks for sharing. I have similar opinion but have not experienced what you have experienced: "The more I strive to be disposable, the more valuable I become to them".
Viet
If you can't be replaced on a critical job function, then you'll never be invited to work on any of the new (exciting, resume-improving) job functions.
Jason
People aren't replaceable
Hernán Eche
@rezzif hah, nice. some inherent benevolence in the business world for once.
Rei Miyasaka
+30  A: 

One I have been tossing around for a while:

The data is the system.

Processes and software are built for data, not the other way around.

Without data, the process/software has little value. Data still has value without a process or software around it.

Once we understand the data, what it does, how it interacts, the different forms it exists in at different stages, only then can a solution be built to support the system of data.

Successful software/systems/processes seem to have an acute awareness, if not fanatical mindfulness of "where" the data is at in any given moment.

Jas Panesar
Or to put it another way: "Data outlives code".
Dan Dyer
Hey, I like that a lot! Thanks for sharing.
Jas Panesar
It's an interesting idea but I think it depends what kind of program you're writing. Five worlds man. http://www.joelonsoftware.com/articles/FiveWorlds.html
MarkJ
I couldn't agree more (granted I'm a DBA so all we deal with is data).
mrdenny
The system also seems to lose its way if the data is out.
Jas Panesar
I'd take the relations of the data into account, too, so "The model is the system". I mean the second letter of a name is relatively useless without the rest and the first name needs the family name and the employee the department, etc.
Aaron Digulla
A: 

Developers should be able to modify production code without getting permission from anyone as long as they document their changes and notify the appropriate parties.

Eric Mills
What does this even mean? "Hey, I just released a patch that deleted the customer's requested functionality because I felt like it but it's ok because I have documented it and told you that I did it." Is that the kind of thing you were suggesting?
duncan
This could happen if a programmer has poor judgement, but I ultimately believe developers have better judgement than they are given credit for. They should be allowed to fix bugs without a bunch of friction. I believe in trust over regulation with the developers I work with.
Eric Mills
+1. Why the downvotes? Maybe doing the kind of work that demands that level of scrutiny removes the ability to see that there's more than one kind of coding environment? There's no manager to lean on when your world-view-interpreter algorithms are wonky.
jTresidder
I could count on one hand the number of programmers I know that I would trust in that sort of environment - too many cowboys out there.
Evan
Ok, I would start by modifying all the code you wrote. It would be interesting to see if you would still feel the same way.
Seventh Element
@Eric Mills: Go work for a bank, or qualify your answer. Maybe you are unaware or underestimating the impact erroneous (or even malicious) code changes can have on a company. Hours of work lost, bazillions of space credits blown. Careers have been destroyed over these kinds of things, people fired on the spot. Probably not something you'll understand until you are personally responsible for an insanely important system...and some cowboy wants to tweak it at will.
Stu Thompson
At least, in all systems i worked with, this logic would be a very bad policy. Could you provide us with an environment where you would like this to occur?
Luis Filipe
+4  A: 

Uncommented code is the bane of humanity.

I think that comments are necessary for code. They visually divide it up into logical parts, and provide an alternative representation when reading code.

Documentation comments are the bare minimum, but using comments to split up longer functions helps when writing new code and allows quicker analysis when returning to existing code.

Jeff M
"using comments to split up longer functions" means your functions are too long.
Jay Bazuzi
If you can't understand code WITHOUT comments, you can't understand it WITH, either.
Aaron Digulla
Voted up, because this surely is controversial; I disagree with you :-) I'm on the side that says “Don't comment bad code, re-write it so it's clear”. If your justification for comments is to break up code visually, that's far better done with separate well-named functions with whitespace between.
bignose
+1  A: 

Hardcoding is good!

Really, it's more efficient and much easier to maintain in many cases!

The number of times I've seen constants put into parameter files... really, how often will you change the freezing point of water or the speed of light?

For C programs, just hard code these types of values into a header file; for Java, into a static class, etc.

When these parameters have a drastic effect on your program's behaviour you really want to do a regression test on every change; this seems more natural with hard-coded values. When things are stored in parameter/property files the temptation is to think "this is not a program change so I don't need to test it".

The other advantage is it stops people messing with vital values in the parameter/property files because there aren't any!

James Anderson
Q - "how often will you change the freezing point of water" A - Every time you change altitude (barometric pressure) or salt density or... (assumptions start with those three letters for a reason)
duncan
the speed of light depends on the medium it's traveling through
Ferruccio
The assumption that a constant won't change (like in this post, indicated by the responses) is EXACTLY the problem and the reason you should just never hardcode.
Bill K
+1  A: 

Having a process that involves code being approved before it is merged onto the main line is a terrible idea. It breeds insecurity and laziness in developers: if they knew they could be screwing up dozens of people, they would be very careful about the changes they make, but instead they get lulled into a sense of not having to think about all the possible clients of the code they may be affecting. The person going over the code is less likely to have thought about it as much as the person writing it, so it can actually lead to poorer quality code being checked in... though, yes, it will probably follow all the style guidelines and be well commented :)

Jesse Pepper
Approvals are the bad thing? Or you just don't trust one person to do the approvals? I'd say "one person can never approve anything". Meaningful approval means everybody should have the ability to black-ball, and approval should be by stake-holder consensus. Then everybody is to blame when it fails, which it still will. :-) How's that for punchy?
Warren P
+10  A: 

The worst thing about recursion is recursion.

Mike
But what about recursion?
LarryF
Before you understand recursion, you must first understand recursion.
Velika
Recursion, n. See recursion.
David Thornley
+29  A: 

Design Patterns are a symptom of Stone Age programming language design

They have their purpose. A lot of good software gets built with them. But the fact that there was a need to codify these "recipes" for psychological abstractions about how your code works/should work speaks to a lack of programming languages expressive enough to handle this abstraction for us.

The remedy, I think, lies in languages that allow you to embed more and more of the design into the code, by defining language constructs that might not exist or might not have general applicability but really really make sense in situations your code deals with incessantly. The Scheme people have known this for years, and there are things possible with Scheme macros that would make most monkeys-for-hire piss their pants.

dwf
I agree with your general feeling. Try and see them as temporal observations. See http://blog.plover.com/prog/design-patterns.html for example.
JB
+9  A: 

For a good programmer, language is not a problem.

It may not be very controversial, but I hear a lot of whining from other programmers, like "why don't they all use Delphi?", "C# sucks", "I would change company if they forced me to use Java" and so on.
What I think is that a good programmer is flexible and is able to write good programs in any programming language that he might have to learn in his life.

agnieszka
On the other hand, I *would* change company if, say, I was told that the rest of my job (forever) would be in GWBasic. There's a significant difference in how easy it is to express designs in different languages.
Jon Skeet
yeah, of course it's not applicable to all situations. but still a programmer has to be flexible to some extent because this is what computer science is all about - constant change.
agnieszka
Totally agreed. I hate those religious language wars :/
driAn
While I agree that a good programmer can understand any language, working with it 40+ hours a week is a different story. I can understand VB.NET just fine, but I don't want to spend most of my day plowing through it!
Cameron MacFarland
I can agree with this. The real truth here is that there is a tool for every job. Sometimes that tool may be Perl. Sometimes it may be vbScript, sometimes Java, sometimes C#, and sometimes even C++... The good developer knows WHICH tool is right for the job.
LarryF
While it may be true that you can learn the *syntax* of a new language in a few hours, you can't learn a *language* in a few hours. It takes years to master a new language with all the corner cases, etc.
Aaron Digulla
Lisp! Lisp! Lisp!
Thorbjørn Ravn Andersen
"A good carpenter can cut wood with a hammer..." (I'm sure: carpenters are much more knowledgeable than programmers.)
MaD70
+12  A: 

Non-development staff should not be allowed to manage development staff.

Correction: Staff with zero development experience should not be allowed to manage development staff.

Chris
Better non-development staff with management skills than developer staff without management skills.
tuinstoel
So you reckon every company that employs any developers should have a developer as CEO?
finnw
Yes, if you're going to manage people with a special skill set, it would be helpful if you also had a background in that skill set. Would you hire a CEO with no management experience?
Chris
Stu Thompson
+6  A: 

VB sucks
While not terribly controversial in general, when you work in a VB house it is

rotard
That this is not generally controversial shows how generally up themselves so many programmers are. Have a preference - fine. But when it comes down to whether you have a word (that you don't even have to type) or a '}' to terminate a block, it's just a style choice...
ChrisA
... plenty of VB programmers suck, though. As do plenty of C# programmers.
ChrisA
VB doesn't suck. People who use VB like VBA suck.
Chris
VB *does* suck. So many things have been shoe-horned into what was originally a simple instructional language to allow novices to enter the domain of professionals that it's no longer appropriate for either novices nor professionals.
P Daddy
It's not the language that sucks but a lot of the programmers that (used to) program in VB.
Seventh Element
+23  A: 

Generated documentation is nearly always totally worthless.

Or, as a corollary: Your API needs separate sets of documentation for maintainers and users.

There are really two classes of people who need to understand your API: maintainers, who must understand the minutiae of your implementation to be effective at their job, and users, who need a high-level overview, examples, and thorough details about the effects of each method they have access to.

I have never encountered generated documentation that succeeded in either area. Generally, when programmers write comments for tools to extract and make documentation out of, they aim for somewhere in the middle--just enough implementation detail to bore and confuse users yet not enough to significantly help maintainers, and not enough overview to be of any real assistance to users.

As a maintainer, I'd always rather have clean, clear comments, unmuddled by whatever strange markup your auto-doc tool requires, that tell me why you wrote that weird switch statement the way you did, or what bug this seemingly-redundant parameter check fixes, or whatever else I need to know to actually keep the code clean and bug-free as I work on it. I want this information right there in the code, adjacent to the code it's about, so I don't have to hunt down your website to find it in a state that lends itself to being read.

As a user, I'd always rather have a thorough, well-organized document (a set of web pages would be ideal, but I'd settle for a well-structured text file, too) telling me how your API is architectured, what methods do what, and how I can accomplish what I want to use your API to do. I don't want to see internally what classes you wrote to allow me to do work, or files they're in for that matter. And I certainly don't want to have to download your source so I can figure out exactly what's going on behind the curtain. If your documentation were good enough, I wouldn't have to.

That's how I see it, anyway.

chazomaticus
When I use Doxygen, I use \internal tags very often. This makes it easy to generate two sets of documentation exactly as you describe. (Of course, I also continue to use regular comments throughout code where required.)
Zooba
I don't just like JavaDoc. I love it.
Seventh Element
+3  A: 

Extension Methods are the work of the Devil

Everyone seems to think that extension methods in .NET are the best thing since sliced bread. The number of developers singing their praises seems to rise by the minute, but I'm afraid I can't help but despise them, and unless someone can come up with a brilliant justification or example that I haven't already heard, I will never write one. I recently came across this thread and I must say reading the examples of the highest-voted extensions made me feel a little like vomiting (metaphorically of course).

The main reasons given for their extensiony goodness are increased readability, improved OO-ness and the ability to chain method calls better.

I'm afraid I have to differ; I find in fact that they, unequivocally, reduce readability and OO-ness by virtue of the fact that they are at their core a lie. If you need a utility method that acts upon an object, then write a utility method that acts on that object; don't lie to me. When I see aString.SortMeBackwardsUsingKlingonSortOrder, then string should have that method, because that is telling me something about the string object, not something about the AnnoyingNerdReferences.StringUtilities class.

LINQ was designed in such a way that chained method calls are necessary to avoid strange and uncomfortable expressions and the extension methods that arise from LINQ are understandable but in general chained method calls reduce readability and lead to code of the sort we see in obfuscated Perl contests.

So, in short, extension methods are evil. Cast off the chains of Satan and commit yourself to extension free code.

Stephen Martin
I am still undecided but there seem to be genuine practical uses for extension methods.
Seventh Element
I'm totally with you, buddy.