views: 4234
answers: 73

When you were starting to program, what was the hardest concept for you to grasp? Was it recursion, pointers, linked lists, assignments, memory management?

I was wondering what gave you headaches and how you overcame this issue and learned to love the bomb, I mean understand it.

EDIT: As a followup, what helped you grok your hard-to-grasp concept?

+76  A: 

The compiler works fine, it's the code that's wrong.

MontyGomery
Also hard to understand for the expert ;)
tloach
Especially when working in a shop which has its own compiler back-end.
Steve Jessop
I had an argument with another senior the other day about the fact that computers don't make errors; they execute your commands very precisely. No matter how "random" the result is, if you can reproduce the exact steps you'll get the same problem
Slace
@Slace: not true, a CPU is basically an analog device approximating digital behavior. Just heating it a few degrees too much is enough to introduce truly random behavior.
Joeri Sebrechts
@Joeri: yes, but how often is that the real reason for a failure when the computer is blamed?
Joachim Sauer
Winner. The tragedy is that only experience teaches this one...
John Pirie
+1 When I first started I couldn't stand Java because I thought javac was just trying to annoy me.
Zifre
Ha ha ...! I can't tell you how many times someone has told me that there must be a bug in the .Net framework or the compiler!
Chuck Conway
@Joachim: Ever try and play on an Xbox 360 for several hours when it's placed in an enclosed entertainment center? Once it gets warm enough, games will hang and crash in funny ways. Had this happen to me several times last week. And no, it wasn't even warm enough to auto shutdown.
Nathan Ernst
+1  A: 

When I was a 9yo kid learning BASIC from the book that came with my computer, it took me a while to realize that NEXT jumped back to the top of the FOR loop.

jeffm
+2  A: 

When I started, the most confusing things were pointers and OO concepts.

daddz
A: 

I struggled with pointers when I started doing C++.
I think I suffered from not learning enough C first.

What got me through it was a combination of re-reading the textbooks and sitting with a text editor and a compiler and trying things out until it all came together in my head.

Darren Greaves
+22  A: 

That's gotta be lambda calculus.

xmjx
Whilst learning, it is compulsory to refer to it as "Lambada Calculus". :-)
Cheekysoft
Lambada? As in certain über-sensual Brazilian dance? Must be because it makes you dance!! :D
Joe Pineda
Lambada! http://www.youtube.com/watch?v=5AfTl5Vg73A
Spoike
+4  A: 

When I started, OO was this weird out there thing that only awesome people must be using. Then one day, I took the time to sit down and force myself to understand OO. I don't know why I waited that long, it makes a lot of sense and clicked pretty quickly.

Martin W
+54  A: 

I guess that for a C programmer, the first hard concept would be pointers. Especially references (&) and function pointers. This would require some inner understanding of the computer, which many beginner programmers don't have. Also, pointer arithmetic isn't always simple. For other languages, this could be anything from variables to OOP. This really depends. From what I've seen, I guess it might be procedural programming, because this requires some change in the way of thinking, and might even require the new programmer to design (!) his/her code.
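To make those concrete, a minimal illustrative sketch (the names are made up for the example) covering pointer arithmetic, references, and a function pointer:

#include <iostream>

int add_one(int x) { return x + 1; }

int main() {
    int a[3] = {10, 20, 30};
    int* p = a;          // p holds the address of a[0]
    p = p + 2;           // pointer arithmetic: now points at a[2]
    std::cout << *p << "\n";        // prints 30

    int n = 5;
    int& r = n;          // a reference: another name for n, not a copy
    r = 7;
    std::cout << n << "\n";         // prints 7

    int (*fp)(int) = &add_one;      // a function pointer
    std::cout << fp(41) << "\n";    // prints 42
}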

Moshe
+20  A: 

How best to divide up a program into modules/classes.

finnw
I'm struggling with this right now. Any tips?
Sneakyness
@Sneakyness, this is one case where I think *Code Complete* is wrong. I find it easier to start with individual messages, then group them into interfaces, then tests for the interfaces, then the actual classes.
finnw
+7  A: 

I didn't truly get OOP until about 12 or 14 months ago. It took exposure to Smalltalk's paradigm of messages being the primary language construct to shake me up.

moffdub
A: 

It took me 2 months to finally understand OOP, then it took me 2 more weeks to actually GET IT, and functional programming is still giving me a bit of trouble.

Rayne
Then you are blazingly fast compared to me - it took me about 5 years. I had to first GET functional programming and dive into unit testing to really start to grasp OOP. And it might be that I still don't get it.
Rene Saarsoo
I refused to learn OOP back when I was still coding Pascal because records (the equivalent of C's structs) offered me what I thought I needed. Later, I had to move to Delphi, and when I realized how powerful OOP is, I never looked back. It took me years to decide to learn OOP but it was worth it!
12 years for me! My problem was that I was confusing objects with ADTs (many still do throughout their entire career). It was one of Allen Holub's books that made it make sense in the end.
finnw
+3  A: 

Not sure, but function pointers were a bit strange to me in the very beginning.

Gamecat
A: 

Linked lists and sorting.

I was just about 12 years old, starting with Pascal. Up till then I was only aware of simple arrays and strings, and then my uncle introduced me to the wonderful world of pointers that point to the same struct as the one they are in.
After I figured that out he tried to teach me quicksort, but that was a tad too much.
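In C-style syntax rather than the Pascal of the story, a minimal illustrative sketch of that self-referencing idea (the Node name is made up for the example):

#include <iostream>

// A node that points to another node of the same type: that self-reference
// is the whole trick behind a linked list.
struct Node {
    int value;
    Node* next;
};

int main() {
    Node third = {3, nullptr};
    Node second = {2, &third};
    Node first = {1, &second};

    // Walk the chain by following the pointers.
    for (Node* p = &first; p != nullptr; p = p->next)
        std::cout << p->value << " ";   // prints 1 2 3
    std::cout << "\n";
}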

shoosh
+2  A: 

The first concept I had trouble understanding was variables when I tried to learn Visual Basic (my first language) many years ago. The book I was using never bothered to explain them properly, and the whole notion of "Dim X as Variable" was alien to me: Why would you need to declare variables before using them? What is this 'Dim' keyword? Why do you need variables if you could use the values directly? etc.

Then when I learned C some years later, I had trouble with pointers. I understood how to use them, but I couldn't understand why you'd need them. I guess when trying to explain difficult concepts to beginners, you should always try to give them examples of real practical use. The C tutorial I was following said you could use pointers to allocate heap memory, but didn't tell me why I'd need to allocate memory.
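The kind of practical example that was missing might look like this (an illustrative sketch only - the point is just that the size isn't known until run time):

#include <iostream>

int main() {
    int n;
    std::cout << "How many scores? ";
    std::cin >> n;   // assumes you type in at least 1

    // The array size isn't known until the user types it in,
    // so it can't be a fixed-size local array: allocate it on the heap.
    int* scores = new int[n];
    for (int i = 0; i < n; ++i)
        scores[i] = i * 10;

    std::cout << "last score: " << scores[n - 1] << "\n";
    delete[] scores;   // and this is why you also have to free it
}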

I never had trouble with OOP. It seemed pretty logical and intuitive to me. It's closer to the way people think.

Firas Assaad
+9  A: 

Performance != Optimisation

rephrased "Performance != The Highest Objective" -BCS

Performant code is fast.
Optimised code is elegant and easily extensible.

_ande_turner_
I think I have to disagree with your wording on this. I get your point and agree with it, but IMHO optimization == improving performance (generally space or time usage). I think it would be more accurate to say "Performance != The Highest Objective"
BCS
Optimization can be done from various views, e.g. smallest code footprint, fewest lines of code, fastest performance, memory required; not all of them deal with performance.
JB King
+18  A: 

Lots of people cite OOP but basic OOP really isn't that hard to understand because you can give fairly visible real-life examples of how objects work.

I found the grittier sub-topics of OOP harder to understand. I'm talking inheritance and polymorphism. I read a lot of definitions of both at university and I understood what they were saying, but I didn't understand why I'd want to use either until after I'd done a couple of large coursework projects.
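To make that "why" concrete, a minimal illustrative sketch (the Shape/Circle/Square names are made up for the example): one function works on every subclass, present and future.

#include <iostream>
#include <memory>
#include <vector>

struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};

struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

// This function never needs to know which concrete shapes exist: that's the payoff.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0;
    for (const auto& sh : shapes) sum += sh->area();
    return sum;
}

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    std::cout << total_area(shapes) << "\n";   // prints roughly 7.14
}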

Some patterns made me wonder "why?" too. If you're trying to learn, you really need a full example to see where you'd want to implement them because one-line definitions don't cut it.

Thankfully pointers made sense to me when I learned C. They're fairly logical and it was only the syntax that caused the initial problem.

MVC (in webdev) was another "why?" topic for me. I'm used to separating my data logic from display logic and display code, so it seemed like what I was already doing, which probably exacerbated my problems in getting used to a fixed way of doing it.

Version control is a very important topic that lots of people put off learning until they're forced to at gunpoint.

Functional programming is something I'm still putting off learning. Again, because I can't see the point/benefit.

Oli
I don't think people have a hard time understanding OOP as much as they do using it in practice, and using it efficiently. Well designed OOP takes a lot of time and practice.
Spodi
Ditto!! on Spodi.
kenny
+1  A: 

I think that in general the hardest part is the general shift in the way we think; programmers think about most things differently than most other people, especially when presented with a problem. When I speak with other computer people I can usually tell right off the bat whether or not they are a programmer, just by the way they think. When confronted with a problem, a typical person looks at the problem as a whole and tries to "eat the entire elephant all at once", but when a programmer gets a problem they instinctively break it down into smaller, easier-to-chew bits.

This way of thinking is not something that can be taught in a classroom; some people are born with it, others learn it. And I think this process of learning how to think is by far the hardest part of becoming a successful programmer.

Unkwntech
+1  A: 

Hardest concept for me has always been Windows geometry. From the origin being the top-left, to viewports and mappings and dialog units and dpi, from screen co-ordinates to client co-ordinates, it has always been a bit of a mind fornication trying to get drawing and hit-testing code right first time. And that's without mentioning rounding errors (which have caused me no end of headaches in the past).

I find it all much easier now because I've been burned in the past, but still, that was a hard thing to get my head around initially.

Besides that, what was part of the language and what was provided by a library was also something I initially struggled with. For example, "for" is a language keyword whereas "printf" is not.

Jeff Yates
+9  A: 

For me the hardest concept was generalized recursion. Not the divide-and-conquer style like in quicksort, but the Lisp-style loop via recursion. Mostly I took forever to get around to it. I saw it now and again but never really tried to figure it out. Once I actually worked with it (in a CS languages class) it became really clear and VERY handy (I do a fair share of template metaprogramming).
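A small illustrative sketch of the loop-as-recursion idea, in C++ here rather than Lisp (the function names are made up for the example):

#include <iostream>

// The iterative version...
int sum_to_loop(int n) {
    int total = 0;
    for (int i = 1; i <= n; ++i) total += i;
    return total;
}

// ...and the same loop expressed as recursion, Lisp-style: the "loop variable"
// becomes an argument, and the next iteration is the recursive call.
int sum_to_rec(int i, int n, int total) {
    if (i > n) return total;
    return sum_to_rec(i + 1, n, total + i);
}

int main() {
    std::cout << sum_to_loop(10) << " " << sum_to_rec(1, 10, 0) << "\n";   // prints 55 55
}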

BCS
+37  A: 

Regular Expressions! I still need a reference when I use them.
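For example, a tiny illustrative snippet using C++'s std::regex (the pattern just pulls a date apart; the names are made up for the example):

#include <iostream>
#include <regex>
#include <string>

int main() {
    // \d{4}-\d{2}-\d{2} : four digits, a dash, two digits, a dash, two digits
    std::regex date_re(R"((\d{4})-(\d{2})-(\d{2}))");
    std::string text = "released on 2009-06-15, patched later";

    std::smatch m;
    if (std::regex_search(text, m, date_re))
        std::cout << "year=" << m[1] << " month=" << m[2] << " day=" << m[3] << "\n";
}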

Gary Willoughby
Same here, they can be a real pain in the ass. I don't know many who know regular expressions like the back of their hand. It's one of those technologies not many spend much time with until the moment it's needed.
Chuck Conway
me too. I can't do them without Expresso...
Andrei Rinea
+2  A: 

Monads have always been somewhat opaque to me. I understand the basic laws and such, but anything beyond Haskell's Maybe monad is a little beyond me right now.
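One illustrative analogy (not Haskell, and only a rough stand-in for Maybe, with made-up names): chain steps that can each fail, and let an empty result short-circuit the rest.

#include <iostream>
#include <optional>

// maybe_bind: run the next step only if the previous one produced a value;
// otherwise propagate the empty result automatically (the Maybe/bind idea).
template <typename T, typename F>
auto maybe_bind(const std::optional<T>& m, F f) -> decltype(f(*m)) {
    if (m) return f(*m);
    return std::nullopt;
}

std::optional<double> safe_div(double a, double b) {
    if (b == 0) return std::nullopt;
    return a / b;
}

int main() {
    auto ok  = maybe_bind(safe_div(10, 2), [](double x) { return safe_div(x, 5); });
    auto bad = maybe_bind(safe_div(10, 0), [](double x) { return safe_div(x, 5); });
    std::cout << ok.value_or(-1) << " " << bad.value_or(-1) << "\n";   // prints 1 -1
}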

directrixx
+1  A: 

For me I would have to say it was many levels of indirection. Whether it was assembler or C, having pointers pointing to pointers or arrays of pointers gets messy pretty quickly. Not to mention the additional level of confusion that segments could add to the equation on Intel 16-bit processors.
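A small illustrative sketch of the pointer-to-a-pointer case, where most of that confusion lives (the names are invented for the example):

#include <iostream>

// A classic use of double indirection: let a function change
// which object the caller's pointer points at.
void point_at_bigger(int** pp, int* candidate) {
    if (candidate != nullptr && (*pp == nullptr || *candidate > **pp))
        *pp = candidate;   // changes the caller's pointer itself, not the int
}

int main() {
    int a = 3, b = 9;
    int* best = nullptr;
    point_at_bigger(&best, &a);
    point_at_bigger(&best, &b);
    std::cout << *best << "\n";   // prints 9
}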

I think universally most people don't grasp memory management. Whether it's allocating and de-allocating memory and resources in C or creating collections of objects in an OOP language. The reason that I say this is because so many people get it wrong.

bruceatk
+11  A: 

How to avoid duplication.

Manu
+6  A: 

I think that there are several skills that a good programmer needs: the ability to abstract, the ability to think recursively, and the ability to imagine complex networks.

Since beginners have different aptitudes in each, their problems correspond: bad design/modularization/functional decomposition, recursive algorithms and structures, pointers.

It's also interesting that a lot of people (more math oriented) are good with pointers and algorithms but horrible in abstractions and decomposition. The converse is also true. I consider this to be the gap between good classic CS folks and good engineers. Very few people can fit in both categories, unfortunately.

Uri
A: 

Pointers were one of the biggest problems I struggled with. Referencing and dereferencing them, etc. I overcame the problem by following tutorials and reading as much as I could about them. It was a happy day for me when I figured them out.

Zee JollyRoger
For me, the concept of pointers was simple, but I was confused for a while by C's confusing syntax.
Kyle Cronin
+15  A: 

Variables. Or more specifically, the fact that a variable is not the same as the value that it represents.

It actually took a while before I fully realised this, but it made a lot of things much clearer. Now I often recognise the same fallacy in less experienced programmers.

There are a lot of things that are technically much more complicated, but understanding these fundamental leaps of abstraction is usually very hard.
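A tiny illustration of the distinction (the answer doesn't name a language, so C++ is used purely as an example):

#include <iostream>

int main() {
    int a = 5;
    int b = a;     // b gets a copy of the value 5, not "a itself"
    b = 7;         // changing the variable b doesn't touch the variable a
    std::cout << a << " " << b << "\n";   // prints 5 7
}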

troelskn
I mostly see this from beginners that don't have a good math background.
jop
Possibly - That would fit my description, I guess.
troelskn
It's interesting that you say this, especially because functional programming attempts to make them the same, i.e. have only values and no variables.
Jonathan Tran
This is mostly true of people who come from the higher high-level languages (Java/C#/VB.net/etc). People who've had to deal with pointers for everyday operations deal much better with the idea because there's less to unlearn
Oli
I've taught beginners programming...This is the hard thing.
Paul Nathan
A: 

Can't remember what I struggled with, it's been too long and I was too young.

That being said, what I see most OO programmers struggle with is NullReferenceException. So many people can't grasp that you can't call methods on null.

TheSoftwareJedi
+14  A: 

I can't say I came across these as a beginner, but:

  • Continuations aren't immediately obvious, and I wouldn't want to be a compiler writer with the job of implementing them (which is probably why so few languages support call/cc)

  • I still don't grok how monads give rise to purely functional I/O in Haskell, mind you I haven't used Haskell since a semester class at University years ago, and have never done any I/O in it.

Hugh Allen
A: 

Pointers were damn confusing.

One thing that also bugged me was optimising my code: I never knew when to stop with the minor performance tweaks that made so little difference they weren't worth the time to implement.

Adam Gibbins
+24  A: 

That someone else would someday be fixing my code.

Hard to grasp, but also the thing that had the most influence on making me a better programmer.

--
Bruce

bmb
True, especially the fact that "someone else" is normally you in 6 months, after you've forgotten all about those weird special cases and what they were for
Daniel Magliola
Daniel, it didn't *really* make me a better programmer until I realized it *would not* be me in 6 months.
bmb
I think this is deeply insightful.
Barry Brown
"Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live."... well, I know where I live; guess I had better start leaving breadcrums for my future self...
TokenMacGuy
TokenMacGuy, if you think it's going to be your future self, you still don't get it.
bmb
+3  A: 

Polymorphism was one of the weirdest concepts for me to wrap my head around. Not because it was complex, but because I almost immediately understood it, just not how to use it. I was trying to make functions that worked with a specific class and pass them subclass members and parent class members; I was casting where I shouldn't have been, and I expected everything to work out fine. Later I learned how to structure the problem to fit the tools I had.

Learning the reasoning behind the tools is far more important to me than simply learning how to use the tools.

The exact same thing happened to me with pointers. I immediately understood the concepts, but I had no idea why such a convoluted tool existed. Then I made my first linked list. Wow, what an epiphany. Not only was there a way to use this, but it did something that I was so oblivious to that I had to change the way I looked at coding. These were two of my major windfalls when it comes to coding, and I am sure that I will have more; I just need to keep trying to understand as much as possible.

When you learn something new, make sure that shortly after you understand the syntax, you understand what problems that tool was intended to solve and can solve. Learning what problems it should solve can help prevent you from deploying it incorrectly, and eventually lets you deploy it in creative and novel ways that still make sense.

Sqeaky
Polymorphism was difficult for me to grasp as well for a while. Then, I was presented with a situation where I needed it and it became obvious.
Adam Lassek
+3  A: 

For web development, it seems to be the difference between client side (Javascript), and server side code (PHP, ASP.Net, Java). I don't understand why, I've never had problems with it myself, but it seems to be a recurring problem among many developers posting on forums. People continually post questions about how to use C# to run some code after the page is finished loading, or how to use Javascript to store form information in a database.

Kibbee
+6  A: 

Everything other than writing the code, without a doubt. I borrowed or bought many C books in my early days, but suffered for years trying to understand how to really build software. None of these texts talked about anything more than writing a single small program, completely self enclosed. I wasn't exposed to detailed understandings of compilers, modules, linkers, source control, and all the other not-writing-source-code activities that often make up the bulk of development work.

ironfroggy
+13  A: 

References in C++. It took a while for me to accept the fact that

int &x = a;

means that x becomes an alias for a. Not a copy of a, not a weird pointer to a: x is a.
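Seeing the aliasing in action makes the point (a minimal illustrative snippet):

#include <iostream>

int main() {
    int a = 1;
    int& x = a;    // x is another name for a
    x = 42;        // assigning through x assigns to a
    std::cout << a << "\n";              // prints 42
    std::cout << (&x == &a) << "\n";     // prints 1: same address, same object
}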

Mike Spross
How is a reference not 'a weird pointer'? Because you can't rebind it?
Simon Buchan
@Simon: I meant to say that references made more sense to me when I stopped trying to think of them as pointers with a different syntax (i.e. "weird pointers"). Not worrying about how they were actually implemented made it easier to grok the higher-level concepts behind them.
Mike Spross
+1  A: 

Mostly C/C++ related things.

printf format specifications - I never quite understood how this worked till I worked on code that mimicked printf. What made it worse was that our lecturer didn't allow us to use cin/cout even though that's what the textbook prescribed. His view was that we shouldn't use code we don't understand - and we didn't understand streams.

How to read input - This was hard because I didn't fully understand the portability issues

Placement new - The concept is easy, I just kept forgetting what it meant because I never used it
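As a reminder of what it does, a minimal illustrative example (the Widget type is made up): placement new constructs an object into memory you already have, instead of allocating new memory.

#include <iostream>
#include <new>       // for placement new

struct Widget {
    int id;
    explicit Widget(int id) : id(id) { std::cout << "constructed " << id << "\n"; }
    ~Widget() { std::cout << "destroyed " << id << "\n"; }
};

int main() {
    // Memory we already own: placement new constructs an object *into* it
    // instead of asking the allocator for fresh memory.
    alignas(Widget) unsigned char buffer[sizeof(Widget)];
    Widget* w = new (buffer) Widget(7);
    std::cout << w->id << "\n";

    // No delete: we didn't allocate, so we only run the destructor by hand.
    w->~Widget();
}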

The hardest part - bar none - was understanding OOP. It took me a few years of programming to finally get it. Every time I thought I finally understood it, sooner or later it would dawn on me that I was wrong. It was a very humbling experience though. I learned what a profound statement it is to claim that you "understand" something.

ilitirit
so... instead of using streams, which you didn't understand, he had you use formatters, which you also didn't understand, even when `puts()` is perfectly good? sad...
TokenMacGuy
+1  A: 

Continuations. I still don't quite get them properly.

Yeah I know they're not really a beginner subject :-)

Orion Edwards
A: 

Writing an anonymous recursive function using a fixed point combinator, such as the Y combinator.
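A hedged C++ sketch of the idea - a typed fixed-point helper rather than the pure untyped Y combinator, which can't be written directly in a simply typed language - where the factorial below never refers to itself by name:

#include <functional>
#include <iostream>

// fix(f) returns the recursive function whose body is f, where f receives
// "itself" as an explicit argument. (A typed stand-in for the Y combinator.)
template <typename F>
std::function<int(int)> fix(F f) {
    return [f](int n) { return f(fix(f), n); };
}

int main() {
    // An "anonymous" factorial: no named recursive call anywhere in its body.
    auto fact = fix([](std::function<int(int)> self, int n) {
        return n <= 1 ? 1 : n * self(n - 1);
    });
    std::cout << fact(5) << "\n";   // prints 120
}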

A. Rex
A: 

The fact that every single decision you make in software engineering is a tradeoff. Being able to recognize these tradeoffs is a fundamental skill that isn't necessarily explicitly talked about. There are many classic tradeoffs (memory vs. speed, security vs. performance etc). Every design decision you make is in some way a tradeoff.

Ben
+7  A: 

Designing loosely coupled, maintainable, extendable, reusable objects (interfaces) in OOP.

TG
I'm still struggling with this one.
Sekhat
+1  A: 

OOP is really simple - you just start to use classes. And you can make great inheritance hierarchies out of those classes to really facilitate code reuse. And of course the mighty design patterns - you can use singletons all over the place.

Sadly, for most programmers OOP means using classes for namespacing. Which is a great concept to grasp too, but as many have pointed out: true OOP is not that easy to understand.

Rene Saarsoo
In particular, going from OO theory to practice. I was fortunate enough to work with someone who was a great mentor and that really did it for me.
objektivs
A: 

For a C++ programmer, the first hard concept would be pointers. Especially references (&) and function pointers. Also, pointer arithmetic was hard until I was actually looking at memory and watching the pointer move.

Jiminy
A: 

Recursion. Pointers are annoying, but the concept makes sense. Object oriented programming seems intimidating, but it's intuitive once you grasp the basic concept.

But recursion? I still have a horrible time with it.

James
Learn Haskell! You'll have a wild ride.
Artelius
A: 

I find that most junior programmers have a hard time knowing when and how to use singletons and statics in OOP. Especially if they come from a functional/procedural background. They most often use them to namespace their functions.

olle
+1  A: 

I had no problems with pointers, pointers to pointers, method pointers, etc..., but I got started with assembly very early, before learning C/C++, so that may be the reason. What took me a long time to get right is good class design, with all the intricacies of abstract classes, interfaces, inheritance, design patterns. OOP is deceptively easy, but it can be tricky to get right when you start dealing with more than a half dozen related classes. I still look at code from 1990-1995 and cringe.

Tony BenBrahim
A: 

A couple of decades ago, but...

Moving from BASIC to Z80 assembler was difficult. Just coming to terms with how sparse a language Z80 really was (and it was positively rich by comparison to the 6502).

Some time later, the move from Pascal to C I found more difficult than I should have. So many symbols, so few words.

C++ was never a problem, but templates caused a bit of confusion at the time as did moving from home spun loops to iterators.

Shane MacLaughlin
+1  A: 

Starting out, following the 'C' text book was pretty easy - and so exciting I stayed up half the night writing the little example programs to subtract two numbers etc. etc.

The hard part was going from there to writing programs that actually do something useful, organised into functions, classes and modules. In my first holiday job I was writing some test software for a hardware engineer and I wrote the whole thing as one big function :-) The hardware guy didn't notice anything wrong, but on my last day another software engineer realised what I'd done, took me to one side and explained about using separate functions...

sparklewhiskers
+1  A: 

The difference between server side and client side in Web App programming

StingyJack
The number of times I get asked "why does my javascript not fire after a Response.Redirect" in an ASP.NET page is so annoying
Slace
A: 

Functions. When I started programming (in C, at 14) I had a hard time understanding what they were for and how to use them appropriately. Couldn't we just put all the code in main()?
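A small illustration of the "why" (names invented for the example): pull the logic into a function and you can call it twice and test it on its own, instead of pasting the loop into main() twice.

#include <iostream>

// Pulled out of main so it can be reused and tested on its own.
double average(const double* values, int count) {
    double sum = 0;
    for (int i = 0; i < count; ++i) sum += values[i];
    return count > 0 ? sum / count : 0;
}

int main() {
    double quiz[] = {7.5, 9.0, 8.0};
    double exams[] = {55, 72};

    // Without the function, this loop would be duplicated in main() twice.
    std::cout << average(quiz, 3) << "\n";
    std::cout << average(exams, 2) << "\n";
}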

Doron Yaacoby
A: 

These are the things that I find new developers have the biggest problems with:

  • Variable scoping in ASP.NET. It won't be there when you next post back!
  • Just because you IM'ed or emailed me doesn't mean I'll be responding immediately
  • It's better to try and fail than to not try at all
Slace
A: 

I've found that variables can be a hard thing for designers learning to write code with no math background. I've had many forehead slapping moments trying to explain this to them. ("what do you mean you don't know what a variable is? didn't you take basic algebra at some point in your life?...")

The Brawny Man
+16  A: 

Threading!
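A minimal illustrative sketch of why it bites people (names invented for the example): two threads bumping one counter lose updates unless something serialises them.

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_mutex;

void bump(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);   // remove this line and the total is usually wrong
        ++counter;
    }
}

int main() {
    std::thread t1(bump, 100000);
    std::thread t2(bump, 100000);
    t1.join();
    t2.join();
    std::cout << counter << "\n";   // 200000 with the lock; typically less without it
}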

Brian G
I don't think it's just for beginners
Kozyarchuk
How can threading be hard to understand? If you've ever stood in line or driven a car, you know that if two people try to be in the same place at the same time, something bad happens.
Joey Adams
+5  A: 
Sam Saffron
There are some variations on it, and Project Cartoon also has a modern one in different languages ( http://www.projectcartoon.com/ ).
Spoike
A few years on a helpdesk will cure anyone of this problem. You eventually come to realize that "the customer always lies". Not maliciously. Not because they're stupid. Not because they think *you're* stupid. Because you and they simply don't speak the same language. *Always* figure out what the real question/complaint/request is before you invest a lot of time in it.
Ben Blank
+3  A: 

For me, this was definitely Domain-Driven Design.

I found that most of the concepts of OOP were fairly simple to get. Polymorphism, Inheritance, Encapsulation (in theory at least), etc. are all simple concepts up front, but actually being able to look at a problem domain and understand how to use those tools to effectively design your system so that it is extensible and maintainable is literally something that I'm still working on (and I'm 4 years into this).

However, making the conceptual leap from just randomly using those ideas in my code whenever they made some sort of weird sense, to actually asking "how does my domain require me to use OO principles in order to make this code as maintainable and clear as possible?", was huge and very difficult for me to wrap my head around.

Tim Visher
A: 

For C/C++, it was always pointers and references that blew my mind. I was young at the time, though.

In Java, threaded programming hasn't necessarily blown my mind but it always ends up being stickier than originally anticipated.

MattC
+1  A: 
defaulthtm
I agree except for the third party library. They are written by fallible programmers like us. And libraries are harder to write than applications.
finnw
In almost every bulleted item, if something is wrong with any of them, it's probably **me** that caused it, therefore `GOTO 10`
TokenMacGuy
One more thing to add to the list: the hardware.
mikez302
A: 

The balance between trying to solve the problem directly in front as quickly as possible and trying to develop a solution that would be reusable under all conceivable circumstances. Writing code that solves the problem expeditiously, can be extended without a huge rewrite/redesign, and suggests itself for re-use.

+2  A: 

Project management

Requirements, specs, interface documents, architecture docs, test plans, etc.

It took a while, but it went from "unnecessary overhead" to "absolutely necessary" to do anything maintainable.

+18  A: 

The beauty of simplicity.

In my early years I always preferred a solution that was harder to grasp because it seemed "geekier".

VVS
I remember that myself, and I still see people around me in my computer science education who try to put as much as possible into a single line and give their variables really short names because it looks more "advanced" and because that's (unfortunately) how many of the examples provided to us are presented.
Jakob
lol soooo freaking true
non sequitor
+2  A: 

Windows API in general. &!#$%$"!

supermedo
A: 

I'd say that it's the concept of the user. That all programs are basically written for other people, and it's them one should think of first.

epatel
I think the idea that writing is always to an audience is hard to learn in class for any language, computer or human.
TokenMacGuy
+3  A: 

Research shows that there are three problems that most new programmers/students have:

1) Understanding assignment.

a = 2;
b = a;

-> Value of a & b? Lots of people don't even pass this step.

2) Recursion

3) Locking / Multithreaded programming.

The last one was the hardest for me to get.

Carra
+1 When I was a beginner (in my teens) I had no idea how assignments worked. And I don't remember how I overcame that learning curve of a barrier; it was as if I woke up one day and finally got it.
Spoike
I really appreciate how each item is about an order of magnitude harder.
TokenMacGuy
1st one is impressive.
Eonil
A: 

Why, why should I use something? Before you actually get some programming done and into a programming mindset, it's kind of hard to see where something would be useful. A lot of examples that are often given are somewhat trivial and convoluted, intended to explain the how to use and not the why to use. Often after looking at such examples someone would end up asking: "Well, couldn't I have used something else?" or "Why would I ever want to do that?"

An example: my girlfriend was learning JavaScript for one of her web design classes, and I explained how a for loop worked, but had trouble explaining why she would want to use it. Despite using them all the time myself, I had trouble coming up with a simple real-life example.
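Something like the following might have served (an illustrative C++ sketch standing in for the JavaScript of the story): total up a shopping cart, however many items it happens to hold.

#include <iostream>
#include <vector>

int main() {
    // Prices in a shopping cart: the same loop works for 3 items or 300.
    std::vector<double> prices = {4.99, 12.50, 0.75};

    double total = 0;
    for (std::size_t i = 0; i < prices.size(); ++i)
        total += prices[i];

    std::cout << "order total: " << total << "\n";   // prints 18.24
}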

Davy8
+1  A: 

I had a very difficult time understanding Hash Tables in my undergrad courses. I remember being scared anyone would start talking about it. It just made me nervous to think about it.

It wasn't that I didn't understand the concept. I really didn't understand how to properly use a hash table, when to use it, and why anyone would want to use one.

The first real programming job I had required me to work with them. Since then, I have gained a better understanding of how, when & why to use a hash table. I wouldn't say I'm an expert on hash tables, but I no longer recoil in fear at the mention of one.
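A small illustrative example of the classic "when": counting word frequencies, where lookup by key has to be fast (using C++'s std::unordered_map purely as an example).

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<std::string> words = {"red", "blue", "red", "green", "red"};

    // A hash table keyed by the word: each lookup/insert is O(1) on average,
    // which is the whole reason to reach for one.
    std::unordered_map<std::string, int> counts;
    for (const auto& w : words)
        ++counts[w];

    for (const auto& [word, n] : counts)
        std::cout << word << ": " << n << "\n";
}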

Dr. Bob
A: 

template programming

Ronny
A: 

I am still struggling with the visitor pattern
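A minimal illustrative sketch of the pattern (the Shape/Circle/Square/AreaPrinter names are made up): the element hierarchy stays fixed while new operations are added as visitors.

#include <iostream>

struct Circle;
struct Square;

// One visit method per concrete element type.
struct Visitor {
    virtual void visit(const Circle&) = 0;
    virtual void visit(const Square&) = 0;
    virtual ~Visitor() = default;
};

struct Shape {
    virtual void accept(Visitor& v) const = 0;
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r = 1.0;
    void accept(Visitor& v) const override { v.visit(*this); }   // double dispatch
};

struct Square : Shape {
    double s = 2.0;
    void accept(Visitor& v) const override { v.visit(*this); }
};

// A new operation added without touching Circle or Square.
struct AreaPrinter : Visitor {
    void visit(const Circle& c) override { std::cout << "circle area " << 3.14159 * c.r * c.r << "\n"; }
    void visit(const Square& s) override { std::cout << "square area " << s.s * s.s << "\n"; }
};

int main() {
    Circle c;
    Square sq;
    AreaPrinter printer;
    const Shape* shapes[] = {&c, &sq};
    for (const Shape* sh : shapes)
        sh->accept(printer);   // dispatches to the right visit() overload
}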

Kozyarchuk
+1  A: 

How to structure my code (which was basically if()s and printf()s) to avoid swapping floppy disks in the two disk-drives too often as the SAS/C compiler did its thing.

asjo
A: 

Pointers and memory management.

A: 

The hardest thing for me was, and still is, using polymorphism, inheritance, and interfaces correctly. The concept of polymorphism has never been hard for me to understand, but one of my biggest realizations in programming came when I started looking heavily at using polymorphism and inheritance to make writing code easier and to eliminate duplicate code.

Nagrom_17
A: 

Pointers in C.

If you crack it you know how memory management works.

Thanks.

this. __curious_geek
Oh no, no you don't. There are plenty of programmers that understand pointers but are completely misguided about memory management.
Artelius
@Artelius: There is no understanding pointers without understanding memory management. Getting and setting values through pointer indirection is not enough to say you understand pointers.
TokenMacGuy
+3  A: 

The "a-ha" moment of functional programming.

I'd seen many people say that learning Lisp or Haskell would make you a better programmer, and that there was a brilliant moment where everything suddenly clicks.

At first I thought to myself "Bah, it's just rewriting loops as recursion. These people are probably just excited about finally understanding recursion."

But after a while I decided that I wanted to be sure. So I wrote a fractal program in Scheme. I thought to myself, "Well, that was interesting. But mainly it was rewriting loops as recursion."

I thought that was the "a-ha" moment. Clearly, I didn't get it yet.

This year, I went to a talk by Conrad Parker, who spent some of his talk on Haskell, and encouraged everyone to learn it. "Yeah," I thought, "OK." And I put some real effort into learning Haskell properly.

I think I had the real "a-ha" moment already, though maybe there's still a bigger "a-ha" moment on its way. Certainly I love Haskell and now I think the hype is justified.

Artelius
A: 

The hardest concept to grasp for me was Exceptions. And in a way I still struggle with them to this day.

Exceptions are clouded in mystery. On one hand, here you have an incredibly flexible built-in idiom for error handling. But then you have a myriad of best-practice rules that invariably, if followed to the letter, turn this whole framework into a rare occurrence in your code, if not one entirely absent.

From performance considerations to idiomatic dogmatism, exceptions are one area of programming in a language like C++ that feels very much like a tempting forbidden fruit. It's right there in front of you, you stretch towards it, and promptly someone rushes in and slaps your hand. Frustrating.

Krugar
A: 

The difference between thread-safe and reentrant.
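A hedged illustration of the distinction (names invented for the example): a function that protects hidden shared state with a lock is thread-safe but not reentrant, while a function with no hidden state is reentrant - and the two properties are independent.

#include <iostream>
#include <mutex>

std::mutex counter_mutex;

// Thread-safe but NOT reentrant: the mutex protects the shared counter from
// other threads, but re-entering this function on the same thread (e.g. from
// a signal handler, or recursively) while the lock is held would deadlock,
// and it depends on hidden static state.
int next_id_threadsafe() {
    static int counter = 0;
    std::lock_guard<std::mutex> lock(counter_mutex);
    return ++counter;
}

// Reentrant: no hidden state, the caller supplies everything. (It is only
// thread-safe if each thread passes its own counter.)
int next_id_reentrant(int* counter) {
    return ++(*counter);
}

int main() {
    int my_counter = 0;
    std::cout << next_id_threadsafe() << " "
              << next_id_reentrant(&my_counter) << "\n";   // prints 1 1
}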

pierr
+1  A: 

I've always seen the hardest part of programming as the person explaining it - and their arrogance.

Today I picked up PuTTY - never used it before - and had rude comments thrown at me. But funnily enough I didn't give him the same treatment when he wasn't sure what MVC was.

It's the people, dude. The specific types who wish coding was a hacker's black box and a Masonic order, never really 'wanting' you to know.

That's why I vow to always do my best on this here forum (dons cape).

Glycerine
A: 

Delegates here. They just seemed to be a waste of time: why would you create something with a method signature that matches your own methods? I have a ton of void methods with no parameters, so I end up with a "ReturnsVoidDelegate" I use for everything. Why would C# make me do that?

Hurrah for .NET 3.5/4.0 where we get nice inline delegates and lambda expressions :)

SLC