The single hardest skill I have learned (evidently not perfectly) is maximizing performance while preventing thread deadlocks in a massively multi-threaded environment driven by asynchronous events from remote calls and from GUI events.
Until I got my head around it, the declarative nature of PROLOG was a complete left turn from anything I'd touched up to that point. It took me weeks to realise what was going on until - snap! - a light finally turned on.
I would say it's a toss-up between floating point arithmetic (truly comprehending how IEEE-754 works), and theoretical verification (especially of parallel algorithms). But, if I had to pick, I'd probably go with verification.
Pointer. Freaking. Arithmetic. I'm convinced it only exists to keep out the riffraff (and ensure that maintenance C programmers will have jobs until 2137 fixing memory allocation errors).
Recursive functions.
That said, once I figured the pattern out, it has helped me look at new problems from totally different viewpoints.
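To make that "different viewpoint" concrete, here's a minimal Python sketch (the nested-list example is mine, not the poster's): a problem that's awkward with plain loops falls out naturally once you let the function trust itself with the smaller sub-problem.

```python
def flatten(nested):
    """Recursively flatten an arbitrarily nested list: handle one item,
    and delegate each sub-list back to the function itself."""
    flat = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(flatten(item))  # recurse into the sub-list
        else:
            flat.append(item)
    return flat
```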
The hardest thing relating to programming is finding adequate documentation. I usually know what I want to do, and have a general idea on how to implement it, but finding the details through documentation very seldom works.
Monads.
First exposure is OK; then you try to explain them to someone else and can't do anything but go reread the papers.
Then you discover STMs and wonder why people are still playing with locks and threads manually. Computer science is still in the stone age... :)
Truly appreciating the value of testing and being able to design for testability and to be able to write testable code (and also writing test cases that actually properly test what's being tested).
Writing your entire program as an expression; which is essentially what a functional language like Scheme or Lisp demands. After that would be identifying the resources that need to have thread contention protection. I would say debugging deadlocks in threaded applications, but there seems to be so much variety in what can go wrong that I always learn more while debugging.
Emacs
me: what's the key binding to commit a file?
emacs master: C-x v v
emacs master: you can list all the key bindings with C-h b
(10 seconds later, after I typed my commit message)
me: what's the key binding to actually commit now?
SQL. Hands down.
C#, C++, C, Clipper, x86 Assembly, and JavaScript were easy compared to SQL. I guess it was hard to change my mind from procedural languages to a set-based language.
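The mindset shift can be sketched with Python's sqlite3 (table and data invented for illustration): the procedural instinct is to SELECT rows, loop, and UPDATE one at a time; the set-based instinct is to describe the whole set in a single statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("ann", "eng", 100.0), ("bob", "eng", 90.0), ("cat", "ops", 80.0)],
)

# Procedural habit: fetch each engineer, compute a raise, update per row.
# Set-based habit: one statement over the whole set.
conn.execute("UPDATE employees SET salary = salary * 1.10 WHERE dept = 'eng'")

total = conn.execute(
    "SELECT SUM(salary) FROM employees WHERE dept = 'eng'"
).fetchone()[0]
```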
C pointers were always a pain for me. I've been able to figure out how to use them, but I find that whenever I take some time away from C, I need a refresher on pointers when I come back to it.
The very hardest part overall was probably when I was 15 or so and was trying to understand the concepts behind BASIC variables on my TRS-80 so that I could interact with hand-coded assembly routines I was fooling with.
Figured out pointers, reference counting and memory allocation all in one week, with no help except for a Z80 assembly reference book. (Nobody I knew had a computer in '78, and there were no "For Dummies" books; by reference book I mean no examples - just opcodes, register descriptions and electronic signaling.)
When I came to C like 10 years later, pointers were trivial and obvious.
Everything else--inserting bitfields into assembly operations, converting them to integer values so they could be poked into memory, ... was just fun.
Network programming (in both C++ and Java). Maybe just because it's tedious, but it seems like 50% of the code is either detecting or recovering from an error condition.
Setting up a large, existing project in Eclipse (or any IDE) so that it will (a) build and (b) allow debugging (inside the IDE for both).
I dislike configuration: I don't have the patience for it and it seems the only way to get experience is to repeatedly fall down flights of stairs. There are no university classes; there are few books. Pair-programming helps greatly here.
I have learned to do it but back in the day (2002-03?) it was the bane of my existence. To this day, I will help you with anything I can, but please don't ask me to set you up with Eclipse.
XSLT. It requires holding multiple complex "plates in the air" all at the same time: a declarative/recursive program that is manipulating (at least) two document structures, at least one of which is usually hierarchical.
It's hard to write, even harder to read, and not conducive to standard debugging techniques.
What people say they want from a program != what people actually want from a program
Functional Programming - Recursion. I'm finding that I have to constantly refactor code to improve it. It has forced me to think in a new paradigm, which is a challenge.
Formal lexing and parsing techniques are the most difficult concept to master that I have actually used in the practice of day-to-day programming in a normal software shop. Being able to create and parse a simple language of your own design can often be a life-saver in terms of long-term productivity, but can also be a royal pain in the ass if you don't know what you're doing.
I don't think I've ever done anything harder than debugging someone else's assembly-language code when my starting point was a listing and a cheat-sheet of op codes and the only language I'd ever programmed in was BASIC. I got pretty good at it. I've built my entire career around not ever having to do anything like that again.
Several people have already mentioned multi-threading and thread safety. Although I find these disciplines difficult, the real hard stuff for me is finding ways to TEST multi-threading and thready safe code. In the comfort of a sandbox/sample program forcing these situations to occur is only a little harder than trivial; but in a real, otherwise working application I say "Pshaww!" and "That's hard".
Learning to ask for help early on rather than try to be the hero and figure out something on my own which may take too long.
Learning to deal with people who lack skill in architecture and meta concepts but who illogically think that some other expertise makes them correct in this area.
Now in general, I have an easy time getting along with people, and this is a rarity, but when it hits, boy can it be a doozy. Often what you are fighting against here is someone's ego - or rather, you are trying to debate a point of logic but finding that the ego shield is protecting the logical vulnerability. It can be hard to convince someone that you are in fact correct and that they have made a mistake when they have already decided their superiority because they know oh so much more than you about: A) graphical engines, B) SCRUM, C) the C++ spec, D) making flow charts.
The hardest cases are where the person DOES know a great deal about some area but has some inability to follow a logical argument... so questioning the logic of "having skill X automatically translates to skill Y" ends up being interpreted as "ZOMG you have questioned my skills, how dare you."
This is the single hardest skill I think a programmer will ever have to learn. It is especially hard for a programmer because we tend to be poor at diplomacy, especially in the face of illogical arguments. Nothing in school will train you for this, and it really only tends to happen once you hit the mid and senior levels.
Modularity and abstraction.
This is something far too many PHP web developers just do not 'get'. There is something to be said for black-box isolation of APIs. Too much code does everything from assembling HTML to sending SQL to the database on the same page. Bad. Wrong. Doesn't scale.
I'd say designing programs; that is, which code goes where, what should be static, what should be an abstract class, what should be an interface, how it all fits together, etc.
I think it's a difficult thing to learn, and comes with experience.
GBL code itself is hard enough. But then figuring out how to make it NOT look like ass was damn near impossible.
Developing fast and believable artificial intelligence for games. I've been learning it for 15 years, and still learning.
Continuations. It was a real brain-twister to learn how continuations work in a language like Scheme.
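The idea can be sketched outside Scheme too. Here's a loose Python illustration of continuation-passing style (my example, not the poster's): instead of returning, each function hands its result to an explicit callback `k` that represents "the rest of the program".

```python
def add_cps(a, b, k):
    """CPS: rather than return a + b, pass it to the continuation k."""
    k(a + b)

def square_cps(x, k):
    k(x * x)

results = []
# Compute (2 + 3) ** 2 by explicitly threading the continuation:
# "add, then square, then append to results".
add_cps(2, 3, lambda s: square_cps(s, results.append))
```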
The whole C++/OOP thing.
I used to hate it, curse it, lose sleep over it, because EVERY program I had to write for my BSCS HAD to be in C++ unless the course was specifically for another language.
BUT, once I finally got my head around it, I loved it, sang its praises, and wondered how I ever wrote decent code in any other language.
And everything I had to learn since then has been easy by comparison. ;)
"Do the simplest thing that will work".
I used to spend far too much time trying to code every possibility into the routine, planning for any eventuality. Of course, all that extra code meant more chance for defects and more debugging.
I learned that most of that code is never actually exercised. Worse, when the eventualities really do occur, they don't follow the assumptions you made back when coding, so you have to revisit the code and change it anyway. Except it's so complex with possibility thinking that it's harder to change than if you had never bothered.
So now I just write my code for the exact problem at hand. Later, if I need to add something else, I'll re-write (refactor) the code to be the simplest thing to solve the new problem.
-R
Writing easily readable, clear code and not trying to be too clever. So much easier on others and yourself when you have to fix an important bug under time pressure.
Testing is another thing. Never push something that cannot be tested. I once held off a one-line code change for a couple of weeks because the edge case was pretty difficult to reproduce (it needed a particular data condition to occur alongside a set of other conditions). Better to wait than push out untested code.
One thing I still struggle with is the idea that programming is difficult. To me, it comes easy. That leads to some difficulty interacting with other programmers. I've been called arrogant, which I find ironic - in fact, it's a lack of self-esteem that sometimes leads me to believe that others simply aren't trying. After all, so my thinking goes, I'm nobody special, and I can do this - so why can't everyone?
ClearCase. Specifically it was difficult to restrain my homicidal urges while using it as a source control system.
Stifling my instinct to reformat or rewrite legacy code as I work through it. It's far more productive (and less error-prone) to add a few judicious comments that explain the epiphanies along the way.
Maybe not the hardest (it depends on the individual), but certainly the most important for me has been:
Learning to visualise abstract concepts (OOP, set theory, concurrency etc)
Self-referencing Common Table Expressions. They seem so simple, but recursion in a set-based language like SQL is much, much different from recursion in an iterative or functional language.
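A minimal sketch of such a self-referencing CTE, run through Python's sqlite3 (the org-chart table is invented for illustration): each pass of the recursive member adds the next boss to the set until the anchor's chain is exhausted.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE org (id INTEGER PRIMARY KEY, boss INTEGER)")
conn.executemany("INSERT INTO org VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2)])

# Walk the management chain from employee 4 up to the root: the CTE
# joins against itself, set by set, rather than looping row by row.
rows = conn.execute("""
    WITH RECURSIVE chain(id) AS (
        SELECT 4
        UNION ALL
        SELECT org.boss FROM org JOIN chain ON org.id = chain.id
        WHERE org.boss IS NOT NULL
    )
    SELECT id FROM chain
""").fetchall()
chain = [r[0] for r in rows]
```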
Not trying to build my support libraries too early.
I've only recently figured out that you can't write the database-access abstractions or state-saving classes or whatever else until the second time you need to do that task. Only then will you be able to see enough of the general problem to know how to write a solution for it.
Threads.
I never knew what was happening.
Later everything came clearer.
Later I understood that I should not do threads myself but leave it all to the application server :)
x86 segmented memory architecture. It doesn't help that FAR addresses are represented differently using asm/C.
Python tuples. I kid you not.
There's been a long running holy war, with fanatical jerks on both sides, as to whether tuples are read-only lists.
Well, they are.
But that isn't all they are. And that's what was tricky for me.
Halfway through the argument, I realized that I had never needed a read-only list, yet I used tuples all the time. Sorting through that revelation helped me understand not only the "intended use" of tuples, but also why certain implementation details were or weren't like those of lists.
It's Python, so I still reserve the right to use tuples as read-only lists, or lists as read/write tuples (as much as the implementation will let me), but now I understand why some uses are more natural than others.
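A small sketch of that distinction (my examples, not the poster's): a tuple reads naturally as a fixed-shape record, a list as a growable sequence - and only the tuple can key a dict.

```python
# A tuple is a fixed-shape record: position carries meaning, like a
# lightweight struct.
point = (3, 4)
x, y = point

# Because tuples are immutable (hence hashable), they can key a dict;
# a list in the same spot would raise TypeError.
distances = {(0, 0): 0.0, point: 5.0}

# A list is a variable-length sequence you grow, shrink and sort.
readings = [3, 1, 2]
readings.append(4)
readings.sort()
```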
Object Oriented Design (and programming)
When I was just starting, I had a really hard time with it. Actually, I couldn't grok it until a really good prof explained it to us.
The hardest thing I learned was to resist the temptation to just jump in and start coding. Every hour you spend designing and thinking beforehand probably saves you a day in actual coding time.
"The Realm of the Final Inch" - when you think you are done there is still a whole lot to do !
I know that nowadays everyone begins their programming education with OO - but when I started out OO was niche stuff. It took me nearly ten years from first exposure to actually applying it in a useful way.
Mh, I can't really decide.
The first really awkward thing was Monads in Haskell. I think I finally began to understand them because one of our professors explained the concept in a totally different way ("Monads basically overload the ; and the = in Java").
The second really big step was to move from "Hey, I use class in my code, I am object oriented" to actually designing object oriented with concise objects that are told to do things. Python and its dynamic typing helped me a lot here.
The third big step was to actually understand the concept behind DSLs and thus to start seeing where one could use them. This occurred to me while pondering Aspects (another candidate) and other advanced techniques for simplifying programs.
One of those was the hardest concept to learn :)
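That "overloading the ;" remark can be sketched in Python (a loose Maybe-style illustration, not real Haskell): bind decides what happens *between* two steps, here short-circuiting the rest of the chain as soon as any step yields None.

```python
def bind(value, fn):
    """Maybe-style bind: this is the 'overloaded ;' -- it defines what
    'do this, then that' means. Here: stop the chain on None."""
    return None if value is None else fn(value)

def safe_div(a, b):
    return None if b == 0 else a / b

# (10 / 2), then + 1: every 'then' goes through bind.
ok = bind(safe_div(10, 2), lambda r: r + 1)
# (10 / 0) short-circuits: the + 1 step never runs.
bad = bind(safe_div(10, 0), lambda r: r + 1)
```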
The different structures of $_POST['files'] data and a regular, normal, god-fearing PHP array.
Probably my biggest challenge was writing my own implementation of a database index using red-black trees in memory-mapped files. I had to write my own memory allocator in order to allow the removal of nodes as well.
I did have trouble grokking the concept of Object orientation when I first started programming when I got my first computer and taught myself C++. That was when I was 15 or so. (I had actually taken C programming classes before owning a computer)
Many potential candidates here. Writing a virus in assembly language as a teenager. It was hard because I had to deal with self changing code and stupid segments in x86 assembly, and because information was not as easily available as today. I had no internet, BBS or anything like that to access.
PS: I didn't write a virus that was spread, outside my own computer. It was just for the programming challenge ;-)
C++ -- and I must admit that I gave up eventually. I can write complex code in roughly 30 programming languages and I learn a new one every year. I wrote OO programs in C long before people knew what OO was, but C++... ugh. As a friend of mine said: "To understand C++, you must be a C++ compiler".
That's way better than my comment: "C++ is the successful attempt to force developers to use every non-alphabetic character on the standard US keyboard in every single line of code."
Model-View-Controller took me some time to fully understand, but hindsight is 20/20. Once you understand it, it's genius, and amazingly simple.
To strike the word 'easy' from my vocabulary.
"Yeah, I can do that, that'll be easy!"
Functional Programming.
It's such a different approach to problem solving that I find it very difficult (but very valuable) to wrap my head around it.
Comments
Not the idea that you should comment your code, or that you shouldn't comment every single line. There is always an exactly right kind of comment, one that adds just enough explanation about why a bit of code does this or that so that it makes some sense.
Before I started reading other people's code, I mostly used comments as inline documentation: explaining what the arguments were, what the return value was, what the function was supposed to do. But code changes, and comments turn into lies.
When actually reading someone else's code, you will usually start by asking "what does this actually do?", because when you tried to use it, it didn't quite work the way you expected, so you read carefully what it does in the particular case you're interested in.
95% of the time, if the code is concise, uses meaningful identifiers, and is well factored, you never need to look at the comments to understand it.
The first few times that other code was just a bit too clever for me to grok by reading the source, and there was a comment there explaining the bit of cleverness - that's how I came to understand comments.
Now I use them for just that purpose. If I had to think hard about how to make something work, like look up a formula, I'll document where that formula came from. If a bit of code seems sloppy as I'm writing it, or condenses much logic into few tokens, I'll leave a note explaining the reasoning behind it and why I excluded other choices.
In particular, I comment about the assumptions I make about the caller and the data they've sent.
How/When to just fix a bug without throwing out the current version of the code and starting over.
Regular expressions continue to kick my butt. If I don't keep my head in regex on a regular basis, I will forget whatever I learned the last time I used it. Regex has the additional problem that once a pattern is working right, I don't have to go back and touch it again for a long time.
It's like taking 3 years of Spanish class, then not going to Spain to use it and make it fluent. Of course that statement, itself, could be applied to a lot of things in programming.
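One habit that helps when returning after a long gap (my example, pattern invented for illustration): named groups make a pattern partially self-documenting, so future-you can read it without re-deriving it.

```python
import re

# (?P<name>...) names each capture, so the match reads like a record
# instead of a pile of positional groups.
LOG_LINE = re.compile(r"(?P<level>INFO|WARN|ERROR)\s+(?P<msg>.+)")

m = LOG_LINE.match("ERROR disk full")
level, msg = m.group("level"), m.group("msg")
```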
Fast Fourier Transforms. For whatever reason my head didn't want to wrap itself around those for the longest time.
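The core trick is small enough to sketch: an N-point transform splits into two N/2-point transforms over the even and odd samples, glued back together by twiddle factors. A minimal recursive Python version (power-of-two lengths only; illustrative, not optimized):

```python
import cmath

def fft(xs):
    """Recursive radix-2 Cooley-Tukey FFT; len(xs) must be a power of two."""
    n = len(xs)
    if n == 1:
        return xs[:]
    evens = fft(xs[0::2])          # N/2-point FFT of even-index samples
    odds = fft(xs[1::2])           # N/2-point FFT of odd-index samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor rotates the odd half before combining.
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

spectrum = fft([1, 1, 1, 1])  # a constant signal: all energy in the DC bin
```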
The hardest is Psychic debugging (Raymond Chen style) from clueless-user input :D Especially when internal, non-technical teams are involved as intermediaries.
Finding a probable cause for a problem rooted in an interaction between your code and some hardware - with no version/model information, and with two levels of interpretation (customer and internal) of a cryptic error message - is the hardest of tasks. Figuring out what the original error message could have been is the hardest part.
But the day always shines when, after a wild-guess answer, the customer comes back with a simple "Thanks, that solved it."
How video codecs work. Including DCT/iDCT transforms, quantization, zigzag scan, huffman coding, CABAC, motion estimation searches, not to mention rate control - single pass MB-based CBR is probably the hardest thing to get my head around. Once you understand these concepts you can understand most video compression algorithms, like the MPEG family or anything that is block-based and does quantization in the frequency domain.
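Most of that pipeline is too big to sketch here, but one piece, the zigzag scan, fits in a few lines of Python (illustrative, not taken from any codec source): it reorders a block so low-frequency coefficients come first and the zeros left by quantization cluster at the end, ready for run-length coding.

```python
def zigzag_order(n=8):
    """Return the (row, col) visit order of an n x n block in the classic
    JPEG/MPEG zigzag pattern: walk anti-diagonals, alternating direction."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    # On anti-diagonal s = r + c: even diagonals run bottom-left to
    # top-right (increasing col), odd ones top-right to bottom-left.
    return sorted(coords, key=lambda rc: (
        rc[0] + rc[1],
        rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0],
    ))

order = zigzag_order(4)  # starts (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ...
```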
Whilst not a programming language, being able to write mostly cross browser HTML & CSS.
Getting a non-trivial multi-threaded program to work properly with both no-deadlocks and good performance.
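One of the few reliable tools for the no-deadlock half is a global lock order. A Python sketch (the bank-account example is invented): opposite-direction transfers cannot deadlock, because every thread acquires the two locks in the same fixed order.

```python
import threading

accounts = {"a": 10, "b": 10}
locks = {"a": threading.Lock(), "b": threading.Lock()}

def transfer(src, dst):
    """Acquire both locks in one global, fixed order (sorted by account
    name), regardless of transfer direction -- no circular wait, so no
    deadlock."""
    first, second = sorted((src, dst))
    with locks[first]:
        with locks[second]:
            accounts[src] -= 1
            accounts[dst] += 1

def run(src, dst, times=1000):
    for _ in range(times):
        transfer(src, dst)

t1 = threading.Thread(target=run, args=("a", "b"))
t2 = threading.Thread(target=run, args=("b", "a"))
t1.start(); t2.start()
t1.join(); t2.join()
```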
Although not typically considered a programming skill, I would say Logic, and how it applies to my programming on a daily basis. Not just the standard ways in which it is applied to computer science, but also logic in and of itself.
For example, Kurt Gödel's work on arithmetic logic and his theorems on incompleteness ("no useful system of arithmetic can be both consistent and complete") have huge implications and apply directly to work in programming, in my opinion.
The customer doesn't want clever or elegant code. The customer just wants to see his problem solved.
Admitting to a development team that you were wrong, and then having the courage to go back to the drawing board with them and start all over again.
Formal Languages and Compilers. Those were definitely the most challenging concepts I learnt during my CS degree. Luckily I had a really good professor who made them sound really interesting. Compiling a program by hand on paper and then simulating its execution was quite a challenge.