views: 171
answers: 5

I was speaking to an experienced lecturer recently who told me he could usually tell which programming language a student had learnt to program in by looking at their coding style (more specifically, when they were programming in languages other than the one they were most comfortable with). He said there had been multiple occasions when he'd witnessed students attempting to write C# in Prolog.

So I began to wonder: what specific traits do people gain from their first (or favourite) language that are carried over into their overall programming style, and, more interestingly, what good or bad habits do you think people would benefit from or should be wary of when learning a specific language?

+1  A: 

People who move from traditional languages to Ruby often have trouble grasping Ruby's looping best practices and write loops awkwardly. For example, they sometimes use for loops to iterate through arrays, as opposed to array.each.

fahadsadah
+1  A: 

I think there is no relation between programming language and programming style. Basically, the difference between languages is mostly syntax. Coding style refers to good habits, like explicit variable names, readable code, isolation, separation of logic, and so on, and those apply in most programming languages.

Aito
A: 

I'm not sure that many of my habits from my first programming language are still around in my code now. 27 years ago I started programming on a Commodore 64 in its version of BASIC, and really, there isn't a lot from there that I still use. I didn't use arrays back then, there were just two types of variables (strings and numbers), line numbers were common, and that is without getting into some of the more complex things I have learned in the past couple of decades. Relational databases, makefiles, splitting a solution across multiple files, programming web applications and writing tests are all things that didn't exist back when I was programming on my C64 in the days of 64K of memory and those big floppies.

I can see various general styles carrying over to some extent, along with a preference one may have for a language or group of languages. However, if one has cycled through a few different worlds, I think some of those old habits die. By worlds I mean going from C++ to VBScript to C#.Net; each move changed me a little, so that I can now handle either VB.Net or C#.Net pretty easily.

I can understand that students may not initially take in enough of the paradigm shift when going from procedural or OOP to functional programming, so the code they write does show what they prefer, but I'd think some seasoned programmers could hide it well.

JB King
A: 

I started with various assembly languages and FORTRAN, and I can safely say that none of those old practices are reflected in my current code :)

gabr
+1  A: 

I don't think it's an issue of your "first" language so much as what you are used to programming in. When you spend a lot of time with a language, its tools, failings, and idioms tend to shape how you think about programming problems. If that isn't the language you are actually using, and its toolset doesn't quite map, the results can be rather awkward. This is why it has been said that an experienced Fortran programmer can write Fortran in any language. Dijkstra said something similar, but rather more bluntly:

It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

(Full disclosure: my first language was BASIC. Dijkstra notwithstanding, I think I managed to regenerate nicely.)

One example I used to see a lot was string manipulation in Ada, which gives C coders fits. The idiom in C always was to declare a big honking array holding all the characters you could ever possibly need to put in your string (you hope). Manipulating it after that is fairly easy, if rather unsafe and slow, as everything in C understands that a string is terminated with a NUL character.
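A minimal sketch of that fixed-buffer, NUL-terminated idiom (the buffer size and strings here are just illustrative, not from any particular program):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Declare a "big enough" buffer up front and hope it really is. */
        char greeting[128];

        /* Build the string in place; every routine relies on the trailing
           NUL ('\0') that strcpy and strcat maintain. */
        strcpy(greeting, "Hello, ");
        strcat(greeting, "world");

        /* strlen has to walk the array until it finds that NUL. */
        printf("%s (%zu characters)\n", greeting, strlen(greeting));
        return 0;
    }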

In Ada, strings are not NUL terminated; they end where their storage ends. Unlike in C, it is fast and easy to figure that size out whenever you need it. There are some tricks you can use to defer declarations, but it is not all that easy to dynamically build these perfectly sized strings. So the idiom in Ada is to try to work with constant strings which you declare up front (avoiding dynamic building). It works great, but it gives C coders fits because they have real trouble thinking about strings that way.
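For contrast, here is a rough sketch, in C terms rather than actual Ada, of that length-delimited model: the size travels with the string instead of being marked by a terminator (in real Ada the bounds of the String array carry this information).

    #include <stdio.h>

    /* Rough analogue of a length-delimited string: its size is part of the
       declaration, and no NUL terminator is needed or searched for. */
    struct sized_string {
        size_t length;
        const char *data;   /* exactly `length` characters */
    };

    int main(void)
    {
        /* Declared up front as a constant of a known, fixed size. */
        const struct sized_string name = { 5, "Hello" };

        /* The length is always available immediately; no scanning needed. */
        printf("%.*s (%zu characters)\n", (int)name.length, name.data, name.length);
        return 0;
    }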

T.E.D.