I know what Hungarian refers to - giving information about a variable, parameter, or type as a prefix to its name. Everyone seems to be rabidly against it, even though in some cases it seems to be a good idea. If I feel that useful information is being imparted, why shouldn't I put it right there where it's available?

See also: Do people use the Hungarian naming conventions in the real world?

+35  A: 

I think it massively clutters up the source code.

It also doesn't gain you much in a strongly typed language. If you do any form of type mismatch tomfoolery, the compiler will tell you about it.

nsanders
I can't vote you up enough.
Cody Brocious
Wow. 90 rep in 15 minutes. Good call.
Dustman
If you use it for type, then yes, it's useless. It's useful to impart other information.
Lou Franco
@Lou, exactly - personally, i store a revision history in my variable names: string firstName_wasName_wasNameID_wasSweetCupinCakes
Shog9
+6  A: 

The IDE should impart that useful information. Hungarian might have made some sort (not a whole lot, but some sort) of sense when IDEs were much less advanced.

Paul Batum
Then again, you shouldn't rely on the IDE telling you more. After all, code can be viewed outside of the IDE...
TraumaPony
+8  A: 

Joel Spolsky wrote a good blog post about this. http://www.joelonsoftware.com/articles/Wrong.html Basically it comes down to not making your code harder to read when a decent IDE will tell you what type the variable is if you can't remember. Also, if you make your code compartmentalized enough, you don't have to remember what a variable was declared as three pages up.

Paul Tomblin
A: 
  • They're a humongous eyesore
  • Your IDE should be able to tell you all you need to know about a variable's type
  • Good names (which HN gets in the way of) should communicate to you everything else you need to know about a variable.
Aaron Maenpaa
+4  A: 

I've always thought that a prefix or two in the right place wouldn't hurt. I think if I can impart something useful, like "Hey, this is an interface, don't count on specific behaviour", right there, as in IEnumerable, I oughtta do it. A comment can clutter things up much more than just a one- or two-character symbol.

Dustman
I never got the ISomethingable thing. If it's an -able, that implies an interface anyway. (At least in Java.)
tunaranch
+2  A: 

It's incredibly redundant and useless in most modern IDEs, which do a good job of making the type apparent.

Plus -- to me -- it's just annoying to see intI, strUserName, etc. :)

Ian P
As TraumaPony told Paul Batum, not everyone sees the code in the IDE.
Chris Charabaruk
+2  A: 

I don't think everyone is rabidly against it. In languages without static types, it's pretty useful. I definitely prefer it when it's used to give information that is not already in the type. Like in C, char * szName says that the variable will refer to a null-terminated string -- that's not implicit in char * -- and of course, a typedef would also help.
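
For a concrete flavour of that, here is a tiny C++ sketch (the names are illustrative, not from the answer): the sz prefix, or better still a typedef, documents the null-termination convention that the raw char* type does not.

#include <cstdio>

// "sz" = zero-terminated string. The raw type char* says nothing about
// termination; the prefix (or a typedef) carries that convention.
typedef const char* SzString;   // hypothetical alias documenting the convention

void PrintGreeting(SzString szName) {
    std::printf("Hello, %s\n", szName);   // safe by convention: NUL-terminated
}

int main() {
    const char* szUser = "Dustman";
    PrintGreeting(szUser);
    return 0;
}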

Joel had a great article on using Hungarian to tell if a variable was HTML-encoded or not:

http://www.joelonsoftware.com/articles/Wrong.html

Anyway, I tend to dislike Hungarian when it's used to impart information I already know.

Lou Franco
I think that first sentence might belie the answers on this page.
Dustman
+3  A: 

In the words of the master:

http://www.joelonsoftware.com/articles/Wrong.html

An interesting reading, as usual.

Extracts:

"Somebody, somewhere, read Simonyi’s paper, where he used the word “type,” and thought he meant type, like class, like in a type system, like the type checking that the compiler does. He did not. He explained very carefully exactly what he meant by the word “type,” but it didn’t help. The damage was done."

"But there’s still a tremendous amount of value to Apps Hungarian, in that it increases collocation in code, which makes the code easier to read, write, debug, and maintain, and, most importantly, it makes wrong code look wrong."

Make sure you have some time before reading Joel On Software. :)

rlerallut
No time now that I have Stackoverflow. I think Spolsky's mere association with the project must be doing it. :)
Dustman
+1  A: 

If I feel that useful information is being imparted, why shouldn't I put it right there where it's available?

Then who cares what anybody else thinks? If you find it useful, then use the notation.

eduffy
'Cause there's only one of me and a bajillion other people who might maintain my code later on. Since all those people are on the team (but they don't know it yet), I'd like to have some idea if they outvote me.
Dustman
A: 

In my experience, it is bad because:

1 - you break all the code if you need to change the type of a variable (e.g. if you need to extend a 32-bit integer to a 64-bit integer);

2 - it is useless information, as the type is either already in the declaration or you are using a dynamic language where the actual type should not be so important in the first place.

Moreover, with a language supporting generic programming (i.e. functions where the type of some variables is not determined when you write the function) or with a dynamic typing system (i.e. when the type is not even determined at compile time), how would you name your variables? And most modern languages support one or the other, even if in a restricted form.
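
A quick sketch of the generic-programming point (the function below is a made-up example): inside a template, the type of a variable is not fixed when the function is written, so there is no honest type prefix to give it.

#include <string>

// T is unknown here; naming the parameters iValue or strValue would be a lie
// for every other instantiation.
template <typename T>
T PickLarger(const T& value, const T& other) {
    return (other < value) ? value : other;
}

int main() {
    int biggerInt = PickLarger(3, 7);                                        // T = int
    std::string biggerStr = PickLarger(std::string("a"), std::string("b")); // T = std::string
    return (biggerInt == 7 && biggerStr == "b") ? 0 : 1;
}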

PierreBdR
+81  A: 

Most people use Hungarian notation in a wrong way and are getting wrong results.

Read this excellent article by Joel Spolsky: Making Wrong Code Look Wrong.

In short, Hungarian notation where you prefix your variable names with their type (string) -- so-called Systems Hungarian -- is bad because it's useless.

Hungarian notation as it was intended by its author, where you prefix the variable name with its kind (using Joel's example: safe string or unsafe string) -- so-called Apps Hungarian -- has its uses and is still valuable.
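
To give a flavour of the distinction, here is a minimal C++ sketch loosely modelled on Joel's safe/unsafe string example; the function and variable names are made up, and the encoder only escapes '<' to stay short.

#include <iostream>
#include <string>

// Convention only, not compiler-enforced: "us" = unsafe string straight from
// the user, "s" = safe string that has already been HTML-encoded.
std::string SEncode(const std::string& usInput) {
    std::string sOut;
    for (char c : usInput)
        sOut += (c == '<') ? std::string("&lt;") : std::string(1, c);
    return sOut;
}

void WriteToPage(const std::string& sHtml) { std::cout << sHtml << '\n'; }

int main() {
    std::string usComment = "<b>hello</b>";
    WriteToPage(SEncode(usComment));   // an s value into an s parameter: reads right
    // WriteToPage(usComment);         // us into an s parameter: reads wrong at a glance
    return 0;
}

The point is that the prefixes encode the kind (safe vs. unsafe), not the machine type -- both variables are plain strings.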

Ilya Kochetov
Good link, a good summary of what's behind the link, and no fanaticism. Excellent.
Dustman
Note that Joel's example is in VBScript, a language that for a long time didn't have user-defined classes. In an OO language you would just create an HtmlEncodedString type and have the Write method accept only that. "Apps Hungarian" is only useful in languages without user-defined types.
JacquesB
True, and a very good point, but it works much better in languages where String isn't sealed/finalized. In such languages, you'd lose type compatibility with "real" Strings.
Dustman
Microsoft is known for its past misuse of Hungarian notation. Prepending type information to identifiers is not only useless but it may actually do harm. First, readability is less fluent. Second, in languages that support polymorphism or duck-typing the wrong information is passed.
wilhelmtell
I still like using Hungarian notation if I'm doing straight C development with an older IDE ... that way I know there are no type-casting issues.
Jess
A: 

Several reasons:

  • Any modern IDE will give you the variable type by simply hovering your mouse over the variable.
  • Most type names are way too long (think HttpClientRequestProvider) to be reasonably used as a prefix.
  • The type prefix doesn't carry the right information: it merely paraphrases the variable declaration instead of outlining the purpose of the variable (think myInteger vs. pageSize).
Joannes Vermorel
A: 

See also this post

Kevin Conner
+3  A: 

Hungarian notation can be useful in languages without compile-time type checking, as it allows the developer to quickly remind herself of how a particular variable is used. It does nothing for performance or behavior. It is supposed to improve code readability and is mostly a matter of taste and coding style. For this very reason it is criticized by many developers -- not everybody has the same wiring in the brain.

For compile-time type-checked languages it is mostly useless -- scrolling up a few lines should reveal the declaration and thus the type. If you use global variables, or your code blocks span much more than one screen, you have grave design and reusability issues. Thus one of the criticisms is that Hungarian notation lets developers have a bad design and easily get away with it. This is probably one of the reasons for the hatred.

On the other hand, there can be cases where even compile-time type-checked languages would benefit from Hungarian notation -- void pointers or HANDLEs in the Win32 API. These obfuscate the actual data type, and there might be merit in using Hungarian notation there. Yet, if one can know the type of the data at build time, why not use the appropriate data type?
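
A small sketch of the opaque-handle case; the functions below are invented stand-ins for Win32-style calls, not real API functions.

#include <cstdio>

// A legacy-style API that hands back untyped pointers: the type system can no
// longer help, so a prefix is the only hint of what each pointer really is.
void* OpenWidget()      { static int widget = 42;  return &widget; }  // hypothetical
void* MapSharedBuffer() { static char buffer[16];  return buffer;  }  // hypothetical

int main() {
    void* hWidget  = OpenWidget();       // 'h'  = handle to a widget
    void* pvBuffer = MapSharedBuffer();  // 'pv' = pointer to raw memory
    // Both are void* to the compiler; only the names tell them apart.
    std::printf("%p %p\n", hWidget, pvBuffer);
    return 0;
}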

In general, there are no hard reasons not to use Hungarian Notation. It is a matter of likes, policies, and coding style.

Ignas Limanauskas
+5  A: 

Tacking on cryptic characters at the beginning of each variable name is unnecessary and shows that the variable name by itself isn't descriptive enough. Most languages require the variable type at declaration anyway, so that information is already available.

There's also the situation where, during maintenance, a variable's type needs to change. Example: if a variable declared as "uint16_t u16foo" needs to become a 64-bit unsigned, one of two things will happen (a short sketch follows the list):

  1. You'll go through and change each variable name (making sure not to hose any unrelated variables with the same name), or
  2. You'll just change the type and not the name, which will only cause confusion.
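
A minimal sketch of option 2, the stale prefix (the variable is hypothetical):

#include <cstdint>
#include <cstdio>

int main() {
    // Originally: uint16_t u16foo = 40000;
    // After maintenance, only the type was updated, not the name, so the name
    // now lies about both the width and the plausible range of the value.
    uint64_t u16foo = 40000000000ULL;
    std::printf("%llu\n", static_cast<unsigned long long>(u16foo));
    return 0;
}
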
coledot
+4  A: 

In Joel Spolsky's Making Wrong Code Look Wrong he explains that what everybody thinks of as Hungarian notation (which he calls Systems Hungarian) is not what it was really intended to be (what he calls Apps Hungarian). Scroll down to the I'm Hungary heading to see this discussion.

Basically, Systems Hungarian is worthless. It just tells you the same thing your compiler and/or IDE will tell you.

Apps Hungarian tells you what the variable is supposed to mean, and can actually be useful.

cjm
+1  A: 

Hungarian notation was abused, particularly by Microsoft, leading to prefixes longer than the variable name, and it proved quite rigid, particularly when you change types (the infamous lparam/wparam, of different type/size in Win16, identical in Win32).

Thus, due both to this abuse and to its use by M$, it was put down as useless.

At my work, we code in Java, but the founder came from the MFC world, so we use a similar code style (aligned braces, which I like; capitalized method names, which I'm used to; prefixes like m_ for class members (fields), s_ for static members, etc.).

They also said all variables should have a prefix showing their type (e.g. a BufferedReader is named brData). This turned out to be a bad idea, as types can change while the names don't follow, and coders are not consistent in their use of these prefixes (I even see aBuffer, theProxy, etc.!).

Personally, I chose a few prefixes that I find useful, the most important being b for boolean variables, as they are the only ones where I allow syntax like if (bVar) (no reliance on auto-casting of other values to true or false). When I coded in C, I used a prefix for variables allocated with malloc, as a reminder that they should be freed later. And so on.

So, basically, I don't reject this notation as a whole, but I took only what seemed to fit my needs.
And of course, when contributing to some project (work, open source), I just use the conventions in place!
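
For illustration, here is a minimal sketch of that kind of selective prefixing -- written in C++ for concreteness, though the same scheme carries over to Java; all names are invented.

#include <string>
#include <utility>

// Only the prefixes kept above: m_ for members, s_ for statics, b for booleans.
class Account {
public:
    explicit Account(std::string name) : m_name(std::move(name)), m_bActive(true) {
        ++s_instanceCount;
    }
    void Deactivate() {
        if (m_bActive) {            // the b prefix makes the bare test read clearly
            m_bActive = false;
        }
    }
    static int InstanceCount() { return s_instanceCount; }

private:
    std::string m_name;             // m_ : member, distinct from any local 'name'
    bool        m_bActive;          // member + boolean
    static int  s_instanceCount;    // s_ : static member
};

int Account::s_instanceCount = 0;

int main() {
    Account account("PhiLho");
    account.Deactivate();
    return Account::InstanceCount() == 1 ? 0 : 1;
}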

PhiLho
+11  A: 

Hungarian notation only makes sense in languages without user-defined types. In a modern functional or OO language, you would encode information about the "kind" of value into the data type or class rather than into the variable name.

Several answers reference Joel's article. Note however that his example is in VBScript, which didn't support user-defined classes (for a long time at least). In a language with user-defined types you would solve the same problem by creating an HtmlEncodedString type and then letting the Write method accept only that. In a statically typed language the compiler will catch any encoding errors; in a dynamically typed one you would get a runtime exception -- but in either case you are protected against writing unencoded strings. Hungarian notation just turns the programmer into a human type-checker, which is the kind of job that is typically better handled by software.
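
A rough sketch of that user-defined-type approach (the class and function names are illustrative, and the encoder only escapes '<' to stay short):

#include <iostream>
#include <string>
#include <utility>

// A distinct type for encoded text: the only way to obtain one is via Encode().
class HtmlEncodedString {
public:
    static HtmlEncodedString Encode(const std::string& raw) {
        std::string out;
        for (char c : raw)
            out += (c == '<') ? std::string("&lt;") : std::string(1, c);
        return HtmlEncodedString(out);
    }
    const std::string& Value() const { return m_value; }
private:
    explicit HtmlEncodedString(std::string value) : m_value(std::move(value)) {}
    std::string m_value;
};

// Write accepts only the encoded type; the compiler, not a naming convention,
// rejects unencoded strings.
void Write(const HtmlEncodedString& html) { std::cout << html.Value() << '\n'; }

int main() {
    std::string userInput = "<b>hi</b>";
    Write(HtmlEncodedString::Encode(userInput));
    // Write(userInput);  // does not compile: no conversion from std::string
    return 0;
}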

Joel distinguishes between "Systems Hungarian" and "Apps Hungarian": "Systems Hungarian" encodes the built-in types like int, float and so on, while "Apps Hungarian" encodes "kinds" -- higher-level meta-information about a variable beyond the machine type. In an OO or modern functional language you can create user-defined types, so there is no distinction between type and "kind" in this sense -- both can be represented by the type system -- and "Apps" Hungarian is just as redundant as "Systems" Hungarian.

So to answer your question: Systems Hungarian would only be useful in an unsafe, weakly typed language where e.g. assigning a float value to an int variable will crash the system. Hungarian notation was specifically invented in the sixties for use in BCPL, a pretty low-level language which didn't do any type checking at all. I don't think any language in general use today has this problem, but the notation lived on as a kind of cargo-cult programming.

Apps Hungarian makes sense if you are working with a language without user-defined types, like legacy VBScript or early versions of VB, and perhaps also early versions of Perl and PHP. Again, using it in a modern language is pure cargo cult.

In any other language, Hungarian is just ugly, redundant and fragile. It repeats information already known from the type system, and you should not repeat yourself. Use a descriptive name for the variable that describes the intent of this specific instance of the type. Use the type system to encode invariants and meta-information about "kinds" or "classes" of variables -- i.e. types.

The general point of Joel's article -- to have wrong code look wrong -- is a very good principle. However, an even better protection against bugs is -- when at all possible -- to have wrong code detected automatically by the compiler.

JacquesB
+101  A: 

vUsing adjHungarian nnotation vmakes nreading ncode adjdifficult.

Mark Stock
Awesome. I love your answer!
Jason Stevenson
hahaha, no, it doesn't.
ramayac
Nice analogy, although slightly unfair. Compare: vUsing adjHungarian nNotation vMakes nReading nCode adjDifficult.
Chris Noe
In that sentence, both Using and Reading are gerunds. But I get the point you were making...
chimp
@chimp: that makes things even more obvious. now that you've found a mistake in the type are you going to rename all the references or leave them as they are, providing misinformation? you lose either way. but of course, this is the WRONG way of applying Hungarian notation.
wilhelmtell
@Chris Noe: for certain reading styles (parse the prefix and the suffix with a glance at the middle), your example only makes things worse. I see "vUsing" and hear "voosing" in my head.
Bob Cross
The analogy falls down because no information is added, you already know if a word is a noun. But if I have a variable called "count", is it an integer or a string (eg to display to a user)?
Jon
A: 

Hungarian is bad because it takes precious characters away from variable names in exchange for what, some type information?

First of all, in a strongly typed language, the compiler will warn you if you do anything truly stupid.

Second, if you believe in good modularized code and don't do too much work in any one function, your variables are probably declared just above the code where they are used anyway (so you have the type right there).

Third, if you prefix every pointer with p and every class with C, you're really hurting your nice modern IDE's ability to do IntelliSense (you know that feature where it guesses the class name as you type, and as soon as it gets it right you can hit Enter and it completes it for you? Well, if you prefix every class with C, you always have at least one extra letter to type)...

dicroce
+2  A: 

I think the whole aesthetic aspect is over-hyped. If that were the most important thing, we would not call ourselves developers, but graphic designers.

One important part, I think, is that you describe what your object's role is, not what it is. You don't call yourself HumanDustman, because in another context being human would not be the most important thing about you.

For refactoring purposes it's really important too:

public string stringUniqueKey = "ABC-12345";

What if you decide to use a GUID instead of a string? Your variable name would look stupid after refactoring all the referring code.

Or:

public int intAge = 20;

Changing this to a float, you would have the same problem. And so on.

Seb Nilsson
+3  A: 

There is no reason why you should not make correct use of Hungarian notation. Its unpopularity is due to a long-running backlash against the misuse of Hungarian notation, especially in the Windows APIs.

In the bad old days, before anything resembling an IDE existed for DOS (odds are you didn't have enough free memory to run the compiler under Windows, so your development was done in DOS), you got no help from hovering your mouse over a variable name. (Assuming you had a mouse.) What you did have to deal with were event callback functions in which everything was passed to you as either a 16-bit int (WORD) or a 32-bit int (LONG WORD). You then had to cast those parameters to the appropriate types for the given event type. In effect, much of the API was virtually typeless.

The result, an API with parameter names like these:

LRESULT CALLBACK WindowProc(HWND hwnd,
                            UINT uMsg,
                            WPARAM wParam,
                            LPARAM lParam);

Note that the names wParam and lParam, although pretty awful, aren't really any worse than naming them param1 and param2.

To make matters worse, Windows 3.0/3.1 had two types of pointers, near and far. So, for example, the return value from the memory-management function LocalLock was a PVOID, but the return value from GlobalLock was an LPVOID (with the 'L' for long). That awful notation then got extended so that a long pointer to a string was prefixed lp, to distinguish it from a string that had simply been malloc'd.

It's no surprise that there was a backlash against this sort of thing.

dgvid
A: 

It's a useful convention for naming controls on a form (btnOK, txtLastName etc.), if the list of controls shows up in an alphabetized pull-down list in your IDE.

MusiGenesis
+38  A: 

Joel is wrong, and here is why.

That "application" information he's talking about should be encoded in the type system. You should not depend on flipping variable names to make sure you don't pass unsafe data to functions requiring safe data. You should make it a type error, so that it is impossible to do so. Any unsafe data should have a type that is marked unsafe, so that it simply cannot be passed to a safe function. To convert from unsafe to safe should require processing with some kind of a sanitize function.

A lot of the things that Joel talks of as "kinds" are not kinds; they are, in fact, types.

What most languages lack, however, is a type system that's expressive enough to enforce these kinds of distinctions. For example, if C had a kind of "strong typedef" (where the typedef name had all the operations of the base type, but was not convertible to it) then a lot of these problems would go away. For example, if you could say "strong typedef std::string unsafe_string;" to introduce a new type "unsafe_string" that could not be converted to a std::string (and so could participate in overload resolution etc.), then we would not need silly prefixes.
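
C++ has no built-in strong typedef, but a thin tagged wrapper approximates what is described here; the following is a rough sketch under that assumption, with invented names.

#include <string>
#include <utility>

struct unsafe_tag {};

// "Strong typedef": same storage as T, but a distinct type with no implicit
// conversion to or from T, so overload resolution can tell them apart.
template <typename T, typename Tag>
class strong_typedef {
public:
    explicit strong_typedef(T value) : m_value(std::move(value)) {}
    const T& get() const { return m_value; }
private:
    T m_value;
};

using unsafe_string = strong_typedef<std::string, unsafe_tag>;

// Plain std::string is treated as "safe" here; passing an unsafe_string is a
// compile error rather than something a reviewer has to spot by its prefix.
void write_to_page(const std::string& safe) { (void)safe; }

std::string sanitize(const unsafe_string& input) {
    std::string out;
    for (char c : input.get())
        if (c != '<' && c != '>') out += c;   // toy sanitizer for the sketch
    return out;
}

int main() {
    unsafe_string request(std::string("<script>boom()</script>"));
    write_to_page(sanitize(request));   // fine: sanitize converts unsafe to safe
    // write_to_page(request);          // does not compile: no conversion exists
    return 0;
}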

So, the central claim that Hungarian is for things that are not types is wrong. It's being used for type information. Richer type information than the traditional C type information, certainly; it's type information that encodes some kind of semantic detail to indicate the purpose of the objects. But it's still type information, and the proper solution has always been to encode it into the type system. Encoding it into the type system is far and away the best way to obtain proper validation and enforcement of the rules. Variable names simply do not cut the mustard.

In other words, the aim should not be "make wrong code look wrong to the developer". It should be "make wrong code look wrong to the compiler".

DrPizza
You are so wrong. What you are talking about is Petzold's mistaken interpretation of Simonyi's original idea! It's not type, as in int or long or char; it's about the intent of the variable itself. Read Joel's article to understand the difference.
Rob Wells
The intent of the variable should be encoded within the type of the variable, not left to something so fragile as a frigging naming scheme.
DrPizza
I would love to make it the compiler/interpreter's job also, but that would incur a huge overhead for dynamic languages (Python, Ruby, PHP or Javascript).
too much php
In general, I agree wholeheartedly; in the context of this post I couldn't disagree more.
Software Monkey
@DrPizza - I think you make a very interesting point, and I certainly agree with writing your code in such a way as to make the compiler do as much checking as possible.
Steve Melnikoff
"What most languages lack, however, is a type system that's expressive enough to enforce these kind of distinctions"And there lies the problem. AppsHungarian is a pragmatic method to highlight problems when using these languages in an obvious ways to programmers viewing the code, rather than examining the bug reports.
Neil Trodden
+9  A: 

I always use Hungarian notation for all my projects. I find it really helpful when I'm dealing with hundreds of different identifier names.

For example, when I call a function requiring a string, I can type 's' and hit Ctrl+Space, and my IDE will show me exactly the variable names prefixed with 's'.

Another advantage: when I prefix u for unsigned and i for signed ints, I immediately see where I am mixing signed and unsigned in potentially dangerous ways.

I cannot remember the number of times when, in a huge 75,000-line codebase, bugs were caused (by me and others too) by naming local variables the same as existing member variables of the class. Since then, I always prefix members with 'm_'.

It's a question of taste and experience. Don't knock it until you've tried it.
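
To make the signed/unsigned point concrete, a tiny sketch (the values are arbitrary; the printed result assumes a 32-bit unsigned int):

#include <cstdio>

int main() {
    unsigned int uCount = 2;    // 'u' prefix: unsigned
    int iDelta = -5;            // 'i' prefix: signed
    // The prefixes make the mix visible at the call site: iDelta is converted
    // to unsigned, so the sum wraps to a huge value instead of -3.
    unsigned int uResult = uCount + iDelta;
    std::printf("%u\n", uResult);   // 4294967293 with a 32-bit unsigned int
    return 0;
}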

rep_movsd
+4  A: 

Isn't scope more important than type these days? For example:

  • l for local
  • a for argument
  • m for member
  • g for global
  • etc.

With modern techniques for refactoring old code, searching and replacing a symbol because you changed its type is tedious. The compiler will catch type changes, but it often will not catch incorrect use of scope; sensible naming conventions help there (a short sketch follows).
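
A short sketch of such scope prefixes (the class and names are invented):

#include <cstdio>

// l = local, a = argument, m = member, g = global: the prefix says where the
// value lives, which the compiler will not check for you.
int gRequestCount = 0;                      // g: global

class Basket {
public:
    void Add(int aPrice) {                  // a: argument
        int lTaxed = aPrice + aPrice / 10;  // l: local
        mTotal += lTaxed;                   // m: member
        ++gRequestCount;
    }
    int Total() const { return mTotal; }
private:
    int mTotal = 0;                         // m: member
};

int main() {
    Basket basket;
    basket.Add(100);
    std::printf("%d %d\n", basket.Total(), gRequestCount);
    return 0;
}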

titanae
+5  A: 

You're forgetting the number one reason to include this information. It has nothing to do with you, the programmer. It has everything to do with the person coming down the road 2 or 3 years after you leave the company who has to read that stuff.

Yes, an IDE will quickly identify types for you. However, when you're reading through some long batches of 'business rules' code, it's nice to not have to pause on each variable to find out what type it is. When I see things like strUserID, intProduct or guiProductID, it makes for much easier 'ramp up' time.

I agree that MS went way too far with some of their naming conventions - I categorize that in the "too much of a good thing" pile.

Naming conventions are good things, provided you stick to them. I've gone through enough old code that had me constantly going back to look at the definitions of so many similarly named variables that I push "camel casing" (as it was called at a previous job). Right now I'm on a job that has many thousands of lines of completely uncommented classic ASP code with VBScript, and it's a nightmare trying to figure things out.

David
A: 

Of course when 99% of programmers agree on something, there is something wrong. The reason they agree here is because most of them have never used Hungarian notation correctly.

For a detailed argument, I refer you to a blog post I have made on the subject.

http://codingthriller.blogspot.com/2007/11/rediscovering-hungarian-notation.html

+1  A: 

I started coding pretty much about the time Hungarian notation was invented, and the first time I was forced to use it on a project I hated it.

After a while I realised that when it was done properly it did actually help and these days I love it.

But like all good things, it has to be learnt and understood, and doing it properly takes time.

jussij
+3  A: 

As a Python programmer, I find Hungarian notation falls apart pretty fast. In Python, I don't care if something is a string -- I care whether it can act like a string (i.e. whether it has a __str__() method that returns a string).

For example, let's say we have foo as an integer, 12

foo = 12

Hungarian notation tells us that we should call that iFoo or something, to denote that it's an integer, so that later on we know what it is. Except in Python that doesn't work -- or rather, it doesn't make sense. In Python, I decide what type I want when I use it. Do I want a string? Well, if I do something like this:

print "The current value of foo is %s" % foo

Note the %s -- string. foo isn't a string, but the % operator will call str(foo) (which invokes foo.__str__()) and use the result. foo is still an integer, but we treat it as a string if we want a string. If we want a float, then we treat it as a float. In dynamically typed languages like Python, Hungarian notation is pointless, because it doesn't matter what type something is until you use it, and if you need a specific type, then just make sure to cast it to that type (e.g. float(foo)) when you use it.

Note that dynamic languages like PHP don't have this benefit - PHP tries to do 'the right thing' in the background based on an obscure set of rules that almost no one has memorized, which often results in catastrophic messes unexpectedly. In this case, some sort of naming mechanism, like $files_count or $file_name, can be handy.

In my view, Hungarian Notation is like leeches. Maybe in the past they were useful, or at least they seemed useful, but nowadays it's just a lot of extra typing for not a lot of benefit.

Dan Udey
Interesting that your example with __str__() in Python isn't a result of the language being dynamic. Java, definitely not a dynamic language, does the same thing.
Dustman
A: 

I tend to use Hungarian notation with ASP.NET server controls only; otherwise I find it too hard to work out which control is which on the form.

Take this code snippet:

<asp:Label ID="lblFirstName" runat="server" Text="First Name" />
<asp:TextBox ID="txtFirstName" runat="server" />
<asp:RequiredFieldValidator ID="rfvFirstName" runat="server" ... />

If someone can show a better way of having that set of control names without Hungarian I'd be tempted to move to it.

Slace
A: 

I cannot find a link but I remember reading somewhere (which I agree with) that avoiding Hungarian notation results in better programming style.

When you write a statement in your program, you should not be thinking about "what type is this object" before calling its method; rather, you should think "what do I want to do with it?", "which message should I send to it?".

It's a kind of vague concept to explain, but I think it works.

For example, if you have a customer name stored in the variable customerName, you should not care whether it is a string or some other class. It is more important to think about what you want from this object: do you want it to print(), getFirstName(), getLastName(), convertToString(), etc.? Once you make it an instance of the String class and take that for granted, you limit yourself and your design, since you have to build up all the other logic you need elsewhere in the code.

alexandroid
A: 

For years I used Hungarian notation in my programming. Other than some visual clutter and the task of changing the prefix when I changed a data type, no one could convince me otherwise -- until recently, when I had to combine existing C# and VB.NET assemblies in the same solution.

The result: I had to pass a "fltSomeVariable" to a "sngSomeVariable" method parameter. Even as someone who programs in both C# and VB.NET, it caught me off guard and made me pause for a moment. (C# and VB.NET sometimes use different names to represent the same data type--float and single, for example.)

Now consider this: what if you create a COM component that's callable from many languages? The VB.NET and C# "conversion" was easy for a .NET programmer. But what about someone that develops in C++ or Java? Does "dwSomeVariable" mean anything to a .NET developer not familiar with C++?

A: 

If you don't know the type of a variable without being told, you probably shouldn't be messing with it anyway.

The type might also not be that important. If you know what the methods do, you can figure out what is being done with the variable, and then you'll know what the program is doing.

There may be times when you want it: when type is important and the declaration isn't near, or the type can't be inferred with ease. But it should never be seen as an absolute.

Demur Rumed