The purpose of Hungarian notation is to embed, in the names of variables, semantic annotations that

- cannot otherwise be expressed within the language, especially its type system, and
- are not apparent from the context.

Now, since Perl doesn't have a static type system, it might appear that Hungarian notation should be fairly common there. But take the canonical example for the value of Hungarian notation: tracking the origin of untrusted data.
One of the oft-cited examples for Hungarian notation is to prefix all string variables in an application with either s or u (for safe and unsafe), depending on whether the string came from a trusted source (or has been sanitized) or from an untrusted one. But just replace unsafe with tainted and you have a perfect description of tainting in Perl. So, in this case, even though Perl doesn't have a static type system, it has a dynamic type system that allows you to express the trusted/untrusted semantics within the language, and thus Hungarian notation is superfluous. (And even in a language without built-in support for tainting, there are usually much better ways to express this, such as subtyping (Python, Ruby, Smalltalk, ...), annotations/attributes (Java, C#, ...), metadata (Clojure, ...) and static typing (Haskell, ML, ...).)
Also, Perl is pretty darn expressive, which makes it much easier to keep the entire context for a variable in your head (or within one screenful of code); often enough, the semantics are apparent from the surrounding code.
Good naming also helps, of course.
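A tiny, contrived illustration: when the whole life of a value (input, validation, use) fits within a dozen lines, no prefix is needed to see where it came from:

```perl
use strict;
use warnings;

# The value's entire life cycle is visible at a glance: where it came
# from, that it was validated, and where it is used. No s/u prefix needed.
sub greet_user {
    my ($raw_name) = @_;                        # from the outside world
    my ($name) = $raw_name =~ /\A(\w{1,32})\z/
        or die "invalid user name\n";           # validate before use
    return "Hello, $name!";
}

print greet_user($ARGV[0] // 'world'), "\n";
```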
Remember: Hungarian notation was invented for C, a language which isn't very expressive and whose static type system is pretty much only useful as the punchline of a joke.