What are the advantages and limitations of dynamic type languages compared to static type languages?
See also: whats with the love of dynamic languages (a far more argumentative thread...)
The ability of the interpreter to deduce types and type conversions makes development faster, but it can also provoke runtime failures that you simply cannot get in a statically typed language, where they are caught at compile time. Which one is better (and whether that is even always true) has been hotly debated in the community for a long time.
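As a minimal illustration (the function below is invented for the example): a static compiler would reject the bad calls before the program ever runs, while a dynamic language either produces a surprising result or fails only at the moment the offending call executes.

    # A type mix-up that a static compiler would reject at compile time,
    # but that a dynamic language only reports when the line actually runs.
    def total_price(quantity, unit_price):
        # Nothing stops a caller from passing a string or None here.
        return quantity * unit_price

    print(total_price(3, 2.5))       # 7.5  -- works fine
    print(total_price(3, "2.5"))     # "2.52.52.5" -- silently wrong, no error at all
    print(total_price("3", None))    # TypeError, but only now, at runtime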
A good take on the issue is here
Advocates of static typing argue that the advantages of static typing include earlier detection of programming mistakes (e.g. preventing adding an integer to a boolean), better documentation in the form of type signatures (e.g. incorporating number and types of arguments when resolving names), more opportunities for compiler optimizations (e.g. replacing virtual calls by direct calls when the exact type of the receiver is known statically), increased runtime efficiency (e.g. not all values need to carry a dynamic type), and a better design time developer experience (e.g. knowing the type of the receiver, the IDE can present a drop-down menu of all applicable members). Static typing fanatics try to make us believe that “well-typed programs cannot go wrong”. While this certainly sounds impressive, it is a rather vacuous statement. Static type checking is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type-checker, and that there are programs that while they cannot go wrong cannot be type-checked. The impulse for making static typing less partial and more complete causes type systems to become overly complicated and exotic as witnessed by concepts such as “phantom types” [11] and “wobbly types” [10]. This is like trying to run a marathon with a ball and chain tied to your leg and triumphantly shouting that you nearly made it even though you bailed out after the first mile.
Advocates of dynamically typed languages argue that static typing is too rigid, and that the softness of dynamically typed languages makes them ideally suited for prototyping systems with changing or unknown requirements, or that interact with other systems that change unpredictably (data and application integration). Of course, dynamically typed languages are indispensable for dealing with truly dynamic program behavior such as method interception, dynamic loading, mobile code, runtime reflection, etc. In the mother of all papers on scripting [16], John Ousterhout argues that statically typed systems programming languages make code less reusable, more verbose, not more safe, and less expressive than dynamically typed scripting languages. This argument is parroted literally by many proponents of dynamically typed scripting languages. We argue that this is a fallacy and falls into the same category as arguing that the essence of declarative programming is eliminating assignment. Or as John Hughes says [8], it is a logical impossibility to make a language more powerful by omitting features. Defending the fact that delaying all type-checking to runtime is a good thing is playing ostrich tactics with the fact that errors should be caught as early in the development process as possible.
From Artima's Typing: Strong vs. Weak, Static vs. Dynamic article:
Strong typing prevents mixing operations between mismatched types. In order to mix types, you must use an explicit conversion.
Weak typing means that you can mix types without an explicit conversion (the contrast is sketched in the snippet below).
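A small Python sketch of the "strong" side of that distinction; the JavaScript comparison in the comments only describes how that language coerces, it is not something this snippet executes.

    # Python is dynamically but strongly typed: mismatched types are not
    # silently coerced; you must convert explicitly.
    x = "10"
    y = 5

    # x + y            # TypeError: can only concatenate str (not "int") to str
    print(int(x) + y)  # 15    -- explicit conversion to int
    print(x + str(y))  # "105" -- explicit conversion to str

    # A weakly typed language (e.g. JavaScript) would instead coerce:
    # "10" + 5 evaluates to "105" without any conversion written by the programmer.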
In Pascal Costanza's paper, Dynamic vs. Static Typing — A Pattern-Based Analysis (PDF), he claims that in some cases static typing is more error-prone than dynamic typing. Some statically typed languages force you to manually emulate dynamic typing in order to do "The Right Thing". It's discussed at Lambda the Ultimate.
There are lots of differences between static and dynamic languages. For me, the main one is that in dynamic languages variables don't have fixed types; instead, the types are tied to values. Because of this, the exact code that gets executed is undetermined until runtime.
In early or naïve implementations this is a huge performance drag, but modern JITs get tantalizingly close to the best you can get with optimizing static compilers (in some fringe cases, even better than that).
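A tiny Python illustration of "types are tied to values, not variables": the same name and the same call site end up doing different things depending on what the value turns out to be at runtime (the function is made up for the example).

    def describe(x):
        # Which branch runs, and which operations exist, is decided per value.
        if isinstance(x, str):
            return x.upper()
        return x * 2

    x = 21
    print(describe(x))   # 42        -- x currently refers to an int
    x = "runtime"        # the *name* x has no type; the new value does
    print(describe(x))   # "RUNTIME" -- same code, different operation chosen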
Static type systems seek to eliminate certain errors statically, inspecting the program without running it and attempting to prove soundness in certain respects. Some type systems are able to catch more errors than others. For example, C# can eliminate null pointer exceptions when used properly, whereas Java has no such power. Twelf has a type system which actually guarantees that proofs will terminate, "solving" the halting problem.
However, no type system is perfect. In order to eliminate a particular class of errors, they must also reject certain perfectly valid programs which violate the rules. This is why Twelf doesn't really solve the halting problem; it just avoids it by throwing out a large number of perfectly valid proofs which happen to terminate in odd ways. Likewise, Java's type system rejects Clojure's PersistentVector implementation due to its use of heterogeneous arrays. It works at runtime, but the type system cannot verify it.
For that reason, most type systems provide "escapes", ways to override the static checker. For most languages, these take the form of casting, though some (like C# and Haskell) have entire modes which are marked as "unsafe".
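Python's optional static checkers have the same sort of escape hatches; a minimal sketch, assuming a checker such as mypy is run over the annotated file (cast and Any come from the standard typing module; the function names are invented):

    from typing import Any, cast

    def load_config(raw: object) -> dict:
        # cast() tells the static checker "trust me"; it does nothing at runtime,
        # much like an unchecked cast. If we're wrong, we find out much later.
        return cast(dict, raw)

    def legacy_call(x: Any) -> None:
        # 'Any' switches static checking off for this value entirely.
        x.whatever_method()  # accepted statically, may explode at runtime

    value = load_config("not a dict")  # the checker is satisfied; the runtime is not
    print(value.keys())                # AttributeError: 'str' object has no attribute 'keys'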
Subjectively, I like static typing. Implemented properly (hint: not Java), a static type system can be a huge help in weeding out errors before they crash the production system. Dynamically typed languages tend to require more unit testing, which is tedious at the best of times. Also, statically typed languages can have certain features which are either impossible or unsafe in dynamic type systems (implicit conversions spring to mind). It's all a question of requirements and subjective taste. I would no more build the next Eclipse in Ruby than I would attempt to write a backup script in Assembly or patch a kernel using Java.
Oh, and people who say that "x typing is 10 times more productive than y typing" are simply blowing smoke. Dynamic typing may "feel" faster in many cases, but it loses ground once you actually try to make your fancy application run. Likewise, static typing may seem like the perfect safety net, but one look at some of the more complicated generic type definitions in Java sends most developers scurrying for eye blinders. When it comes to type systems and productivity, there is no silver bullet.
Final note: don't worry about performance when comparing static with dynamic typing. Modern JITs like V8 and TraceMonkey are coming dangerously-close to static language performance. Also, the fact that Java actually compiles down to an inherently dynamic intermediate language should be a hint that for most cases, dynamic typing isn't the huge performance-killer that some people make it out to be.
It is all about the right tool for the job. Neither is better 100% of the time. Both systems were created by man and have flaws. Sorry, but we suck at making perfect stuff.
I like dynamic typing because it gets out of my way, but yes, runtime errors can creep up that I didn't plan for. Static typing may fix those errors, but it can drive a programmer who is a novice in typed languages crazy trying to cast between a constant char and a string.
Dynamic vs. Static Typing - A Pattern-Based Analysis
In some cases, static typing is more error-prone than dynamic typing. Some statically typed languages force you to manually emulate dynamic typing in order to do "The Right Thing". The writer takes on Java. He mentions three problems:
Statically Checked Implementation of Interfaces
Statically Checked Exceptions
Checking Feature Availability
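The third point, checking feature availability, is roughly the situation where a dynamic language simply asks the object at runtime whether it supports a feature, instead of requiring a statically declared interface. A small Python sketch of that style (the classes and method names are invented for the example):

    class Document:
        def print_preview(self):
            print("rendering preview...")

    class LegacyDocument:
        pass  # no preview support

    for doc in (Document(), LegacyDocument()):
        if hasattr(doc, "print_preview"):  # feature test at runtime, not via an interface
            doc.print_preview()
        else:
            print("preview not available for", type(doc).__name__)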
It depends on context. There are a lot of benefits to dynamically typed systems as well as to strongly typed ones. I'm of the opinion that the flow of a dynamically typed language is faster. Dynamic languages are not constrained by class attributes and by the compiler thinking about what is going on in the code; you have some kind of freedom. Furthermore, a dynamic language is usually more expressive and results in less code, which is good. Despite this, it's more error-prone, though that is also questionable and depends heavily on unit test coverage. Prototyping is easy with a dynamic language, but maintenance may become a nightmare.
The main gains of a statically typed system are IDE support and, of course, static analysis of the code. You become more confident about the code after every change, and maintenance is a piece of cake with such tools.
Perhaps the single biggest "benefit" of dynamic typing is the shallower learning curve. There is no type system to learn and no non-trivial syntax for corner cases such as type constraints. That makes dynamic typing accessible to a lot more people and feasible for many people for whom sophisticated static type systems are out of reach. Consequently, dynamic typing has caught on in the contexts of education (e.g. Scheme/Python at MIT) and domain-specific languages for non-programmers (e.g. Mathematica). Dynamic languages have also caught on in niches where they have little or no competition (e.g. Javascript).
The most concise dynamically-typed languages (e.g. Perl, APL, J, K, Mathematica) are domain specific and can be significantly more concise than the most concise general-purpose statically-typed languages (e.g. OCaml) in the niches they were designed for.
The main disadvantages of dynamic typing are:
Run-time type errors (a minimal example follows this list).
Can be very difficult or even practically impossible to achieve the same level of correctness and requires vastly more testing.
No compiler-verified documentation.
Poor performance (usually at run-time but sometimes at compile time instead, e.g. Stalin Scheme) and unpredictable performance due to dependence upon sophisticated optimizations.
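On the first of those points, a run-time type error in a dynamic language can hide in a rarely exercised branch until that branch is actually taken, which is why the testing burden goes up. A minimal Python sketch (the function and data are invented for the example):

    def format_id(user):
        # The buggy branch only runs for users without a nickname, so it can
        # survive light testing for a long time before it finally blows up.
        if user.get("nickname"):
            return user["nickname"].upper()
        return "user-" + user["id"]   # TypeError whenever id is an int

    print(format_id({"nickname": "ada", "id": 1}))  # fine
    print(format_id({"id": 1}))                     # TypeError: can only concatenate str (not "int") to str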
Personally, I grew up on dynamic languages but wouldn't touch them with a 40' pole as a professional unless there were no other viable options.
I think the decision making around it is subjective, but the arguments for and against are quite specific. Whether they are strengths or weaknesses is, as usual, in many cases subjective. I have put together a list of some thoughts on the issues regarding static types, if it interests you.
Well, both are very, very misunderstood, and they are also two completely different things that aren't mutually exclusive.
Static types are a restriction of the grammar of the language. Statically typed languages, strictly speaking, could be said not to be context-free. The simple truth is that it becomes inconvenient to sanely express a language in a context-free grammar if it doesn't treat all its data simply as bit vectors. Static type systems, if present, are part of the grammar of the language; they simply restrict it more than a context-free grammar could, so grammatical checks effectively happen in two passes over the source. Static types correspond to the mathematical notion of type theory; type theory in mathematics simply restricts the legality of some expressions. For instance, I can't say 3 + [4,7] in maths, and that is because of its type theory.
Static types are thus not a way to 'prevent errors' from a theoretical perspective; they are a limitation of the grammar. Indeed, provided that +, 3 and intervals have the usual set-theoretical definitions, if we remove the type system then 3 + [4,7] has a pretty well-defined result that's a set. 'Runtime type errors' theoretically do not exist; the type system's practical use is to prevent operations that would make no sense to human beings. Operations are still just the shifting and manipulation of bits, of course.
The catch is that a type system can't decide whether such operations are going to occur if the program were allowed to run. That is, it cannot exactly partition the set of all possible programs into those that are going to have a 'type error' and those that aren't. It can only do one of two things:
1: prove that type errors are going to occur in a program
2: prove that they aren't going to occur in a program
This might seem like I'm contradicting myself, but what a C or Java type checker does is reject a program as 'ungrammatical' (or, as it calls it, a 'type error') if it can't succeed at 2. The fact that it can't prove type errors aren't going to occur doesn't mean that they are going to occur; it just means it can't prove they won't. It might very well be that a program which will never have a type error is rejected simply because the compiler can't prove that. A simple example is if(1) a = 3; else a = "string";. Since the condition is always true, the else-branch will never be executed, and no type error will occur. But the checker can't prove such cases in a general way, so the program is rejected. This is the major weakness of a lot of statically typed languages: in protecting you against yourself, you're necessarily also protected in cases where you don't need it.
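A rough Python analogue of the if(1) example, assuming an external static checker such as mypy is run over the annotations (the function is invented for the sketch): the program is perfectly fine at runtime as long as only the first branch is ever taken, yet the checker rejects it because it cannot prove the second branch is dead.

    def pick(always_true: bool) -> int:
        a: int
        if always_true:
            a = 3
        else:
            # A static checker flags this assignment (str assigned to an int
            # variable) even if, in practice, callers only ever pass
            # always_true=True and this branch never executes.
            a = "string"
        return a

    print(pick(True))  # runs fine and prints 3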
But, contrary to popular belief, there are also statically typed languages that work by principle 1. They simply reject all programs for which they can prove a type error is going to occur, and pass all programs for which they can't. So it's possible they allow programs which have type errors in them. A good example is Typed Racket, which is a hybrid between dynamic and static typing, and some would argue that you get the best of both worlds in this system.
Another advantage of static typing is that types are known at compile time, and thus the compiler can use this. If in Java we write "string" + "string" or 3 + 3, the two + tokens are identical in the source text but in the end represent completely different operations and data; the compiler knows which to choose from the types alone.
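For contrast, in a dynamically typed language that same choice is deferred to every single call; a small Python sketch:

    def add(a, b):
        return a + b          # one call site, many possible operations

    print(add(3, 3))          # integer addition     -> 6
    print(add("str", "ing"))  # string concatenation -> "string"

    # The interpreter resolves '+' per call, roughly by consulting the
    # operands' runtime types (type(a).__add__ and friends), rather than
    # picking one operation once at compile time.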
Now, I'm going to make a very controversial statement here, but bear with me: 'dynamic typing' does not exist.
It sounds very controversial, but it's true: dynamically typed languages are, from a theoretical perspective, untyped. They are just statically typed languages with only one type. Or, simply put, they are languages that in practice really are generated by a context-free grammar.
Why don't they have types? Because every operation is defined and allowed on every operand. So what's a 'runtime type error', exactly? From a theoretical perspective it is purely a side effect. If print("string"), which prints a string, is an operation, then so is length(3); the former has the side effect of writing string to the standard output, the latter simply produces error: function 'length' expects array as argument., and that's it. From a theoretical perspective there is no such thing as a dynamically typed language. They are untyped.
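Python behaves exactly like the length(3) example: the 'type error' is just the defined behaviour of the built-in for that value, an ordinary runtime event that you can even catch and carry on from.

    try:
        print(len(3))
    except TypeError as err:
        print("len(3) ->", err)  # e.g. "object of type 'int' has no len()"

    print(len("3"))              # 1 -- same operation, different value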
All right, the obvious advantage of 'dynamically typed' languages is expressive power; a type system is nothing but a limitation of expressive power. And in general, languages with a type system would indeed have a defined result for all those operations that are not allowed, if the type system were just ignored; the results would simply not make sense to humans. Many languages lose their Turing completeness after applying a type system.
The obvious disadvantage is the fact that operations can occur which would produce results that are nonsensical to humans. To guard against this, dynamically typed languages typically redefine those operations: rather than producing that nonsensical result, they redefine the operation to have the side effect of writing out an error, and possibly halting the program altogether. This is not an 'error' at all; in fact, the language specification usually implies this, so from a theoretical perspective it is as much behaviour of the language as printing a string. Type systems thus force the programmer to reason about the flow of the code to make sure this doesn't happen. Or indeed, reasoning so that it does happen can also be handy at some points for debugging, which shows that it's not an 'error' at all but a well-defined property of the language. In effect, the single remnant of 'dynamic typing' that most languages have is guarding against a division by zero. This is what dynamic typing is: there are no types; there are no more types than that zero is a different type than all the other numbers. What people call a 'type' is just another property of a datum, like the length of an array or the first character of a string. And many dynamically typed languages also let you write out things like "error: the first character of this string should be a 'z'".
Another thing is that dynamically typed languages have the type available at runtime and can usually check it, deal with it, and make decisions based on it. Of course, in theory it's no different from accessing the first char of an array and seeing what it is. In fact, you can make your own dynamic C: just use only one type, like long long int, use the first 8 bits of it to store your 'type' in, and write functions accordingly that check that tag and perform float or integer addition. You then have a statically typed language with one type, or equivalently a dynamic language.
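A toy Python transliteration of that trick (the tag values and helper names are invented for the sketch): every value is a single 64-bit word whose top 8 bits carry the 'type', and the arithmetic helper dispatches on that tag by hand, just as a dynamically typed runtime would.

    import struct

    # Everything is one 64-bit word: the top 8 bits are the 'type' tag,
    # the low 56 bits are the payload.
    TAG_INT, TAG_FLOAT = 1, 2
    PAYLOAD_MASK = (1 << 56) - 1

    def box_int(n):
        return (TAG_INT << 56) | (n & PAYLOAD_MASK)

    def box_float(x):
        bits = int.from_bytes(struct.pack("<f", x), "little")  # 32-bit IEEE payload
        return (TAG_FLOAT << 56) | bits

    def tag_of(v):
        return v >> 56

    def dyn_add(a, b):
        # Check the tags at runtime and choose the operation accordingly.
        if tag_of(a) == tag_of(b) == TAG_INT:
            return box_int((a & PAYLOAD_MASK) + (b & PAYLOAD_MASK))
        if tag_of(a) == tag_of(b) == TAG_FLOAT:
            fa = struct.unpack("<f", (a & 0xFFFFFFFF).to_bytes(4, "little"))[0]
            fb = struct.unpack("<f", (b & 0xFFFFFFFF).to_bytes(4, "little"))[0]
            return box_float(fa + fb)
        raise TypeError("runtime 'type error': mismatched tags")

    print(dyn_add(box_int(2), box_int(3)) & PAYLOAD_MASK)  # 5
    dyn_add(box_int(2), box_float(3.5))                    # raises TypeError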
In practice this all shows: statically typed languages are generally used for writing commercial software, whereas dynamically typed languages tend to be used for solving one-off problems and automating tasks. Writing code in statically typed languages simply takes longer and is more cumbersome, because you can't do things which you know are going to turn out okay; the type system still protects you against yourself for errors you don't make. Many coders don't even realize that they do this because it has become second nature, but when you code in static languages, you often work around the fact that the type system won't let you do things that can't go wrong, simply because it can't prove they won't go wrong.
As I noted, 'statically typed' in general means case 2: guilty until proven innocent. But some languages, which do not derive their type system from type theory at all, use rule 1: innocent until proven guilty, which might be the ideal hybrid. So maybe Typed Racket is for you.
Also, for a more absurd and extreme example, I'm currently implementing a language where 'types' are truly the first character of an array: they are data, data of the type 'type', which is itself a type and a datum, the only datum which has itself as a type. Types are not finite or bounded statically; new types may be generated based on runtime information.