views: 218
answers: 6

Jon Skeet posted this blog post, in which he states that he is going to be asking why the dynamic part of languages is so good. So I thought I'd preemptively ask on his behalf: what makes them so good?

+8  A: 

The two fundamentally different approaches to types in programming languages are static types and dynamic types. They enable very different programming paradigms and they each have their own benefits and drawbacks.

I'd highly recommend Chris Smith's excellent article What to Know Before Debating Type Systems for more background on the subject.

From that article:

A static type system is a mechanism by which a compiler examines source code and assigns labels (called "types") to pieces of the syntax, and then uses them to infer something about the program's behavior. A dynamic type system is a mechanism by which a compiler generates code to keep track of the sort of data (coincidentally, also called its "type") used by the program. The use of the same word "type" in each of these two systems is, of course, not really entirely coincidental; yet it is best understood as having a sort of weak historical significance. Great confusion results from trying to find a world view in which "type" really means the same thing in both systems. It doesn't. The better way to approach the issue is to recognize that:

  • Much of the time, programmers are trying to solve the same problem with static and dynamic types.
  • Nevertheless, static types are not limited to problems solved by dynamic types.
  • Nor are dynamic types limited to problems that can be solved with static types.
  • At their core, these two techniques are not the same thing at all.
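
As a concrete illustration of the "keep track of the sort of data" idea, here is a small Python sketch (purely illustrative, not from the article) showing the type travelling with the value and being checked at runtime:

x = 42
print(type(x))     # <class 'int'>

x = "forty-two"    # the same name now holds differently-typed data
print(type(x))     # <class 'str'>

try:
    x + 1          # the operation is checked against the value's runtime type
except TypeError as e:
    print("runtime type error:", e)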
Daniel Pryden
+1 for the link. That's a very insightful article (indeed a highly recommended read!)
Stephan202
Great response to frame the debate. Jon is right that many of the things we equate with dynamic languages are not really about dynamic typing at all.
Steve Rowe
At that link, I saw the following: "The simply-typed lambda calculus, on which all other type systems are based, proves that programs terminate in a finite amount of time. Indeed, the more interesting question is how to usefully extend the type system to be able to describe programs that don't terminate! Finding infinite loops, though, is not in the class of things most people associate with "types," so it's surprising." I'm fairly sure this is blatantly wrong: computers are Turing machines, and I think it has been mathematically proven that not every program can be proven to halt (or not to halt).
RCIX
@RCIX: You need to finish reading the article. The whole point of a static type system is to *constrain* the program's behavior. If a program passes the type system constraints, then you can be certain that certain behaviors are not possible. The drawback is that there may exist programs that would not pass the type constraints but that would still behave the same way, and the type system cannot prove that. Therefore the purpose of the type system is to reject certain interesting programs, in order to make it possible to prove certain statements about the programs that are left.
Daniel Pryden
You're correct. Type systems merely prove that programs written under them satisfy certain constraints; they say nothing about whether the program will run forever.
RCIX
Take the following example: a piece of code that checks whether your static type system's analysis of it succeeds, then breaks the program if the analysis succeeds and becomes compliant if the analysis fails. This will cause your static type system's analyzer to freeze up.
RCIX
In other words, any program designed to solve the halting problem will choke on a (modified) version of its own source code.
RCIX
@RCIX: I don't get what your point is. You're mixing up the simply-typed lambda calculus and the Turing model. The Church-Turing thesis only applies to algorithms that terminate; for every computable function there is a Turing machine, but not vice versa. There is no contradiction between the halting problem and the statement that "the simply-typed lambda calculus proves that programs terminate in a finite amount of time". See http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis#Non-computable_functions for details.
Daniel Pryden
A: 

Dynamic programming languages basically do things at runtime that other languages do at compile time. This includes extending the program by adding new code, extending objects and definitions, or modifying the type system, all during program execution rather than compilation.

http://en.wikipedia.org/wiki/Dynamic%5Fprogramming%5Flanguage
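
For instance, here is a minimal Python sketch (class and method names are just illustrative) of extending an existing class while the program is running:

class Greeter:
    def __init__(self, name):
        self.name = name

def shout(self):
    # defined after the class already exists
    return self.name.upper() + "!"

Greeter.shout = shout              # the class is extended during execution
print(Greeter("world").shout())    # WORLD!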

Here are some common examples

http://en.wikipedia.org/wiki/Category%3ADynamic%5Fprogramming%5Flanguages

And to answer your original question:

They're slow, you need to use a basic text editor to write them (no Intellisense or code prompts), and they tend to be a big pain in the ass to write and maintain. BUT the most famous one (JavaScript) runs on practically every browser in the world - that's a good thing, I guess. Let's call it 'broad compatibility'. I think you could probably get a dynamic language interpreter for most operating systems, but you certainly couldn't get a compiler for non-dynamic languages for most operating systems.

reach4thelasers
So, for the sake of completeness, what makes this a good approach?
musicfreak
I strongly disagree with this statement, and I point you to http://steve.yegge.googlepages.com/is-weak-typing-strong-enough
RCIX
Sorry, I didn't mean to offend. Although this sentence jumped out at me on that page: "The strong vs. weak typing issue really gets people worked up." hehe... Let's say dynamic languages serve a purpose. You wouldn't write a high-transaction stock trading system in one, though, would you?
reach4thelasers
Well... if I had a good team of programmers, I would most definitely choose a dynamic language. But then that's true of most any language...
RCIX
+4  A: 

The main thing is that you avoid a lot of the redundancy that comes from making the programmer "declare" this, that, and the other. A similar advantage could be obtained through type inference (Boo does that, for example), but not quite as cheaply and flexibly. As I wrote in the past...:

complete type checking or inference requires analysis of the whole program, which may be quite impractical -- and stops what Van Roy and Haridi, in their masterpiece "Concepts, Techniques and Models of Computer Programming", call "totally open programming". Quoting a post of mine from 2004: """ I love the explanations of Van Roy and Haridi, p. 104-106 of their book, though I may or may not agree with their conclusions (which are basically that the intrinsic difference is tiny -- they point to Oz and Alice as interoperable languages without and with static typing, respectively), all the points they make are good. Most importantly, I believe, the way dynamic typing allows real modularity (harder with static typing, since type discipline must be enforced across module boundaries), and "exploratory computing in a computation model that integrates several programming paradigms".

"Dynamic typing is recommended", they conclude, "when programs must be as flexible as possible". I recommend reading the Agile Manifesto to understand why maximal flexibility is crucial in most real-world application programming -- and therefore why, in said real world rather than in the more academic circles Dr. Van Roy and Dr. Hadidi move in, dynamic typing is generally preferable, and not such a tiny issue as they make the difference to be. Still, they at least show more awareness of the issues, in devoting 3 excellent pages of discussion about it, pros and cons, than almost any other book I've seen -- most books have clearly delineated and preformed precedence one way or the other, so the discussion is rarely as balanced as that;).

Alex Martelli
A: 

In statically typed languages, we're supposed to code to interfaces, not specific implementations. If you use

AbstractList data = getList();

AbstractList getList() { return new LinkedList(); }

All you have to do to change the type of list used by the program is change getList(). By expanding the set of types your code can operate on, you get better, more reusable code. One of the advantages of dynamic languages is that they eliminate many of the restrictions static languages impose on types.

int multiply(int a, int b) { return a*b; }  /* static language */

def multiply(a, b):                         # dynamic language (Python)
    return a*b

The static version can only ever be used on ints. Period. The dynamic version puts no restriction on what types can be passed in; it simply invokes the * operator on its two arguments. This could result in multiplying numbers, repeating a string, or whatever effect you give the * operator in your own type. This is infinitely more flexible.
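
For example, here is a short Python sketch (the Money class is just an illustration) of the same multiply function being reused across built-in and user-defined types:

def multiply(a, b):
    return a * b

print(multiply(3, 4))       # 12      (integer multiplication)
print(multiply("ab", 3))    # ababab  (string repetition)

class Money:
    def __init__(self, cents):
        self.cents = cents
    def __mul__(self, factor):
        return Money(self.cents * factor)

print(multiply(Money(150), 3).cents)  # 450 (user-defined * via __mul__)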

CrazyJugglerDrummer
Although, with type inferencing, you can have a static language that works like your dynamic example (the Boo language is exactly like this).
Daniel Pryden
That only goes so far, though. I think with your Boo example, if you defined a similar multiply function in Boo it would infer the types to be some sort of number, and thus you couldn't apply it to strings. In this example it's not as obvious why you would want to do that, but there are other cases where it is useful. (@CrazyJugglerDrummer: it would be great if you could come up with an example where type inference falls short of dynamic typing!)
RCIX
+2  A: 

I'd start by recommending Steve Yegge's post Is Weak Typing Strong Enough, then his post Dynamic Languages Strike Back. That ought to at least get you started!

RCIX
A: 

Let's do a few advantage/disadvantage comparisons:

Dynamic Languages:

  • Type decisions can be changed with minimal code impact.
  • Code can be written and compiled in isolation. I don't need an implementation, or even a formal description of a type, to write code against it.
  • Have to rely on unit tests to find type errors (see the sketch below).
  • Language is more terse. Less typing.
  • Types can be modified at runtime.
  • Edit and continue is much easier to implement.

Static Languages:

  • The compiler tells you about type errors.
  • Editors can offer prompts like Intellisense much more richly.
  • Stricter syntax, which can be frustrating.
  • More typing is (usually) required.
  • Compiler can do better optimization if it knows the types ahead of time.

To complicate things a little more, consider that languages such as C# are going partially dynamic (in feel anyway) with the var construct, and that languages like Haskell are statically typed but feel dynamic because of type inference.
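
To illustrate the unit-test point in the first list, here is a minimal Python sketch (names are illustrative): the type mistake only surfaces when a test actually runs the code, whereas a static compiler would have rejected it before the program ever ran.

import unittest

def total_price(quantity, unit_price):
    # nothing stops a caller from passing a string here
    return quantity * unit_price

class TotalPriceTest(unittest.TestCase):
    def test_string_quantity_fails_at_runtime(self):
        # "3" * 2.5 raises TypeError only when this line executes
        with self.assertRaises(TypeError):
            total_price("3", 2.5)

if __name__ == "__main__":
    unittest.main()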

Steve Rowe
Interesting points. However, it's been fairly well shown that dynamic typing does not *mathematically 100% always* equal terseness, but it does in most cases. Also, you should read http://steve.yegge.googlepages.com/is-weak-typing-strong-enough .
RCIX
Terseness isn't an intrinsic feature of dynamically typed languages. PHP, classic ASP, JavaScript, and Objective-C are surprisingly bloated languages compared to OCaml, Haskell, SML, and Scala. For what it's worth, I usually find my F# is comparable to or even terser than the equivalent Python.
Juliet
"var" is in no way dynamic - C# 3 is entirely statically typed. (C# 4 *is* going dynamic where you want it to.) "var" is just type inference again. That's why the blog post is going to be about the dynamic side of things, not the other things that often accompany dynamic languages.
Jon Skeet
The fact that var is type inferred is why I said "(in feel anyway)". You are correct that it isn't dynamically typed, but it acts very much like it is.
Steve Rowe