views: 233

answers: 8

"There is no such thing as a "compiled language" or "interpreted language". Whether a language implementer chooses to write a compiler, an interpreter or anything in between is an implementation detail and has nothing to do with the language. "

Is the above statement true?

+8  A: 

Yes, it is true in the strictest interpretation. You can find both a C++ interpreter and a Javascript compiler, for example. However, you will find that some types of languages (statically typed, for example) lend themselves well to native code compilation. Other languages (dynamically typed, for example) are commonly implemented using bytecode compilation combined with a virtual machine execution environment.

Greg Hewgill
+3  A: 

Sort of. Generally, both interpreters and compilers first need to parse the source code and turn it into an intermediate representation called an AST (abstract syntax tree). A compiler then turns the AST into executable code (through various transformations), while an interpreter might just directly 'interpret' the AST, or sometimes compile and execute it (just-in-time compilation).

The statement is correct in that this has nothing to do with the language: in theory, you can write an interpreter and compiler for any language. Which one to use really depends on the use-case, scenario and environment.

A compiler has the advantage that it only needs to do its job once, regardless of how often you then execute the program. An interpreter needs to parse the source every time (or do some caching), so there is an overhead for each execution which might take far longer than the actual execution time of the final program. On the other hand, an interpreter is more flexible: it can take into account the current environment and thus do optimizations a compiler cannot. But the differences don't stop here; these are just two obvious points.
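The parse-once versus parse-every-time trade-off described above can be sketched with Python's built-in `compile` and `eval` (a stand-in chosen for brevity, not something the answer itself references):

```python
import timeit

source = "sum(i * i for i in range(100))"

# A compiler-style setup pays the parsing cost once up front...
code = compile(source, "<example>", "eval")

# ...while a naive interpreter re-parses the source on every execution.
reparse_time = timeit.timeit(lambda: eval(source), number=1000)
precompiled_time = timeit.timeit(lambda: eval(code), number=1000)

# Both paths compute the same result; only the per-execution overhead differs.
assert eval(source) == eval(code) == 328350
```

On typical runs the pre-compiled version is noticeably faster, since the 1000 executions skip the repeated parse step.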

DarkDust
I've written several interpreters and compilers - none of them built or used an AST.
anon
What did you use instead? And how did you do semantic analysis?
DarkDust
A: 

The above statement is true.

Then again, one might argue it is not true enough in the real world. If all existing implementations of a language rely on compilation, the language can legitimately be referred to as a compiled language.

GSerg
+1  A: 

The language design has to do with the grammar for the higher-level input portion and the lower-level output code that's executed on the target.

There's an abstract syntax tree in between the two.

Traditionally, if you write the lower-level output code to execute on a particular hardware platform and its specific instruction set, the output is "compiled".

If someone decides to write an interpreter to act as the target, the output code is the instruction set or byte code that the interpreter expects. The additional level of indirection means that the interpreted code can run on any hardware platform that has an interpreter implementation.

So the statement is correct if we call "language design" the grammar and the lexer/parser piece.

It's not strictly correct if we're talking about the code generator.

It's possible to emit a particular language as both interpreted and compiled simply by calling different code generators to walk the AST.

So perhaps that's how the distinction is blurred. But I think it's still there.
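The "different code generators walking the same AST" idea can be sketched in Python, whose `ast` module exposes the tree directly (the `walk` evaluator below is an invented toy backend, not anything from the answer):

```python
import ast

source = "1 + 2 * 3"
tree = ast.parse(source, mode="eval")  # one shared AST

# Backend 1: a tiny tree-walking interpreter over the AST.
def walk(node):
    if isinstance(node, ast.Expression):
        return walk(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        left, right = walk(node.left), walk(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
    raise NotImplementedError(ast.dump(node))

# Backend 2: hand the very same AST to CPython's bytecode compiler.
compiled = compile(tree, "<example>", "eval")

# Same tree, two code paths, one answer.
assert walk(tree) == eval(compiled) == 7
```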

duffymo
I think you are misunderstanding the question. Nobody is denying the existence of compilers and interpreters. What is being denied is the existence of compiled languages and interpreted languages. A language is just a set of abstract mathematical rules. A language isn't compiled or interpreted. A language just *is*. As evidence, just take one of the gazillions of programming languages for which *no* implementation exists. (Say, Smalltalk-71 or Modula-1.)
Jörg W Mittag
I'm saying pretty much what everyone else is saying. How is it that I'm singled out as the one who is misunderstanding? I'm talking about the underlying implementation details, but coming to the same conclusion as most of the other respondents.
duffymo
A: 

It's true only in the sense that ultimately both compiled and interpreted languages must generate machine code. It does have an effect on the language insofar as traditionally certain paradigms are easier in one than the other. For example, in general, closures or blocks are easier to implement in interpreted languages than in compiled languages. This is because there is effectively no difference between compile-time and runtime scoping in interpreted languages. Thus dynamic scoping TENDS to be easier to implement in interpreted languages.
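The dynamic-scoping point can be illustrated with a toy sketch: an interpreter that keeps an explicit stack of call frames and resolves names by searching that stack at runtime (all names here are invented for illustration):

```python
# Hypothetical mini-interpreter state: a stack of call frames.
frames = [{"x": 1}]  # global frame

def lookup(name):
    # Dynamic scoping: search the *call* stack, most recent frame first.
    for frame in reversed(frames):
        if name in frame:
            return frame[name]
    raise NameError(name)

def call(func, local_vars):
    frames.append(local_vars)   # push the callee's frame
    try:
        return func()
    finally:
        frames.pop()            # pop on return

def show_x():
    return lookup("x")

assert call(show_x, {}) == 1          # falls through to the global x
assert call(show_x, {"x": 42}) == 42  # caller's binding shadows it
```

Because the lookup happens against whatever frames exist when the code runs, an interpreter gets this behavior almost for free, which is the "TENDS to be easier" claim above.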

ennuikiller
Purely interpreted languages never generate machine code. Rather, each statement will be processed using the equivalent of either an if/else-if/else tree or a switch-case statement. The interpreter doesn't generate any machine code from the statements it examines; rather, it simply performs the appropriate actions directly.
supercat
A: 

A given implementation of a language will be either a "pure" compiler (whose output is executed by a processor as code), a "pure" interpreter (each statement is examined in raw-source form every time it is executed, and nothing about the interpretation is cached), or a hybrid between the two. It's pretty easy to distinguish the "pure" cases from the hybrids, but some hybrids are 'closer' to being compiled than others; the line between a 'compiled' hybrid and an 'interpreted' one can be pretty fuzzy.

I don't think any language is in substantial use, other than assembly language (for which the term "assembler" is usually used in preference to "compiler"), that could not be implemented at least somewhat practically in a hybrid interpreter (performance of a "pure" interpreter is apt to be horrible with anything but the most trivial looping constructs). There are some languages, however, which allow for dynamic code generation in ways that would not be amenable to compilation.

Incidentally, when I say "raw source" form, I do not always mean text format. My first programmable calculator had 99 program steps, each of which could be configured with a keystroke or one of a few special sequencing instructions. The program would never exist in a human-readable text form per se, but rather as a sequence of key numbers. Nonetheless, I would describe that as a purely-interpreted "language" since each program step was evaluated entirely independently.

supercat
A: 

The whole compiler/interpreter thing is sort of dependent on what your intentions for your program are. A compiled program is one that is turned into machine code. An interpreter is used to read an intermediate language and run it on the machine. For instance, when you compile Java, it is turned into Java bytecode and is read and run by the interpreter (which also accounts for the speed disadvantage in comparison to C++).

I don't really think your statement about it having nothing to do with the language is entirely true. One of the main things about Java is that it is supposed to be runnable on different architectures. If it were compiled directly to machine code, that wouldn't be possible.

Glenn Nelson
A: 

It's worth noting that, for (some?) languages that include an "eval"-type statement (especially if it's not possible to determine until runtime whether a given block is code or data), even the most thoroughly compiled version of a given program must be partially interpreted. For such languages, it's not possible to compile them completely: the compiled code must contain an interpreter for the language.

As an example, consider the following code:

set s [eval {sum $a $b $c}]

For the above Tcl code, it's not possible to determine until runtime whether the block (inside the {}) is code or not.
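A Python stand-in for the same idea (the `run` helper and `ops` table are invented for illustration, not Tcl machinery): the snippet stays data until runtime, so an evaluator must ship with the program.

```python
# The snippet below is only known to be code at runtime, so no
# ahead-of-time compiler can resolve it; the program must carry
# its own tiny evaluator.
ops = {"sum": lambda *args: sum(args)}

def run(snippet, env):
    # Stand-in for Tcl's [eval ...]: split the snippet into a command
    # name and arguments, then substitute $-variables from env.
    name, *arg_names = snippet.split()
    return ops[name](*(env[a.lstrip("$")] for a in arg_names))

env = {"a": 1, "b": 2, "c": 3}
assert run("sum $a $b $c", env) == 6
```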

RHSeeger