I was reading about the pros and cons of interpreted languages, and one of the most common cons is slowness, but why are programs written in interpreted languages slow?

+24  A: 

Native programs run using instructions written for the processor they run on.

Interpreted languages are just that, "interpreted". Some other form of instruction is read, and interpreted, by a runtime, which in turn executes native machine instructions.

Think of it this way: if you can talk to someone in your native language, that generally works faster than having an interpreter translate your language into some other language for the listener to understand.

Note that what I am describing above applies when a language is running in an interpreter. Many languages that have interpreters also have native compilers that produce native machine instructions. The speed reduction (however large or small it might be) only applies to the interpreted context.

So, it is slightly incorrect to say that the language is slow, rather it is the context in which it is running that is slow.

C# is not an interpreted language, even though it employs an intermediate language (IL). The IL is JIT-compiled to native instructions before being executed, so it has some of the same speed reduction, but not all of it. Still, I'd bet that if you built a fully fledged interpreter for C# or C++, it would run slower as well.

And just to be clear, when I say "slow", that is of course a relative term.

Lasse V. Karlsen
Great analogy, I've never thought of it that way.
One additional note: things aren't as bad as they were 10 years ago. Currently, it's possible to build very interesting apps with interpreted code. I guess it all depends on the kind of apps you're building...
Luis Abreu
Note that you have to be careful with labeling a *language* as "interpreted" and consequently "slow". For example JavaScript always was interpreted (and slow), but in the recent generation of JS engines when running expensive computational tasks, it gets compiled to native instructions.
Oh, I agree, I should clarify the answer.
Lasse V. Karlsen
+2  A: 

Loop 100 times, and the contents of the loop are interpreted into low-level code 100 times.

Not cached, not reused, not optimised.

In simple terms, a compiler does that translation into low-level code once

Edit, after comments:

  • JIT is compiled code, not interpreted. It's just compiled later, not up-front
  • I refer to the classical definition, not modern practical implementations
Do you have a good source for this? I find it quite hard to believe...
Of course:
That's the description of a very lame implementation of an interpreted language. I do not doubt some interpreters did this in the past, but today the only one I would expect to be caught parsing forever and ever would be cmd.exe
@Greg: It's true, although the answer was worded rather poorly (no offense gbn). I think he meant that the bytecode for the loop is interpreted 100 times, rather than simply being executed 100 times. This is not an issue for JIT-capable VMs, however, because they would compile the loop down to native code, thus it would be just as fast as in a compiled program.
As soon as you introduce a JIT into the mix, I don't think you can call it an interpreted language - at least not a *purely* interpreted one.
Jon Skeet
Interpretation *per se* does this. Of course, once you have it working you'd immediately start thinking about doing something smarter, which all the big "scripting" languages do. Many are bytecode compiled at run time, some use JIT compilers, etc. etc. etc.
I think that while Tony is technically correct, by his definition, there are practically speaking very few interpreted languages in major production use. This answer may be correct, but it comes down to semantics.


An interpreted language is processed at runtime: every line is read, analysed, and executed. Having to reprocess a line every time through a loop is what makes interpreted languages so slow. This overhead means that interpreted code runs between 5 and 10 times slower than compiled code. Interpreted languages like Basic or JavaScript are the slowest. Their advantage is not needing to be recompiled after changes, which is handy when you're learning to program.

The 5-10 times slower figure is not necessarily true for languages like Java and C#, however. They are interpreted, but the just-in-time compilers can generate machine-language instructions for some operations, speeding things up dramatically (near the speed of a compiled language at times).

Kaleb Brasee
That's not entirely correct. C# and Java *are* compiled, but compiled to IL, not native instructions. To say that they are interpreted implies that the source is analyzed at runtime, which is not the case with C# (though I am not sure about Java - not that familiar with it).
David Lively
That's true, the source isn't analyzed at runtime, the compiled bytecode instructions are. I was thinking that JIT compilation (which most Java VMs use now) counts as another form of interpretation, but I guess it does not. Java CAN be interpreted if the VM you're using does that, and a few of them do, but the majority do not for obvious performance reasons.
Kaleb Brasee

Interpreted languages need to read and interpret your source code at execution time. With compiled code a lot of that interpretation is done ahead of time (at compilation time).


Very few contemporary scripting languages are "interpreted" these days; they're typically compiled on the fly, either into machine code or into some intermediate bytecode language, which is (more efficiently) executed in a virtual machine.

Having said that, they're slower because your cpu is executing many more instructions per "line of code", since many of the instructions are spent understanding the code rather than doing whatever the semantics of the line suggest!

Jonathan Feinberg
+2  A: 

In addition to the other answers there's optimization: when you're compiling a program, you don't usually care how long the compilation takes, so the compiler has lots of time to optimize your code. When you're interpreting code, optimization has to happen very quickly, so some of the cleverer optimizations can't be applied.


Read this

This is the idea in that post relevant to your problem:

Execution by an interpreter is usually much less efficient than regular program execution. This happens because either every instruction must be interpreted at runtime or, as in newer implementations, the code has to be compiled to an intermediate representation before every execution.

BTW, did you notice that the question you linked to has the same poster...Nathan shows every sign of not being serious.
:) sorry, I didn't see that until you told. thanks

For the same reason that it's slower to talk through a translator than in your native language, or to read with a dictionary: it takes time to translate.

Update: no, I didn't see that my answer is the same as the accepted one, to a degree ;-)


There is no such thing as an interpreted language. Any language can be implemented by an interpreter or a compiler. These days most languages have implementations using a compiler.

That said, interpreters are usually slower, because they need to process the language (or something rather close to it) at runtime and translate it to machine instructions. A compiler does this translation to machine instructions only once; after that, the instructions are executed directly.

+1  A: 

A simple question, without any real simple answer. The bottom line is that all computers really "understand" is binary instructions, which is what "fast" languages like C are compiled into.

Then there are virtual machines, which understand different binary instructions (like Java and .NET), but those have to be translated on the fly to machine instructions by a Just-In-Time compiler (JIT). That is almost as fast (even faster in some specific cases, because the JIT has more information than a static compiler about how the code is being used.)

Then there are interpreted languages, which usually also have their own intermediate binary instructions, but the interpreter functions much like a loop with a large switch statement in it, with a case for every instruction describing how to execute it. This level of abstraction over the underlying machine code is slow. There are more instructions involved, long chains of function calls in the interpreter to do even simple things, and it can be argued that the memory and cache aren't used as effectively as a result.

But interpreted languages are often fast enough for the purposes for which they're used. Web applications are invariably bound by IO (usually database access) which is an order of magnitude slower than any interpreter.

.NET is a framework, not a VM. A VM-friendly language typically (but not always) isolates the code from the hardware and OS, and provides an abstract "machine code" that source is compiled to which the VM then, at run time, translates into instructions compatible with the target OS or hardware.
David Lively
+1  A: 

This is a good question, but it should be formulated a little differently in my opinion, for example: "Why are interpreted languages slower than compiled languages?"

I think it is a common misconception that interpreted languages are slow per se. Interpreted languages are not slow, but, depending on the use case, might be slower than the compiled version. In most cases interpreted languages are actually fast enough!

"Fast enough", plus the increase in productivity from using a language like Python over, for example, C should be justification enough to consider an interpreted language. Also, you can always replace certain parts of your interpreted program with a fast C implementation, if you really need speed. But then again, measure first and determine if speed is really the problem, then optimize.

+1 for avoiding premature optimization! Good answer!

Yeah, interpreted languages are slow...

However, consider the following. I had a problem to solve. It took me 4 minutes to solve the problem in Python, and the program took 0.15 seconds to run. Then I tried to write it in C: I got a runtime of 0.12 seconds, but it took me 1 hour to write. All this because the practical way to solve the problem in question was to use hashtables, and the hashtable lookups dominated the runtime anyway.

+1  A: 

Think of the interpreter as an emulator for a machine you don't happen to have

The short answer is that the compiled languages are executed by machine instructions whereas the interpreted ones are executed by a program (written in a compiled language) that reads either the source or a bytecode and then essentially emulates a hypothetical machine that would have run the program directly if the machine existed.

Think of the interpreted runtime as an emulator for a machine that you don't happen to actually have around at the moment.

This is obviously complicated by the JIT (Just In Time) compilers that Java, C#, and others have. In theory, they are just as good as "AOT" ("Ahead Of Time") compilers, but in practice those languages run slower and are handicapped by needing to keep the compiler around, using up memory and time at the program's runtime. But if you say any of that here on SO, be prepared to attract rabid JIT defenders who insist that there is no theoretical difference between JIT and AOT. If you ask them if Java and C# are as fast as C and C++, then they start making excuses and kind of calm down a little. :-)

So, C++ totally rules in games where the maximum amount of available computing can always be put to use.

On the desktop and web, information-oriented tasks are often done by languages with more abstraction or at least less compilation, because the computers are very fast and the problems are not computationally intensive, so we can spend some time on goals like time-to-market, programmer productivity, reliable memory-safe environments, dynamic modularity, and other powerful tools.


All the answers seem to miss the really important point here: the detail of how "interpreted" code is implemented.

Interpreted languages are slower because their method, object, and global variable space model is dynamic. This requires many extra hashtable lookups on each access to a variable or method call, and this is where most of the time is spent. It is a painful random memory lookup which really hurts when you get an L1/L2 cache miss.

Google's JavaScript engine V8 is so fast, approaching C speed in places, thanks to a simple optimization: it treats the object data model as fixed and creates internal code that accesses it like the data structures of a natively compiled program. When a variable or method is added or removed, the whole compiled code is discarded and compiled again.

The technique is well explained in the Deutsch/Schiffman paper "Efficient Implementation of the Smalltalk-80 System".

The question of why PHP, Python, and Ruby aren't doing this is pretty simple to answer:

The technique is extremely complicated to implement.

And Google only has the money to pay for this in JavaScript because a fast browser-based JavaScript engine is a fundamental need of their billion-dollar business model.