views: 1006
answers: 6

Erlang's Characteristics

From Erlang Programming (2009):

Erlang concurrency is fast and scalable. Its processes are lightweight in that the Erlang virtual machine does not create an OS thread for every created process. They are created, scheduled, and handled in the VM, independent of underlying operating system. As a result, process creation time is of the order of microseconds and independent of the number of concurrently existing processes. Compare this with Java and C#, where for every process an underlying OS thread is created: you will get some very competitive comparisons, with Erlang greatly outperforming both languages.

From Concurrency oriented programming in Erlang (pdf) (slides) (2003):

We observe that the time taken to create an Erlang process is constant 1µs up to 2,500 processes; thereafter it increases to about 3µs for up to 30,000 processes. The performance of Java and C# is shown at the top of the figure. For a small number of processes it takes about 300µs to create a process. Creating more than two thousand processes is impossible.

We see that for up to 30,000 processes the time to send a message between two Erlang processes is about 0.8µs. For C# it takes about 50µs per message, up to the maximum number of processes (which was about 1,800 processes). Java was even worse: for up to 100 processes it took about 50µs per message; thereafter it increased rapidly to 10ms per message when there were about 1,000 Java processes.
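These figures are easy to sanity-check on your own machine. Below is a minimal micro-benchmark sketch (the module and function names are mine, it needs a reasonably modern Erlang/OTP for the named fun, and the absolute numbers will vary with release and hardware):

```erlang
%% spawn_bench.erl -- rough micro-benchmark of process creation and
%% message passing. Illustrative only; not from the quoted sources.
-module(spawn_bench).
-export([spawn_time/1, msg_time/1]).

%% Average time in microseconds to spawn N processes that exit immediately.
spawn_time(N) ->
    {Micros, _} = timer:tc(fun() ->
        [spawn(fun() -> ok end) || _ <- lists:seq(1, N)]
    end),
    Micros / N.

%% Average time in microseconds per message, over N round trips between
%% the caller and an echo process.
msg_time(N) ->
    Echo = spawn(fun Loop() ->
               receive {From, Msg} -> From ! Msg, Loop() end
           end),
    {Micros, _} = timer:tc(fun() -> ping(Echo, N) end),
    exit(Echo, kill),
    Micros / (2 * N).   %% two messages per round trip

ping(_Echo, 0) -> ok;
ping(Echo, N) ->
    Echo ! {self(), ping},
    receive ping -> ok end,
    ping(Echo, N - 1).
```

Calling spawn_bench:spawn_time(100000) and spawn_bench:msg_time(100000) from the shell should give figures in the same ballpark as the quotes above, i.e. on the order of a microsecond or less per operation on current hardware.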

My thoughts

I don't fully understand, technically, why Erlang processes are so much more efficient at spawning new processes and have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling and context switching, and keep track of the values in the registers, and so on...

Put simply: why aren't OS threads implemented in the same way as Erlang processes? Do they have to support something more? And why do they need a bigger memory footprint, slower spawning, and slower communication?

Technically, why are Erlang processes more efficient than OS threads when it comes to spawning and communication? And why can't threads in the OS be implemented and managed in the same efficient way? And why do OS threads have a bigger memory footprint, slower spawning, and slower communication?

More reading

+1  A: 

Because the Erlang interpreter only has to worry about itself, whereas the OS has many other things to worry about.

Francisco Soto
+6  A: 

Erlang processes correspond (approximately) to green threads in other languages; there's no OS-enforced separation between the processes. (There may well be language-enforced separation, but that's a lesser protection despite Erlang doing a better job than most.) Because they're so much lighter-weight, they can be used far more extensively.

OS threads on the other hand are able to be simply scheduled on different CPU cores, and are (mostly) able to support independent CPU-bound processing. OS processes are like OS threads, but with much stronger OS-enforced separation. The price of these capabilities is that OS threads and (even more so) processes are more expensive.


Another way to understand the difference is this. Supposing you were going to write an implementation of Erlang on top of the JVM (not a particularly crazy suggestion), you'd make each Erlang process an object with some state. You'd then have a pool of Thread instances (typically sized according to the number of cores in your host system; that's a tunable parameter in real Erlang runtimes, BTW) which run the Erlang processes. In turn, that distributes the work to be done across the real system resources available. It's a pretty neat way of doing things, but it relies utterly on the fact that each individual Erlang process doesn't do very much. That's OK of course; Erlang is structured so as not to require those individual processes to be heavyweight, since it is the overall ensemble of them which executes the program.
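Incidentally, the real BEAM runtime works much like that JVM sketch: a small pool of scheduler threads (by default one per core, tunable via the +S emulator flag) multiplexes however many Erlang processes currently exist. You can see the ratio from an Erlang shell; the calls below are standard, only the magnitudes in the comments are illustrative:

```erlang
%% How many OS-level scheduler threads the VM is running.
erlang:system_info(schedulers_online).
%% => typically one per core (e.g. 8), no matter how many processes exist

%% Spawn a lot of processes that just sit waiting for a message
%% (older releases may need the +P flag to raise the process limit)...
[spawn(fun() -> receive stop -> ok end end) || _ <- lists:seq(1, 100000)].

%% ...and they are all multiplexed onto that same handful of OS threads.
erlang:system_info(process_count).
%% => roughly 100000, plus a few system processes
```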

In many ways, the real problem is one of terminology. The things that Erlang calls processes (and which correspond strongly to the same concept in CSP, CCS, and particularly the π-calculus) are simply not the same as the things that languages with a C heritage (including C++, Java, C#, and many others) call a process or a thread. There are some similarities (all involve some notion of concurrent execution) but there's definitely no equivalence. So be careful when someone says “process” to you; they might understand it to mean something utterly different…

Donal Fellows
Erlang doesn't get anywhere close to the pi-calculus. The pi-calculus assumes synchronous events over channels that can be bound to variables; this kind of concept doesn't fit the Erlang model at all. Try the join-calculus; Erlang is closer to that, although it would still need to be able to natively join on some messages and whatnot. There was a thesis (and project) named JErlang dedicated to implementing it.
I GIVE TERRIBLE ADVICE
It all depends on what exactly you view the pi-calculus to be (and you can model asynchronous channels with synchronous channels plus buffer processes).
Donal Fellows
You are just saying that Erlang processes are lightweight but you are not explaining why they have a smaller footprint (are lightweight) and why they have better performance than OS threads.
Jonas
@Jonas: For some types of task (particularly computation-heavy tasks) OS threads do better. Mind you, those are not typically tasks for which Erlang is used; Erlang is focussed towards having large numbers of simple communicating tasks. One of the gains from doing that is that in the case of a group of tasks that hand a piece of work around and wait for the result, that can all be done in a single OS thread on a single processor, which is more efficient than having context switches.
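To make that concrete, here is a small hand-off sketch (hypothetical module and function names): a chain of Erlang processes passing one piece of work along, where each hop happens inside the VM rather than through the kernel.

```erlang
%% pipeline.erl -- a chain of processes handing one piece of work along.
%% Hopping from stage to stage happens inside the VM; no kernel context
%% switch is required per hop.
-module(pipeline).
-export([run/2]).

%% Build a chain of Len stages; each stage adds 1 and forwards the result.
run(Len, Input) ->
    First = lists:foldl(fun(_, Next) ->
                            spawn(fun() -> stage(Next) end)
                        end,
                        self(),                 %% the caller is the last hop
                        lists:seq(1, Len)),
    First ! Input,
    receive Result -> Result end.

stage(Next) ->
    receive N -> Next ! N + 1 end.
```

pipeline:run(10000, 0) hands the value through ten thousand processes and returns 10000; wrapping it in timer:tc/1 shows how cheap the hand-offs are.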
Donal Fellows
Theoretically, you could make an OS thread be very cheap too through using a very small stack and carefully controlling the number of other thread-specific resources allocated, but that's quite problematic in practice. (Predicting stack requirements is a bit of a black art.) So instead OS threads are particularly designed to be optimal in the case where there are fewer of them (of the order of the number of CPU cores) and where they are doing more significant amounts of processing each.
Donal Fellows
And "best" is a slippery thing. It massively depends on what you're doing; give me a metric and I'll tell you which is "best". :-)
Donal Fellows
+8  A: 

There are several contributing factors:

  1. Erlang processes use a lightweight cooperative threading model (preemptive at the Erlang level, but under the control of a cooperatively scheduled runtime). This means that it is much cheaper to switch context, because they only switch at known, controlled points and therefore don't have to save the entire CPU state (normal, SSE and FPU registers, address space mapping, etc.).
  2. Erlang processes use dynamically allocated stacks, which start very small and grow as necessary. This permits the spawning of many thousands (even millions) of Erlang processes without sucking up all available RAM (see the sketch after this list for how little each idle process costs).
  3. Erlang used to be single-threaded, meaning that there was no requirement to ensure thread-safety between processes. It now supports SMP, but the interaction between Erlang processes on the same scheduler/core is still very lightweight (there are separate run queues per core).
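A quick way to see how small an idle process really is (a minimal sketch; exact numbers depend on the OTP release and whether the VM is 32- or 64-bit):

```erlang
%% footprint.erl -- average memory cost of an idle Erlang process.
-module(footprint).
-export([per_process/1]).

%% Spawn N processes that just wait for a message and report the average
%% number of bytes each one added to the VM's process memory.
per_process(N) ->
    Before = erlang:memory(processes),
    Pids = [spawn(fun() -> receive stop -> ok end end)
            || _ <- lists:seq(1, N)],
    After = erlang:memory(processes),
    [Pid ! stop || Pid <- Pids],
    (After - Before) / N.
```

On a 64-bit VM this typically comes out at a few kilobytes per process, compared with the megabytes of stack an OS thread usually reserves (though does not necessarily commit) by default.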
Marcelo Cantos
To your 2nd point: and if the process has not run yet, there is no reason to allocate a stack for it at all. In addition, several tricks can be played by fiddling with the GC of a process such that it never collects memory, but that is advanced and somewhat dangerous :)
jlouis
+5  A: 

After some more research I found a presentation by Joe Armstrong.

From Erlang - software for a concurrent world (presentation) (at 13 min):

[Erlang] is a concurrent language – by that I mean that threads are part of the programming language, they do not belong to the operating system. That's really what's wrong with programming languages like Java and C++: their threads aren't in the programming language, threads are something in the operating system – and they inherit all the problems that they have in the operating system. One of the problems is the granularity of the memory management system. The memory management in the operating system protects whole pages of memory, so the smallest size that a thread can be is the smallest size of a page. That's actually too big.

If you add more memory to your machine, you have the same number of bits that protect the memory, so the granularity of the page tables goes up; you end up using, say, 64 kB for a process that you know could run in a few hundred bytes.

I think this answers, if not all, then at least a few of my questions.
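You can also ask the VM directly how big a freshly spawned process is; it is counted in a few hundred words of heap, well under even a single 4 kB page (run in an Erlang shell; exact figures vary by OTP release and word size):

```erlang
%% Size of a freshly spawned, idle process.
Pid = spawn(fun() -> receive stop -> ok end end),
process_info(Pid, [memory, heap_size, stack_size, total_heap_size]).
%% memory is in bytes (a few KB including bookkeeping); heap_size and
%% stack_size are in words (a few hundred) -- far finer-grained than the
%% page-sized minimum Armstrong describes for OS threads.
```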

Jonas
**Related:** http://stackoverflow.com/questions/2267545/why-are-threads-called-as-light-weight-processes
Jonas
+2  A: 

I've implemented coroutines in assembler, and measured performance.

Switching between coroutines, a.k.a. Erlang processes, takes about 16 instructions and 20 nanoseconds on a modern processor. Also, you often know the process you are switching to (for example, a process receiving a message in its queue can be implemented as a straight hand-off from the calling process to the receiving process), so the scheduler doesn't come into play, making it an O(1) operation.

To switch OS threads, it takes about 500-1000 nanoseconds, because you're calling down to the kernel. The OS thread scheduler might run in O(log(n)) or O(log(log(n))) time, which will start to be noticeable if you have tens of thousands, or even millions of threads.

Therefore, Erlang processes are faster and scale better because both the fundamental operation of switching is faster, and the scheduler runs less often.

Surfer Jeff
A: 

I think Jonas wanted some numbers comparing OS threads to Erlang processes. The author of Programming Erlang, Joe Armstrong, tested the scalability of spawning Erlang processes against OS threads a while back. He wrote a simple web server in Erlang and tested it against multi-threaded Apache (since Apache uses OS threads). There's an old website with the data, dating back to 1998; I've only managed to find that site exactly once, so I can't supply a link, but the information is out there. The main point of the study was that Apache maxed out at just under 8K processes, while his hand-written Erlang server handled 10K+ processes.

Jurnell
I think you are talking about this one: http://www.sics.se/~joe/apachevsyaws.html But I asked how Erlang makes its processes so efficient compared to kernel threads.
Jonas