views:

1874

answers:

6

What are your opinions and expectations on Google's Unladen Swallow? From their project plan:

We want to make Python faster, but we also want to make it easy for large, well-established applications to switch to Unladen Swallow.

  1. Produce a version of Python at least 5x faster than CPython.
  2. Python application performance should be stable.
  3. Maintain source-level compatibility with CPython applications.
  4. Maintain source-level compatibility with CPython extension modules.
  5. We do not want to maintain a Python implementation forever; we view our work as a branch, not a fork.

And even sweeter:

In addition, we intend to remove the GIL and fix the state of multithreading in Python. We believe this is possible through the implementation of a more sophisticated GC

It almost looks too good to be true, like the best of PyPy and Stackless combined.

More info:

Update: as DNS pointed out, there was a related question: http://stackoverflow.com/questions/695370/what-is-llvm-and-how-is-replacing-python-vm-with-llvm-increasing-speeds-5x

+17  A: 

I have high hopes for it.

  1. This is being worked on by several people from Google. Seeing as how the BDFL is also employed there, this is a positive.

  2. Off the bat, they state that this is a branch, and not a fork. As such, it's within the realm of possibility that this will eventually get merged into trunk.

  3. Most importantly, they have a working version. They're using a version of Unladen Swallow right now for YouTube stuff.

They seem to have their shit together. They have a relatively detailed plan for a project at this stage, and they have a list of tests they use to gauge performance improvements and regressions.

I'm not holding my breath on GIL removal, but even if they never get around to that, the speed increases alone make it awesome.

thedz
Yes, GIL removal is a must-have now that desktop machines have 8 cores.
vartec
@vartec Not really; threads are not the only path to concurrency, and multiprocessing (as in having multiple processes) has a long and storied history (at least on the *nix side of the fence).
Aaron Maenpaa
@zacherates: multi-processing via processes (instead of threads) works great. Structure your big operation so that 8 processes form some massive pipeline and you can tie up every core (see the sketch after these comments).
S.Lott
@zacherates: Well, multiprocessing has a history. Green threads have a future.
vartec
@zacherates: IPC is quite tedious compared to threading.
vartec
Are you sure they are actually using it in a production environment (YouTube)?
lhahne
@Aaron: Forgive me if this is a silly question, but since this is an interpreted language, wouldn't each process need its own copy of the interpreter, excessively wasting memory?
BigBeagle
My thread is bigger than your thread.
Chris Shouts
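
A minimal sketch of the process-pipeline idea from the comments above, using the standard `multiprocessing` module; the `work()` function and the chunk sizes are hypothetical stand-ins for whatever CPU-bound step you actually have:

    # One worker process per core side-steps the GIL, because each worker
    # runs in its own interpreter with its own lock.
    from multiprocessing import Pool, cpu_count

    def work(chunk):
        # Stand-in for a CPU-bound stage of the pipeline.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        n = cpu_count()
        chunks = [range(i * 100000, (i + 1) * 100000) for i in range(n)]
        with Pool(processes=n) as pool:       # context-manager form needs Python 3.3+
            results = pool.map(work, chunks)  # chunks are distributed across the cores
        print(sum(results))

The trade-off mentioned in the comments shows up here too: each worker is a full interpreter process, so you pay in memory and in pickling overhead for whatever crosses the process boundary.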
+4  A: 

I think the project has noble goals and with enough time (2-3 years), they will probably reach most of them.

They may not be able to merge their branch back into the trunk because Guido's current view is that CPython should be a reference implementation (i.e., it shouldn't do things that are impossible for IronPython and Jython to copy). I've seen reports that this is what kept the cool parts of Stackless from being merged into CPython.

David Locke
But Guido also works for the Google ;-)
vartec
That's true, it may be easier for someone inside Google to influence the BDFL. However, Google is a very large organization, and the people behind this may never even meet Guido.
David Locke
+4  A: 

This question discussed many of the same things. My opinion is that it sounds great, but I'm waiting to see what it looks like, and how long it takes to become stable.

I'm particularly concerned with compatibility with existing code and libraries, and how the library-writing community responds to it. Ultimately, aside from personal hobby projects, it's of zero value to me until it can run all my third-party libraries.

DNS
Somehow I've missed that question. Thanks for the link.
vartec
+1  A: 

I think that a 5 times speed improvement is not all that important for me personally.

It is not an order-of-magnitude change, although if you consume CPU power at the scale of Google it can be a worthwhile investment to have some of your staff work on it.

Many of the speed improvements will likely make it into CPython eventually.

Getting rid of the GIL is interesting in principle, but removing it will likely reveal lots of problems with modules that are not thread-safe.

I do not think I will use Unladen Swallow any time soon, but I like how this attention to performance may improve the regular Python versions.

James Dean
Yes, it's not an order-of-magnitude change; it's half an order of magnitude :)
ΤΖΩΤΖΙΟΥ
Actually it is just a linear speed improvement. So if a problem takes time n, it will now take 0.2 * n. An order-of-magnitude improvement would be something like replacing a linear search with a tree or hash-table lookup: O(n) to O(log n) to O(1).
James Dean
@James: http://en.wikipedia.org/wiki/Order_of_magnitude
vartec
I stand corrected.
James Dean
rofl guys, I learnt something too
Matt Joiner
Still - it seems like 3.16 is closer to "half an order of magnitude" since `log10(3.16) ~= 0.5` ;)
viraptor
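
For reference, the arithmetic behind these last few comments is just base-10 logarithms (nothing specific to Unladen Swallow):

    import math

    print(math.log10(5))  # ~0.70: a 5x speedup is roughly 0.7 of an order of magnitude
    print(10 ** 0.5)      # ~3.16: the speedup that would be exactly half an order of magnitude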
+1  A: 

Guido just posted an article to his Twitter account that is an update to the Jesse Noller article posted earlier: http://jessenoller.com/2010/01/06/unladen-swallow-python-3s-best-feature/. Sounds like they are moving ahead with Python 3 as previously mentioned.

Stedy
+9  A: 

Hello, I'm sorry to disappoint you, but when you read PEP 3146 things look bad.

The improvement so far is minimal, and the compiler code gets more complicated. Also, removing the GIL has many downsides.

BTW, PyPy seems to be faster than Unladen Swallow in some tests.

Joschua
yup............
Matt Joiner