1527 views · 20 answers

Update: 2009-05-29

Thanks for all the suggestions and advice. I used your suggestions to make my production code execute 2.5 times faster on average than my best result a couple of days ago. In the end I was able to make the java code the fastest.

Lessons:

  • My example code below shows the insertion of primitive ints but the production code is actually storing strings (my bad). When I corrected that the python execution time went from 2.8 seconds to 9.6. So right off the bat, the java was actually faster when storing objects.

  • But it doesn't stop there. I had been executing the java program as follows:

    java -Xmx1024m SpeedTest

But if you set the initial heap size as follows you get a huge improvement:

java -Xms1024m -Xmx1024m SpeedTest

This simple change reduced the execution time by more than 50%. So the final result for my SpeedTest is python 9.6 seconds. Java 6.5 seconds.

Original Question:

I had the following python code:

import time
import sys

def main(args):    
    iterations = 10000000
    counts = set()
    startTime = time.time();    
    for i in range(0, iterations):
        counts.add(i)
    totalTime = time.time() - startTime
    print 'total time =',totalTime
    print len(counts)

if __name__ == "__main__":
    main(sys.argv)

It executed in about 3.3 seconds on my machine, but I wanted to make it faster, so I decided to program it in java. I assumed that because java is compiled and is generally considered to be faster than python, I would see some big paybacks.

Here is the java code:

import java.util.*;
class SpeedTest
{    
    public static void main(String[] args)
    {        
        long startTime;
        long totalTime;
        int iterations = 10000000;
        HashSet counts = new HashSet((2*iterations), 0.75f);

        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++)
        {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());
    }
}

So this java code does basically the same thing as the python code. But it executed in 8.3 seconds instead of 3.3.

I have extracted this simple example from a real-world example to simplify things. The critical element is that I have a collection (set or hashSet) that ends up with a lot of members, much like the example.

Here are my questions:

  1. How come my python implementation is faster than my java implementation?

  2. Is there a better data structure to use than the hashSet (java) to hold a unique collection?

  3. What would make the python implementation faster?

  4. What would make the java implementation faster?

UPDATE:

Thanks to all who have contributed so far. Please allow me to add some details.

I have not included my production code because it is quite complex and would generate a lot of distraction. The case I present above is as simplified as possible. By that I mean that the java HashSet's add() seems to be much slower than the python set's add().

The java implementation of the production code is also about 2.5 - 3 times slower than the python version -- just like the above.

I am not concerned about vm warmup or startup overhead. I just want to compare the code from my startTime to my totalTime. Please do not concern yourselves with other matters.

I initialized the hashset with more than enough buckets so that it should never have to rehash. (I will always know ahead of time how many elements the collection will ultimately contain.) I suppose one could argue that I should have initialized it to iterations/0.75. But if you try it you will see that execution time is not significantly impacted.
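For reference, the no-rehash threshold can be computed directly. A HashSet's backing HashMap resizes once its size exceeds capacity × loadFactor, so an initial capacity of at least expected/loadFactor guarantees no rehash (HashMap additionally rounds the capacity up to a power of two internally). A small sketch, with the class name invented for illustration:

```java
import java.util.HashSet;

// Sketch: pick an initial capacity so a HashSet holding `expected`
// elements never rehashes (capacity * loadFactor must reach `expected`).
class CapacitySketch {
    public static void main(String[] args) {
        int expected = 10000000;
        float loadFactor = 0.75f;
        // smallest capacity satisfying capacity * loadFactor >= expected
        int initialCapacity = (int) (expected / loadFactor) + 1;
        HashSet<Integer> counts = new HashSet<Integer>(initialCapacity, loadFactor);
        System.out.println(initialCapacity); // 13333334
    }
}
```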

I set Xmx1024m for those that were curious (my machine has 4GB of ram).

I am using java version: Java(TM) SE Runtime Environment (build 1.6.0_13-b03).

In the production version I am storing strings (2-15 chars) in the hashSet, so I cannot use primitives, although that is an interesting case.

I have run the code many, many times. I have very high confidence that the python code is between 2.5 and 3 times faster than the java code.

+2  A: 

You need to run it multiple times to get a real idea of "how fast" each runs. The JVM startup time [for one] is adding to the single running time of the Java version.

You're also creating a HashSet with a large initial capacity, which means the backing HashMap will be created with that many available slots, unlike the Python where you create a basic Set. Hard to tell if that would hinder though, as when your HashSet grows it will have to reallocate the stored objects.

Gandalf
As the OP measures the time spent in the loop the JVM startup time does not apply
lothar
Not necessarily, if you're writing a fast response tool that can start processing data as soon as it starts then you want it to start fast.
stefanB
That's not the issue here.
n3rd
"warm up time" if you prefer that phrase.
Tom Hawtin - tackline
I don't even get what you're trying to say with this anymore. As others have mentioned, the startup time doesn't count, which renders your first statement pretty much incorrect. As for the second, the preallocation of storage has to be an advantage for java in this case, so what's your point?
Emil H
I agree my initial assessment was hasty, but not completely incorrect. The JVM will optimize the compiled code in some cases (the -server flag for instance) and it's been shown countless times that the first (or more) run through of an app can take longer to execute some sections of code than later runs.
Gandalf
+3  A: 

I'm not too familiar with python, but I do know HashSet can't contain primitives, so when you say counts.add(i) the i there is getting autoboxed into a new Integer(i) call. That's probably your problem.

If for some reason you really needed a 'set' of integers between 0 and some large n, it's probably best declared as a 'boolean[] set = new boolean[n]'. Then you could go through the array and mark items that are in the set as 'true' without incurring the overhead of creating n Integer wrapper objects. If you wanted to go further than that you could use a byte[] of size n/8 and use the individual bits directly. Or perhaps BigInteger.
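The boolean[] idea can be sketched like this (an illustrative class, not drop-in code; it only works when the members are known to lie in [0, n)):

```java
// Sketch of a boolean[] used as a "set" of integers in [0, n):
// membership is just an array index, so no Integer boxing at all.
class BooleanSetSketch {
    public static void main(String[] args) {
        int n = 10000000;
        boolean[] set = new boolean[n];

        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            set[i] = true; // "add" i to the set
        }
        int size = 0;
        for (boolean member : set) {
            if (member) size++;
        }
        long total = System.currentTimeMillis() - start;
        System.out.println("TOTAL TIME = " + (total / 1000f));
        System.out.println(size); // 10000000
    }
}
```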

EDIT

Stop voting my answer up. It's wrong.

EDIT

No really, it's wrong. I get comparable performance if I do what the question suggests, populating the set with N Integers. If I replace the contents of the for loop with this:

    Integer[] ints = new Integer[N];
    for (int i = 0; i < N; ++i) {
        ints[i] = i;
    }

Then it only takes 2 seconds. If you don't store the Integer at all then it takes less than 200 millis. Forcing the allocation of 10000000 Integer objects does take some time, but it looks like most of the time is spent inside the HashSet put operation.

Jherico
autoboxing is not a performance issue
Pyrolistical
I thought that too. But is this different for Python?
n3rd
I thought exactly the same thing until, like you, I tried it.
Eddie
+2  A: 

Are you using the -server flag with the jvm? You can't test for performance without it. (You also have to warm up the jvm before doing the test.)

Also, you probably want to use TreeSet<Integer>. HashSet will be slower in the long run.

And which jvm are you using? The newest I hope.

EDIT

When I say use TreeSet, I mean in general, not for this benchmark. TreeSet handles the real-world issue of uneven hashing of objects. If you get too many objects in the same bin in a HashSet, performance degrades to about O(n).
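That clumping claim can be illustrated with a sketch (a hypothetical key type, not from the question): a hashCode that puts every key in the same bucket forces each add to scan one long collision chain, so inserts degrade toward O(n).

```java
import java.util.HashSet;

// Illustration: a key whose hashCode sends everything to one bucket
// makes HashSet inserts degrade toward O(n) per operation.
class BadHashSketch {
    static final class BadKey {
        final int i;
        BadKey(int i) { this.i = i; }
        @Override public int hashCode() { return 1; } // all keys collide
        @Override public boolean equals(Object o) {
            return (o instanceof BadKey) && ((BadKey) o).i == i;
        }
    }

    public static void main(String[] args) {
        HashSet<BadKey> set = new HashSet<BadKey>();
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            set.add(new BadKey(i)); // each add walks the single chain
        }
        long total = System.currentTimeMillis() - start;
        System.out.println(set.size() + " inserts in " + total + " ms");
    }
}
```

With a well-distributed hashCode the same 10,000 inserts are effectively instantaneous; the difference is entirely the collision chain.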

Pyrolistical
He initializes startTime *after* the creation of the HashSet.
Igor Krivokon
Why would TreeSet be faster than HashSet? HashSet does resize (although that's not necessary in this case, as it is given an oversized capacity (which can reduce performance due to poor cache locality)).
Tom Hawtin - tackline
So don't use a bad hash function. Good hash function and you remain at O(1).
Tom Hawtin - tackline
Thanks for the clarification. Downvote removed. :)
Emil H
A: 

I agree with Gandalf about the startup time. Also, you are allocating a huge HashSet, which is not at all similar to your python code. I imagine if you put this under a profiler, a good chunk of time would be spent there. Also, inserting new elements is really going to be slow at this size. I would look into TreeSet as suggested.

AdamC
he starts recording the time after creating hashset
Pyrolistical
good point;) It will still affect insert time though.
AdamC
Why would insert time be slow? There's potentially a cache miss there, but you are probably going to get multiple cache misses with a TreeSet.
Tom Hawtin - tackline
+7  A: 

It has generally been my experience that python programs run faster than java programs, despite the fact that java is a bit "lower level" language. Incidentally, both languages are compiled into byte code (that's what those .pyc files are -- you can think of them as kind of like .class files). Both languages are byte-code interpreted on a virtual stack machine.

You would expect python to be slower at things like, for example, a.b. In java, that a.b will resolve into a dereference. Python, on the other hand, has to do one or more hash table lookups: check the local scope, check the module scope, check global scope, check builtins.

On the other hand, java is notoriously bad at certain operations such as object creation (which is probably the culprit in your example) and serialization.

In summary, there's no simple answer. I wouldn't expect either language to be faster for all code examples.

Correction: several people have pointed out that java isn't so bad at object creation any more. So, in your example, it's something else. Perhaps it's autoboxing that's expensive, perhaps python's default hashing algorithm is better in this case. In my practical experience, when I rewrite java code in python, I always see a performance increase, but that could be as much due to the rewrite itself as to the language, since rewriting in general tends to improve performance.

bad object creation is no longer a huge issue after java 1.5. while it's still the "slowest" it's way faster now. serialization still does suck
Pyrolistical
Dross. For the most part Java gets compiled into optimised machine code, which Python generally does not. Object creation is incredibly fast in Java (faster than pretty much any C++ implementation). (Particularly on Azul hardware!)
Tom Hawtin - tackline
Actually, Java *used* to be bad at object creation, but hasn't been for several years now. This bit of old information lingers on, despite the fact that it's so untrue today.
Eddie
@Tom Hawtin, Faster than pretty much any C++ implementation? Really I would like to see a benchmark of this.
grepsedawk
Still, Java undeniably has to create objects for all the ints in this case. If Python manages to avoid that by having a special case for an all-int set, it probably explains the difference.
Michael Borgwardt
@grepsedawk, Tom said "Object Creation" is faster. See here: http://en.wikipedia.org/wiki/Java_performance#Program_speed
A_M
@A_M looking at the wikipedia reference #52 to dr dobbs journal. They are using an older glibc which has a potentially slower general purpose allocator than newer glibc implementations. Not to mention using a custom allocator for object creation will show you the real reason why tuned C++ code always beats Java. But maybe that is what Tom meant by "pretty much" even though custom allocators are very common in C++. Example : Boost.Pool
grepsedawk
Sorry I meant reference #35
grepsedawk
+7  A: 

Another possible explanation is that sets in Python are implemented natively in C code, while HashSets in Java are implemented in Java itself. So, sets in Python should be inherently much faster.

Clint Miller
Why would C be much faster than Java?? There might be implementation differences, for instance a common one is to shove small integers into references rather than actually have an object.
Tom Hawtin - tackline
Tom Hawtin - Because for most cases C IS faster than Java?
grepsedawk
"for most cases C IS faster than Java" --- is something that most Java programmers started to believe was false as of 1.5.
CDR
+1  A: 

How much memory did you start the JVM with? It depends? When I run the JVM with your program with 1 Gig of RAM:

$ java -Xmx1024M -Xms1024M -classpath . SpeedTest 
TOTAL TIME = 5.682
10000000
$ python speedtest.py 
total time = 4.48310899734
10000000

If I run the JVM with less memory, it takes longer ... considerably longer:

$ java -Xmx768M -Xms768M -classpath . SpeedTest 
TOTAL TIME = 6.706
10000000
$ java -Xmx600M -Xms600M -classpath . SpeedTest 
TOTAL TIME = 14.086
10000000

I think the HashSet is the performance bottleneck in this particular instance. If I replace the HashSet with a LinkedList, the program gets substantially faster.

Finally -- note that Java programs are initially interpreted and only those methods that are called many times are compiled. Thus, you're probably comparing Python to Java's interpreter, not the compiler.

Eddie
That's curious. The microbenchmark doesn't appear to create much reclaimable garbage, and you are setting -Xms. I assume you are not swapping.
Tom Hawtin - tackline
A gig of RAM? Dedicated to running the JVM on startup? Wow... ummmm... wow. VMWare doesn't even need that much to virtualize WinXP with good speed. Wow.
Jarret Hardie
It's only about 100 bytes per stored object, although I would hope it actually requires somewhat less.
Tom Hawtin - tackline
I assume the memory's effect on speed has something to do with the performance of the HashSet.
Eddie
Note: When I start the JVM with 1/2 Gig, it gets an OOM error when running. I didn't try to see how much RAM the Python implementation takes when it runs (although I'd imagine it's less).
Eddie
As you said yourself, the bottleneck is HashSet.add() - and the JIT gets plenty of opprtunities to compile that. That's definitely not the problem here.
Michael Borgwardt
@Michael Borgwardt: Good point.
Eddie
+4  A: 

I find benchmarks like this to be meaningless. I don't solve problems that look like the test case. It's not terribly interesting.

I'd much rather see a solution for a meaningful linear algebra solution using NumPy and JAMA. Maybe I'll try it and report back with results.

duffymo
You're right, benchmarks like this don't prove a lot when there are so many other factors. That makes your second point just as moot though; just because it works for one situation doesn't mean it's going to be any better for others.
Jon Cage
Benchmarking doing something sensible does make sense. Benchmarking nonsense, less so. (I don't know about NumPy and JAMA. Are they wrappers over C code? :)
Tom Hawtin - tackline
@Jon Cage - It's not moot if I'm interested in what I learn and I use problems that are representative of what I'd really do with the libraries.
duffymo
@Tom Hawtin - JAMA is pure Java. NumPy has an API for integrating with C, but I believe it's pure Python.
duffymo
+3  A: 

There's a number of issues here which I'd like to bring together.

First, if it's a program that you are only going to run once, does it matter if it takes an extra few seconds?

Secondly, this is just one microbenchmark. Microbenchmarks are pointless for comparing performance.

Startup has a number of issues.

The Java runtime is much bigger than Python so takes longer to load from disk and takes up more memory which may be important if you are swapping.

If you haven't set -Xms you may be running the GC only to resize the heap. Might as well have the heap properly sized at the start.

It is true that Java starts off interpreting and then compiles. Around 1,500 iterations for Sun client [C1] Hotspot and 10,000 for server [C2]. Server Hotspot will give you better performance eventually, but takes more memory. We may see client Hotspot use server compilation for very frequently executed code, for the best of both worlds. However, this should not usually be a question of seconds.

Most importantly you may be creating two objects per iteration. For most code, you wouldn't be creating these tiny objects for such a proportion of the execution. TreeSet may be better on number of objects score, with 6u14 and Harmony getting even better.

Python may possibly be winning by storing small integer objects in references instead of actually having an object. That is undoubtedly a good optimisation.

A problem with a lot of benchmarks is that you are mixing a lot of different code up in one method. You wouldn't write code you cared about like that, would you? So why are you performance-testing code which is unlike the code you would actually like to run fast?

Better data structure: Something like BitSet would seem to make sense (although that has synchronisation on it, which may or may not impact performance).

Tom Hawtin - tackline
+1. Valid points.
Emil H
A: 

The biggest issue is probably that what the given code measures is wall time -- what your watch measures -- but what should be measured to compare code runtime is process time -- the amount of time the cpu spends executing that particular code and not other tasks.

Autoplectic
Assuming the test was done in reasonable conditions and repeated, the two results should be roughly comparable.
Tom Hawtin - tackline
+5  A: 

Edit: A TreeSet might be faster for the real use case, depending on allocation patterns. My comments below deal only with this simplified scenario. However, I do not believe that it would make a very significant difference. The real issue lies elsewhere.

Several people here have recommended replacing the HashSet with a TreeSet. This sounds like very strange advice to me, since there's no way that a data structure with O(log n) insertion time is faster than an O(1) structure that preallocates enough buckets to store all the elements.

Here's some code to benchmark this:

import java.util.*;
class SpeedTest
{    
    public static void main(String[] args)
    {        
        long startTime;
        long totalTime;
        int iterations = 10000000;
        Set counts;

        System.out.println("HashSet:");
        counts = new HashSet((2*iterations), 0.75f);
        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++) {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());

        counts.clear();

        System.out.println("TreeSet:");
        counts = new TreeSet();
        startTime = System.currentTimeMillis();
        for(int i=0; i<iterations; i++) {
            counts.add(i);
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
        System.out.println(counts.size());
    }
}

And here's the result on my machine:

$ java -Xmx1024M SpeedTest
HashSet:
TOTAL TIME = 4.436
10000000
TreeSet:
TOTAL TIME = 8.163
10000000

Several people also argued that boxing isn't a performance issue and that object creation is inexpensive. While it's true that object creation is fast, it's definitely not as fast as primitives:

import java.util.*;
class SpeedTest2
{    
    public static void main(String[] args)
    {        
        long startTime;
        long totalTime;
        int iterations = 10000000;

        System.out.println("primitives:");
        startTime = System.currentTimeMillis();
        int[] primitive = new int[iterations];
        for (int i = 0; i < iterations; i++) {
            primitive[i] = i;
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );

        System.out.println("primitives:");
        startTime = System.currentTimeMillis();
        Integer[] boxed = new Integer[iterations];
        for (int i = 0; i < iterations; i++) {
            boxed[i] = i;
        }
        totalTime = System.currentTimeMillis() - startTime;
        System.out.println("TOTAL TIME = "+( totalTime/1000f) );
    }
}

Result:

$ java -Xmx1024M SpeedTest2
primitives:
TOTAL TIME = 0.058
primitives:
TOTAL TIME = 1.402

Moreover, creating a lot of objects results in additional overhead from the garbage collector. This becomes significant when you start keeping tens of millions of live objects in memory.

Emil H
I said TreeSet may be better on the number of objects score. I did not say that it would be faster.
Tom Hawtin - tackline
Tom: It wasn't directed towards you. Didn't even see your post before I posted this. :)
Emil H
In my experience Hash* is slower with general use than Tree*. This benchmark is very good for Hash*, since all your objects are nicely even distributed. But in the real world you'll get clumping, which would lead to sub O(log n) performance.
Pyrolistical
Heh, I skipped over the two previous answers mentioning TreeSet. I only thought of it as a bit of an afterthought.
Tom Hawtin - tackline
Pyro: We don't know anything about the actual problem though, so it's hard to make a good recommendation about the choice of data structure. You make a good point, but your post didn't clarify that you meant general purpose usage rather than this very case.
Emil H
A: 

You can make the Java microbenchmark much faster, by adding just a simple little extra.

    HashSet counts = new HashSet((2*iterations), 0.75f);

becomes

    HashSet counts = new HashSet((2*iterations), 0.75f) {
        @Override public boolean add(Object element) { return false; }
    };

Simple, faster and gets the same result.

Tom Hawtin - tackline
+1. Very unhelpful, but a fun way to illustrate the problem with the benchmark. :)
Emil H
Possibly more helpful than the original microbenchmark.
Tom Hawtin - tackline
The benchmark is not the problem. It might not have God-like resolution, but I believe you will have a difficult time arguing that the python version is slower than the java version, as implemented. Let's just think of this as an exercise in data structures. Why might one set implementation be faster than the other?
blaine
It's an exercise in pointlessness.
Tom Hawtin - tackline
Why is it an exercise in pointlessness? The production code could execute for hundreds of seconds and I am trying to figure out a way to shave off time. If someone can help me shave off 10%, I will happily take it. A better understanding of how these data structures work is just a bonus.
blaine
It's not production code. It's nothing like production code. It behaves nothing like production code. It is indeed misleading as to what production code will do.
Tom Hawtin - tackline
+1  A: 

Just a stab in the dark here, but some optimizations that Python is making that Java probably isn't:

  • The range() call in Python is creating all 10000000 integer objects at once, in optimized C code. Java must create an Integer object each iteration, which may be slower.
  • In Python, ints are immutable, so you can just store a reference to a global "42", for example, rather than allocating a slot for the object. I'm not sure how Java boxed Integer objects compare.
  • Many of the built-in Python algorithms and data structures are rather heavily optimized for special cases. For instance, the hash function for integers is simply the identity function. If Java is using a more "clever" hash function, this could slow things down quite a bit. If most of your time is spent in data structure code, I wouldn't be surprised at all to see Python beat Java, given the amount of effort that has been spent over the years hand-tuning the Python C implementation.
Rick Copeland
Integers are immutable in Java as well. Those in at least [-128, 127] will be shared. Those outside will be slower to allocate because of the check to see if they are inside. HashSet will rehash the hashCode from elements to make sure that even implementations that don't put their randommost bits near the bottom perform well. Vastly more effort has been put into making Java perform than Python, and it's also easier to do.
Tom Hawtin - tackline
I realize that ints are immutable in Java, but I was under the impression that Integer objects were mutable. As to the "vastly more effort", this may be true in some cases, but since many of Java's collections are written in Java and then (maybe) JIT compiled, where Python's are written in C and ahead-of-time compiled, Java's not really competing against Python here; it's competing against C -- and according to this microbenchmark, losing.
Rick Copeland
I should mention that the use of range() is actually slowing the Python down quite a bit. Using xrange() instead (which is closer to the spirit of what the Java is doing) will skip the creation of the temporary list and (in my tests on Python 2.6.2) yields a speedup of 15-20%.
Rick Copeland
A: 

You might want to see if you can "prime" the JIT compiler into compiling the section of code you're interested in, by perhaps running it as a function once beforehand and sleeping briefly afterwards. This might allow the JVM to compile the function down to native code.
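A minimal sketch of that priming idea (a hypothetical harness; the class and method names are invented for illustration): run the same workload once untimed so HotSpot can compile the hot loop, then report only the second pass.

```java
import java.util.HashSet;

// Hypothetical warm-up harness: the first call gives the JIT a chance to
// compile the loop and HashSet.add; only the second call's time is printed.
class WarmupSketch {
    static long timedRun(int iterations) {
        HashSet<Integer> counts = new HashSet<Integer>(2 * iterations, 0.75f);
        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            counts.add(i);
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        timedRun(1000000);  // warm-up pass, result discarded
        System.out.println("warm run: " + timedRun(1000000) + " ms");
    }
}
```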

Paul Fisher
+12  A: 

I suspect the reason is that Python uses the integer value itself as its hash, and the hashtable-based implementation of set uses that value directly. From the comments in the source:

This isn't necessarily bad! To the contrary, in a table of size 2**i, taking the low-order i bits as the initial table index is extremely fast, and there are no collisions at all for dicts indexed by a contiguous range of ints. The same is approximately true when keys are "consecutive" strings. So this gives better-than-random behavior in common cases, and that's very desirable.

This microbenchmark is somewhat of a best case for Python because it results in exactly zero hash collisions. If Java's HashSet rehashes the keys, on the other hand, it has to perform that additional work and also gets much worse behavior with collisions.

If you store the range(iterations) in a temporary variable and do a random.shuffle on it before the loop the runtime is more than 2x slower even if the shuffle and list creation is done outside the loop.
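For comparison, on the Java side Integer.hashCode is the identity function as well; the extra step is the supplemental bit-spreading hash that HashMap applies before indexing. A sketch (the spreader below follows the Java 6 HashMap source; treat it as illustrative):

```java
// Integer.hashCode is the identity in Java too; the additional work is the
// supplemental hash HashMap runs on every key (this is the Java 6 version).
class HashSpreadSketch {
    static int supplementalHash(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    public static void main(String[] args) {
        System.out.println(Integer.valueOf(42).hashCode()); // 42 (identity)
        System.out.println(supplementalHash(42));           // bits spread before bucket indexing
    }
}
```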

Ants Aasma
+1 I'm pretty sure this is exactly what makes Python faster here.
Michael Borgwardt
+19  A: 

You're not really testing Java vs. Python, you're testing java.util.HashSet using autoboxed Integers vs. Python's native set and integer handling.

Apparently, the Python side in this particular microbenchmark is indeed faster.

I tried replacing HashSet with TIntHashSet from GNU trove and achieved a speedup factor between 3 and 4, bringing Java slightly ahead of Python.

The real question is whether your example code is really as representative of your application code as you think. Have you run a profiler and determined that most of the CPU time is spent in putting a huge number of ints into a HashSet? If not, the example is irrelevant. Even if the only difference is that your production code stores other objects than ints, their creation and the computation of their hashcode could easily dominate the set insertion (and totally destroy Python's advantage in handling ints specially), making this whole question pointless.

Michael Borgwardt
+2  A: 

If you really want to store primitive types in a set, and do heavy work on it, roll your own set in Java. The generic classes are not fast enough for scientific computing.

As Ants Aasma mentions, Python bypasses hashing and uses the integer directly. Java creates an Integer object (autoboxing), and then casts it to an Object (in your implementation). This object must be hashed, as well, for use in a hash set.

For a fun comparison, try this:

Java

import java.util.HashSet;
class SpeedTest
{ 
  public static class Element {
    private int m_i;
    public Element(int i) {
      m_i = i;
    }
  }

  public static void main(String[] args)
  {        
    long startTime;
    long totalTime;
    int iterations = 1000000;
    HashSet<Element> counts = new HashSet<Element>((int)(2*iterations), 0.75f);

    startTime = System.currentTimeMillis();
    for(int i=0; i<iterations; ++i)
    {
      counts.add(new Element(i));
    }
    totalTime = System.currentTimeMillis() - startTime;
    System.out.println("TOTAL TIME = "+( totalTime/1000f) );
    System.out.println(counts.size());
  }
}

Results:

$java SpeedTest
TOTAL TIME = 3.028
1000000

$java -Xmx1G -Xms1G SpeedTest
TOTAL TIME = 0.578
1000000

Python

#!/usr/bin/python
import time
import sys

class Element(object):
  def __init__(self, i):
    self.num = i

def main(args):    
    iterations = 1000000
    counts = set()
    startTime = time.time();    
    for i in range(0, iterations):
        counts.add(Element(i))
    totalTime = time.time() - startTime
    print 'total time =',totalTime
    print len(counts)


if __name__ == "__main__":
  main(sys.argv)

Results:

$./speedtest.py 
total time = 20.6943161488
1000000

How's that for 'python is faster than java'?

gnud
+6  A: 

I'd like to dispel a couple myths I saw in the answers:

Java is compiled, yes, to bytecode, but ultimately to native code in most runtime environments. People who say C is inherently faster aren't telling the entire story; I could make a case that byte-compiled languages are inherently faster, because the JIT compiler can make machine-specific optimizations that are unavailable to way-ahead-of-time compilers.

A number of things that could make the differences are:

  • Python's hash tables and sets are the most heavily optimized objects in Python, and Python's hash function is designed to return similar results for similar inputs: hashing an integer just returns the integer, guaranteeing that you will NEVER see a collision in a hash table of consecutive integers in Python.
  • A secondary effect of the above is that the Python code will have high locality of reference as you'll be accessing the hash table in sequence.
  • Java does some fancy boxing and unboxing of integers when you add them to collections. On the bonus side, this makes arithmetic way faster in Java than Python (as long as you stay away from bignums) but on the downside it means more allocations than you're used to.
Dietrich Epp
I'm pretty sure compilers like gcc have no trouble making machine specific optimizations. Look up the march flag. Maybe you mean something else though like profile guided optimization? But that is also available in gcc.
grepsedawk
But when you compile a program using GCC you typically don't enable all possible features, because some of your users will probably be using old CPUs that don't have those features. A JIT only has to produce machine code for one machine, and it knows exactly which CPU features it supports.
Jason Orendorff
+1  A: 

A few changes for faster Python.

#!/usr/bin/python
import time
import sys

import psyco                 #<<<<  
psyco.full()

class Element(object):
    __slots__=["num"]        #<<<<
    def __init__(self, i):
        self.num = i

def main(args):    
    iterations = 1000000
    counts = set()
    startTime = time.time();
    for i in xrange(0, iterations):
        counts.add(Element(i))
    totalTime = time.time() - startTime
    print 'total time =',totalTime
    print len(counts)

if __name__ == "__main__":
  main(sys.argv)

Before

(env)~$ python speedTest.py
total time = 8.82906794548
1000000

After

(env)~$ python speedTest.py
total time = 2.44039201736
1000000

Now some good old cheating and ...

#!/usr/bin/python
import time
import sys
import psyco

psyco.full()

class Element(object):
    __slots__=["num"]
    def __init__(self, i):
        self.num = i

def main(args):    
    iterations = 1000000
    counts = set()
    elements = [Element(i) for i in range(0, iterations)]
    startTime = time.time();
    for e in elements:
        counts.add(e)
    totalTime = time.time() - startTime
    print 'total time =',totalTime
    print len(counts)

if __name__ == "__main__":
  main(sys.argv)

(env)~$ python speedTest.py
total time = 0.526521921158
1000000
A: 

Well, if you're going to tune the Java program, you might as well tune the Python program too.

>>> import timeit
>>> timeit.Timer('x = set()\nfor i in range(10000000):\n    x.add(i)').repeat(3, 1)
[2.1174559593200684, 2.0019571781158447, 1.9973630905151367]
>>> timeit.Timer('x = set()\nfor i in xrange(10000000):\n    x.add(i)').repeat(3, 1)
[1.8742368221282959, 1.8714439868927002, 1.869229793548584]
>>> timeit.Timer('x = set(xrange(10000000))').repeat(3, 1)
[0.74582195281982422, 0.73061800003051758, 0.73396801948547363]

Just using xrange makes it about 8% faster on my machine. And the expression set(xrange(10000000)) builds exactly the same set, but 2.5x faster (from 1.87 seconds to 0.74).

I like how tuning a Python program makes it shorter. :) But Java can do the same trick. As everyone knows, if you want a dense set of smallish integers in Java, you don't use a hash table. You use a java.util.BitSet!

BitSet bits = new BitSet(iterations);

startTime = System.currentTimeMillis();
bits.set(0, iterations, true);
totalTime = System.currentTimeMillis() - startTime;
System.out.println("TOTAL TIME = "+( totalTime/1000f) );
System.out.println(bits.cardinality());

That should be fairly quick. Unfortunately I don't have the time to test it right now.

Jason Orendorff