In general, each successive version along the 2.*
line tends to be a bit faster than the previous one -- optimization and fine-tuning are a high priority for many contributors.
I don't know of any articles comparing performance across these releases: my advice is to identify the "hot spots" of your specific application by profiling (but surely you're already doing that, if you are "very concerned about performance"), then turn them into microbenchmarks to run with timeit
across all four versions.
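
A minimal sketch of that workflow, assuming a hypothetical application module myapp with a main() entry point (both names are just placeholders for your own code):

import cProfile
import pstats

import myapp  # hypothetical: your own application module

# Step 1: profile one representative run to locate the hot spots.
profiler = cProfile.Profile()
profiler.enable()
myapp.main()  # hypothetical entry point; exercise a typical workload here
profiler.disable()

# Step 2: list the 10 functions with the highest cumulative time;
# these are the candidates to extract into timeit microbenchmarks.
pstats.Stats(profiler).sort_stats('cumulative').print_stats(10)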
For example, suppose that ''.join-ing middling-length lists of shortish strings is known to be a hot spot in your application. Then you could measure:
$ python2.4 -mtimeit -s'x=[str(i) for i in range(99)]' '"".join(x)'
100000 loops, best of 3: 2.87 usec per loop
$ python2.5 -mtimeit -s'x=[str(i) for i in range(99)]' '"".join(x)'
100000 loops, best of 3: 3.02 usec per loop
$ python2.6 -mtimeit -s'x=[str(i) for i in range(99)]' '"".join(x)'
100000 loops, best of 3: 2.7 usec per loop
$ python2.7 -mtimeit -s'x=[str(i) for i in range(99)]' '"".join(x)'
100000 loops, best of 3: 2.12 usec per loop
python2.5 in this case does not follow the other releases' general trend: repeating the measurement confirms it's about 5% slower than python2.4 on this microbenchmark. 2.7, on the other hand, is surprisingly fast (a whopping 26%+ faster than 2.4). But that's for a specific build and platform (as well, of course, as a specific microbenchmark), which is why it's important for you to perform such measurements on the benchmarks and platforms/builds of your specific interest. (If you're willing to just accept the general observation that later releases tend to be faster, then you're not really "very" concerned about performance;-).
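
If you end up doing this for several hot spots, a tiny driver script can run the same timeit one-liner under each interpreter for you -- a rough sketch (the interpreter names are assumptions about what's on your PATH):

import subprocess

SETUP = 'x=[str(i) for i in range(99)]'
STMT = '"".join(x)'

# Run the identical microbenchmark under each interpreter of interest.
for interp in ('python2.4', 'python2.5', 'python2.6', 'python2.7'):
    print interp
    subprocess.call([interp, '-mtimeit', '-s', SETUP, STMT])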