bisect.insort is a little faster, where applicable, than append-then-sort (unless you can add several elements before you next need the list to be sorted again) -- measuring as usual on my laptop (a speedier machine will of course be faster across the board, but the ratio should remain roughly constant):
$ python -mtimeit -s'import random, bisect; x=range(20)' 'y=list(x); bisect.insort(y, 22*random.random())'
1000000 loops, best of 3: 1.99 usec per loop
vs
$ python -mtimeit -s'import random, bisect; x=range(20)' 'y=list(x); y.append(22*random.random()); y.sort()'
100000 loops, best of 3: 2.78 usec per loop
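In plain code, the two approaches being timed amount to this (a minimal sketch mirroring the timeit setup; the variable names are just illustrative):

import bisect, random

y = list(range(20))          # already-sorted list, as in the timeit setup

# insort binary-searches for the insertion point, then inserts once:
bisect.insort(y, 22 * random.random())

# append-then-sort leans on timsort's speed with nearly-sorted input,
# but still does more work per single insertion:
y.append(22 * random.random())
y.sort()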
How much you care about this difference, of course, depends on how critical a bottleneck this operation is for your application -- there are of course situations where even this fraction of a microsecond makes all the difference, though they are the exception, not the rule.
The bisect module is also not as flexible and configurable -- you can easily pass your own custom comparator to sort (although if you can possibly put it in the form of a key= argument you're strongly advised to do that; in Python 3, only key= remains and cmp= is gone, because the performance just couldn't be made good), while bisect rigidly uses built-in comparisons, so you'd have to wrap your objects into wrappers implementing __cmp__ or __lt__ to your liking, which also has important performance implications.
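For instance, here's a minimal sketch of that wrapper trick (the Key name and the case-insensitive ordering are just illustrative assumptions) -- since bisect compares with <, defining __lt__ on the wrapper is enough in Python 3:

import bisect

class Key:
    """Wrap a value so bisect's built-in < comparison applies our ordering."""
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        return self.value.lower() < other.value.lower()

words = [Key(w) for w in ["Apple", "banana", "Cherry"]]   # sorted case-insensitively
bisect.insort(words, Key("BLUEBERRY"))
print([k.value for k in words])   # ['Apple', 'banana', 'BLUEBERRY', 'Cherry']

(Note that recent Python versions, 3.10 and later, added a key= argument to the bisect functions themselves, which removes the need for this wrapper.)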
In your shoes, I'd start with the append-then-sort approach, and switch to the less handy bisect approach only if profiling showed that the performance hit was material. Remember Knuth's (and Hoare's) famous quote about premature optimization being the root of all evil, and Kent Beck's almost-as-famous "make it work, make it right, make it fast" too!-)