Which models of algorithm running time exist?
We all expect mergesort to be faster than bubblesort, and note that mergesort makes O(n log n) comparisons vs. O(n²) for bubblesort.
For other algorithms, you count operations other than comparisons and swaps, such as pointer dereferences, array lookups, arithmetic on fixed-size integers, etc.
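To make "counting operations" concrete, here is a minimal sketch (Python is my choice here; the question itself is language-agnostic) that instruments bubblesort and mergesort to count comparisons and swaps directly instead of timing them:

```python
import random

def bubble_sort(a, counter):
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            counter["compares"] += 1          # the unit of cost in this model
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                counter["swaps"] += 1
    return a

def merge_sort(a, counter):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter["compares"] += 1              # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [random.randrange(10**6) for _ in range(1000)]
for sort in (bubble_sort, merge_sort):
    counts = {"compares": 0, "swaps": 0}
    sort(data, counts)
    print(sort.__name__, counts)
```

For n = 1000 this reports roughly n²/2 comparisons for bubblesort and roughly n log n for mergesort, which is exactly the prediction the comparison-counting model makes.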
What other ways to model execution time are there?
One I know of myself is counting the number of blocks read from and written to disk; see my answer to http://stackoverflow.com/questions/941283/when-does-big-o-notation-fail for a lengthy description.
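As a toy illustration of that model (block size and data size below are made up), this sketch counts block reads for a sequential scan versus random probes, assuming only the most recently read block stays cached:

```python
import random

def blocks_read(access_pattern, block_size):
    """Count block reads for a sequence of element indexes, assuming only
    the most recently read block is cached (a crude stand-in for read-ahead)."""
    reads = 0
    cached_block = None
    for i in access_pattern:
        b = i // block_size
        if b != cached_block:
            reads += 1
            cached_block = b
    return reads

n, B = 1_000_000, 512                               # hypothetical: 512 elements per block
sequential = range(n)                               # a linear scan
scattered = [random.randrange(n) for _ in range(n)] # n random probes
print(blocks_read(sequential, B))  # ~ n / B
print(blocks_read(scattered, B))   # ~ n (almost every probe lands in a cold block)
```

Both access patterns touch n elements, so a plain operation count treats them as equal, yet the block counts differ by roughly a factor of B.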
Another is counting the number of cache misses. This expands on the I/O model by looking at all levels of the memory hierarchy.
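The same idea one level up: a crude fully associative LRU cache simulator (the cache size and line size below are invented numbers) that counts misses for row-major versus column-major traversal of an n×n matrix stored in row-major order:

```python
from collections import OrderedDict

def count_misses(addresses, cache_lines, line_size):
    """Simulate a fully associative LRU cache and count misses.
    cache_lines and line_size are hypothetical parameters."""
    cache = OrderedDict()
    misses = 0
    for addr in addresses:
        line = addr // line_size
        if line in cache:
            cache.move_to_end(line)      # refresh LRU order on a hit
        else:
            misses += 1
            cache[line] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return misses

n = 512
row_major = (i * n + j for i in range(n) for j in range(n))
col_major = (i * n + j for j in range(n) for i in range(n))
print(count_misses(row_major, cache_lines=64, line_size=8))  # ~ n*n / line_size
print(count_misses(col_major, cache_lines=64, line_size=8))  # ~ n*n (every access misses)
```

Both traversals perform exactly n² array lookups, so the RAM model cannot tell them apart; the miss counts differ by the line size.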
A third, for distributed algorithms (such as in secure multiparty computation) is to count the amount of data transmitted across the network (commonly measured in rounds of communication or number of messages).
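For the distributed setting, a toy simulation is enough to see that "number of messages" and "number of rounds" are genuinely different measures (the two summation protocols below are invented purely for illustration):

```python
def ring_sum(values):
    """Each party forwards a running total to its neighbour:
    n-1 messages, but also n-1 sequential rounds."""
    messages = rounds = 0
    total = values[0]
    for v in values[1:]:
        total += v
        messages += 1
        rounds += 1          # each hop must wait for the previous one
    return total, messages, rounds

def star_sum(values):
    """Every party sends its value to one coordinator:
    still n-1 messages, but they can all be sent in a single round."""
    messages = len(values) - 1
    rounds = 1
    return sum(values), messages, rounds

vals = list(range(1, 9))
print(ring_sum(vals))   # (36, 7, 7)
print(star_sum(vals))   # (36, 7, 1)
```

Which of the two counts matters depends on whether latency (rounds) or bandwidth (messages/bits) dominates.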
What other interesting things are there to count (and not count!) in order to predict the performance of an algorithm?
Also, how good are these models? As far as I've heard, cache-oblivious algorithms are competitive with I/O-efficient algorithms when the data is on disk, but not with cache-aware in-memory algorithms.
In particular: in which specific instances do these models mispredict relative performance? According to my own experiments, Fibonacci heaps don't speed up Dijkstra's shortest path (versus binary heaps) when the data is small enough to fit in memory.
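To make that experiment concrete in spirit, here is a sketch (the random sparse graph and its parameters are made up) that runs Dijkstra with a binary heap and counts the two priority-queue operations whose costs the Fibonacci-heap analysis distinguishes; on sparse graphs the decrease-key count is small, which is one plausible reason the asymptotic advantage fails to show up in wall-clock time:

```python
import heapq, random

def dijkstra_op_counts(adj, source):
    """Dijkstra with a binary heap (heapq), counting priority-queue operations.
    decrease-key is simulated by lazy re-insertion, the usual heapq idiom."""
    dist = {source: 0}
    pq = [(0, source)]
    ops = {"extract_min": 0, "insert": 0, "decrease_key": 0}
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        ops["extract_min"] += 1   # also counts pops of stale entries,
        if u in done:             # which a real decrease-key heap avoids
            continue
        done.add(u)
        for v, w in adj[u]:
            nd = d + w
            old = dist.get(v)
            if old is None or nd < old:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
                if old is None:
                    ops["insert"] += 1
                else:
                    ops["decrease_key"] += 1   # the operation Fibonacci heaps make O(1) amortized
    return dist, ops

# Hypothetical sparse directed graph: ~4 edges per vertex.
n, m = 10_000, 40_000
adj = {u: [] for u in range(n)}
for _ in range(m):
    u, v = random.randrange(n), random.randrange(n)
    adj[u].append((v, random.randint(1, 100)))

_, ops = dijkstra_op_counts(adj, 0)
print(ops)
```

When decrease-keys are rare, the Fibonacci heap's O(1) amortized decrease-key has little to amortize, while its larger constants and pointer-chasing (i.e., the cache misses the models above count) still cost you.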