Consider an array versus a hashtable where the keys are just the integral indexes of a list.
Their average-case insertion, lookup, and removal are all O(1) (constant time). I understand that an array can give you some low-level wins in cache locality, and that the hashtable operations carry a marginal (mostly constant) overhead, but the hashtable gets you sparseness for free, which in some applications is a big win.
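To make the trade-off concrete, here's a rough sketch of the two integer-keyed storage strategies I have in mind; the class names and the explicit-growth loop are just for illustration, not any particular library's API:

```typescript
// Both stores expose the same integer-keyed interface; the difference is what
// the memory cost is proportional to.
interface IntKeyedStore<T> {
  set(index: number, value: T): void;
  get(index: number): T | undefined;
}

// Dense: plain indexed access and good cache locality, but memory grows with
// the largest index ever written.
class DenseStore<T> implements IntKeyedStore<T> {
  private slots: (T | undefined)[] = [];

  set(index: number, value: T): void {
    // Grow the backing store explicitly so the cost model is visible.
    while (this.slots.length <= index) this.slots.push(undefined);
    this.slots[index] = value;
  }

  get(index: number): T | undefined {
    return this.slots[index];
  }
}

// Sparse: constant hashing overhead per operation, but memory grows only with
// the number of live entries.
class SparseStore<T> implements IntKeyedStore<T> {
  private entries = new Map<number, T>();

  set(index: number, value: T): void {
    this.entries.set(index, value);
  }

  get(index: number): T | undefined {
    return this.entries.get(index);
  }
}

// For indexes 0..999 the two behave almost identically; a single write at
// index 10_000_000 makes the dense store allocate ten million slots while the
// sparse store holds exactly one entry.
```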
What other significant (or small) contrasts am I missing?
Context: I get into a discussion about this sometimes when interviewing programming candidates. Usually the framing is "how would you implement the JavaScript array type inside the JS VM?" For densely packed data I side with a native array, but I want better reasoning than the intuition that it "seems less like overkill".
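For reference, here's a toy sketch (not any real engine's code) of the hybrid answer I tend to argue for: keep a dense backing store while writes stay packed, and fall back to a hash map once the array becomes sparse enough. The `SPARSE_GAP` threshold and all the names are made up for illustration:

```typescript
// Toy sketch of a VM-internal Array representation: dense while packed,
// hash-map-backed once it becomes sparse. SPARSE_GAP is an invented heuristic.
const SPARSE_GAP = 1024;

class VMArray {
  private dense: unknown[] | null = [];                // used while packed
  private sparse: Map<number, unknown> | null = null;  // used once sparse
  length = 0;

  set(index: number, value: unknown): void {
    this.length = Math.max(this.length, index + 1);

    if (this.dense !== null) {
      // Stay dense as long as the write lands near the current end.
      if (index < this.dense.length + SPARSE_GAP) {
        while (this.dense.length <= index) this.dense.push(undefined);
        this.dense[index] = value;
        return;
      }
      // The hole is too big: migrate existing elements to a hash map.
      this.sparse = new Map(
        this.dense.map((v, i) => [i, v] as [number, unknown])
      );
      this.dense = null;
    }

    this.sparse!.set(index, value);
  }

  get(index: number): unknown {
    return this.dense !== null ? this.dense[index] : this.sparse!.get(index);
  }
}
```

That, at least, makes the intuition explicit: the array is the fast path and the hashtable is the escape hatch for sparseness, rather than one being strictly "overkill".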