Any ranking of data structures will be at least partially tied to problem context. It would help to learn how to analyze the time and space performance of algorithms. Typically, "big O notation" is used; e.g., binary search runs in O(log n) time, which means the time to search for an element grows in proportion to the log (base 2, implicitly) of the number of elements. Intuitively, since every step discards half of the remaining data as irrelevant, doubling the number of elements adds only one more step. (Binary search scales rather well.) Space performance concerns how memory use grows for larger data sets.

Also, note that big-O notation ignores constant factors - for smaller data sets, an O(n^2) algorithm may still beat an O(n log n) algorithm that has a higher constant factor, since more sophisticated algorithms often do extra setup and bookkeeping work before they pay off.
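To make the binary search intuition concrete, here's a minimal sketch in OCaml (the function name and test values are just illustrative): each recursive call discards half of the remaining range, which is exactly where the O(log n) bound comes from.

```ocaml
(* Binary search over a sorted int array.  Each call halves the remaining
   range [lo, hi], so an array of n elements needs about log2 n steps. *)
let binary_search arr target =
  let rec go lo hi =
    if lo > hi then None
    else
      let mid = lo + (hi - lo) / 2 in
      if arr.(mid) = target then Some mid
      else if arr.(mid) < target then go (mid + 1) hi
      else go lo (mid - 1)
  in
  go 0 (Array.length arr - 1)

let () =
  (* A million elements means only ~20 halving steps at most. *)
  let sorted = Array.init 1_000_000 (fun i -> i * 2) in
  match binary_search sorted 123_456 with
  | Some i -> Printf.printf "found at index %d\n" i
  | None -> print_endline "not found"
```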
Besides time and space, other characteristics matter too: whether a data structure keeps its contents sorted (trees and skip lists do, hash tables don't), whether it supports persistence (an updated binary tree can share pointers with older versions, while a hash table is modified in place), and so on.
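Here's a small sketch of that persistence distinction using OCaml's standard Map and Hashtbl (assuming OCaml 4.07+ for `Map.Make (Int)`; the names `IntMap`, `m1`, `m2`, and `h` are mine):

```ocaml
(* Persistent vs. in-place updates: adding to a Map returns a new map that
   shares structure with the old one; a Hashtbl is mutated in place. *)
module IntMap = Map.Make (Int)

let () =
  (* Persistent: m1 is unchanged after building m2 from it. *)
  let m1 = IntMap.singleton 1 "one" in
  let m2 = IntMap.add 2 "two" m1 in
  Printf.printf "m1 has %d binding(s), m2 has %d\n"
    (IntMap.cardinal m1) (IntMap.cardinal m2);

  (* Ephemeral: the same table is modified, so only one version exists. *)
  let h = Hashtbl.create 16 in
  Hashtbl.add h 1 "one";
  Hashtbl.add h 2 "two";
  Printf.printf "h has %d binding(s)\n" (Hashtbl.length h)
```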
While you'll need to learn the behavior of several data structures before you can compare them, one way to develop a sense for why they differ in performance is to study a few closely. I'd suggest comparing singly-linked lists, binary search trees, and skip lists, all of which are relatively simple but have very different characteristics. Think about how much work it takes to find a value, add a new value, find all values in order, etc.
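As a rough sketch of two of those structures in OCaml (skip lists left out to keep it short; the type and function names are mine), compare how much of the data each `mem` has to touch, and what it takes to get the values out in order:

```ocaml
(* A singly-linked list and an (unbalanced) binary search tree, side by side.
   list_mem may walk every element (O(n)); tree_mem discards one whole
   subtree at each step (O(log n) while the tree stays balanced). *)
type tree = Leaf | Node of tree * int * tree

(* Linked list: find and insert. *)
let rec list_mem x = function
  | [] -> false
  | y :: rest -> x = y || list_mem x rest

let list_insert x l = x :: l  (* O(1), but keeps nothing sorted *)

(* BST: find, insert, and in-order traversal. *)
let rec tree_mem x = function
  | Leaf -> false
  | Node (l, y, r) ->
    if x = y then true
    else if x < y then tree_mem x l
    else tree_mem x r

let rec tree_insert x = function
  | Leaf -> Node (Leaf, x, Leaf)
  | Node (l, y, r) as t ->
    if x = y then t
    else if x < y then Node (tree_insert x l, y, r)
    else Node (l, y, tree_insert x r)

(* In-order traversal yields the values already sorted; a plain list would
   need an explicit sort first. *)
let rec to_sorted_list = function
  | Leaf -> []
  | Node (l, y, r) -> to_sorted_list l @ (y :: to_sorted_list r)
```

Tracing `tree_insert` by hand for a few values also shows why this simple tree can degrade to list-like O(n) behavior when the inserts arrive already sorted - which is part of what balanced trees and skip lists are there to avoid.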
There are various texts people recommend on analyzing algorithm and data structure performance, but what really made data structures click for me was learning OCaml. Dealing with complex data structures is ML's strong suit, and their behavior is much clearer when you don't also have to juggle pointers and memory management, as you do in C. (Learning OCaml just to understand data structures is almost certainly the long way around, though. :) )