Assuming a collection of objects, each of which needs to be drawn at a specific layer, at what point (if ever) would it be better to sort the objects by layer rather than looping over the collection multiple times and drawing one layer per pass? More importantly, how would you arrive at that conclusion? Bonus points for the sort algorithm you would use if you did sort.

// Option 1: one pass per layer, drawing only that layer's objects
for (let i = 0; i <= topLayer; i++) {
  for (const obj of collection) {
    if (obj.layer === i) {
      obj.draw();
    }
  }
}

/* vs. */

// Option 2: sort once by layer, then draw in order
function layerCompare(obj1, obj2) {
  return obj1.layer - obj2.layer;  // sort() expects a number, not a boolean
}

collection.sort(layerCompare);

for (const obj of collection) {
  obj.draw();
}
+3  A: 

If you loop through every layer and every object, that is O(m*n), where m is the number of layers and n the number of objects. However, if you sort the objects by layer ahead of time with something like quicksort, you can sort them in O(n*log(n)) and then draw them in O(n), for a total complexity of O(n*log(n) + n) = O(n*log(n)).

So in theory, it's always better to sort them. In practice, you would have to benchmark.

EDIT: On second inspection, the cutoff is whether m < log(n). If the number of layers is less than the log of the number of objects, then you should do the double loop, otherwise sort them.
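To put rough numbers on the cutoff (figures made up for illustration): with n = 1,000,000 objects, log(n) is about 20 (base 2). A scene with 8 layers favors the double loop (roughly 8 million layer checks versus about 20 million comparisons for the sort), while a scene with 100 layers favors sorting, constant factors aside.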

twolfe18
Thanks for the big O break down. This is exactly what I was looking for. In most cases the objects will heavily outnumber the layers which is why I was considering the double loop.
Nick
You can't just divide big-O and ignore constant factors.
Pete Kirkham
Well, you can talk about constant factors (and they can be very important), but as far as big-O goes, you can divide out constants. http://en.wikipedia.org/wiki/Big_O_notation
twolfe18
You divide out the constants to get the big-O growth. Your m < log(n) statement is about absolute time, not growth, so you have divided two different constants out on each side. You've gone from a < Ka*m*n and b < Kb*n*log(n) to a < b being implied by m < log(n), rather than a < b being implied by Ka*m < Kb*log(n). The big-O growth limits say nothing about whether one or the other is faster in absolute terms.
Pete Kirkham
I am not assuming m (the number of layers) is a constant. When I say m < log(n), m and n are both variables, but you might know for a fact that over the set of all possible m's and n's, m < log(n), depending on the circumstances. If you knew ahead of time how many layers were being used (or even that the number of layers is finite), then you would remove the m term.
twolfe18
Use a stable sort to maintain z-order for objects in the same layer.
FogleBird
+2  A: 

I would maintain a separate collection for each layer (and not have layer as a property of each object), rendering the question moot. Reassigning an object to a different layer (with an add and a remove) would only be (roughly) twice as expensive as changing an object's layer property, and you would completely avoid the cost of either a sort or a bunch of iterations to get the objects in each layer.
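A rough sketch of that idea (in Java, with a made-up Drawable interface and one ArrayList per layer; the names are illustrative, not from the question):

import java.util.ArrayList;
import java.util.List;

interface Drawable {
    void draw();
}

class LayeredScene {
    // One list per layer; index 0 is drawn first (back-most).
    private final List<List<Drawable>> layers = new ArrayList<>();

    // Grow the layer list on demand.
    private List<Drawable> layer(int i) {
        while (layers.size() <= i) {
            layers.add(new ArrayList<>());
        }
        return layers.get(i);
    }

    void add(Drawable obj, int layerIndex) {
        layer(layerIndex).add(obj);
    }

    // Reassigning an object to a different layer is a remove plus an add.
    void move(Drawable obj, int fromLayer, int toLayer) {
        layer(fromLayer).remove(obj);
        layer(toLayer).add(obj);
    }

    // Drawing needs no sort and no per-layer scan over the whole collection.
    void drawAll() {
        for (List<Drawable> layer : layers) {
            for (Drawable obj : layer) {
                obj.draw();
            }
        }
    }
}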

MusiGenesis
That's a good alternative and makes a lot of sense. It would require creating new collections if the number of layers needs to grow, but that is likely cheaper than the double loop or the sort.
Nick
What language is this? Collections are pretty cheap in .Net.
MusiGenesis
+3  A: 

If layers don't come and go very often in your code, it makes vastly more sense to keep them sorted at all times. One trivial way to accomplish that is to make a layer itself a drawable object that can contain other objects; the layering is then built into the recursive nature of the layer object itself.
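A minimal sketch of that composite idea (Java, with a made-up Drawable interface): a Layer is itself a Drawable that simply draws its children in order, so nesting layers gives the recursion described above.

import java.util.ArrayList;
import java.util.List;

interface Drawable {
    void draw();
}

// A layer is just a drawable that contains other drawables,
// so layering falls out of drawing children in insertion order.
class Layer implements Drawable {
    private final List<Drawable> children = new ArrayList<>();

    void add(Drawable child) {
        children.add(child);
    }

    @Override
    public void draw() {
        for (Drawable child : children) {
            child.draw();
        }
    }
}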

Alternatively, you could just have a list of layers, with each layer being a single list of objects.

plinth
+1  A: 

The big question is how smart your sort can be. How often do items get added, removed, or change layers? Insertion sort and bubble sort (generally considered poor performers) work very well on almost-sorted data.
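For example, an insertion sort keyed on the layer field might look like this (Java; the Sprite type and its fields are made up for illustration). It is O(n^2) in the worst case but close to O(n) when the array is almost sorted from the previous frame, and it is stable, so z-order within a layer is preserved.

// Hypothetical object with a layer field (not from the question).
class Sprite {
    int layer;
    void draw() { /* render this object */ }
}

class LayerSort {
    // Insertion sort by layer: cheap when the input is already nearly sorted.
    static void sortByLayer(Sprite[] objs) {
        for (int i = 1; i < objs.length; i++) {
            Sprite current = objs[i];
            int j = i - 1;
            while (j >= 0 && objs[j].layer > current.layer) {
                objs[j + 1] = objs[j];  // shift objects on higher layers right
                j--;
            }
            objs[j + 1] = current;
        }
    }
}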

The other question is how much work do you save by sorting? Your rendering function is great for a small number of layers, but gets worse the more layers you have. From a complexity standpoint, you need to consider both the number of layers and the number of items, and possibly the distribution per layer.

If you have a small (small being relative, of course) number of layers, it might just make sense to have a separate collection (probably a linked list) for each layer, and move an object to the appropriate layer's collection when its layer changes.

Dolphin
+1  A: 

If the number of layers is fixed, you can use pigeonhole sort, which is an O(n) algorithm. For example, if there are 256 layers numbered 0 through 255, create an array List<Obj>[] a of size 256, then for each object obj in your collection perform a[obj.layer].add(obj). Finally, iterate through the array back to front and draw your objects (if layer 0 is the back: for (int i = 0; i < a.length; i++) { for (Obj obj : a[i]) { draw(obj); } }).
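Spelled out as a self-contained sketch (Java; the Obj type is hypothetical, and a List<List<Obj>> stands in for the List<Obj>[] array, since Java disallows generic array creation):

import java.util.ArrayList;
import java.util.List;

class Obj {
    int layer;                      // 0 = back-most layer
    void draw() { /* render this object */ }
}

class PigeonholeRenderer {
    static final int NUM_LAYERS = 256;

    // O(n): drop each object into its layer's bucket, then draw back to front.
    static void drawAll(List<Obj> collection) {
        List<List<Obj>> buckets = new ArrayList<>(NUM_LAYERS);
        for (int i = 0; i < NUM_LAYERS; i++) {
            buckets.add(new ArrayList<>());
        }
        for (Obj obj : collection) {
            buckets.get(obj.layer).add(obj);
        }
        for (int i = 0; i < NUM_LAYERS; i++) {   // layer 0 drawn first
            for (Obj obj : buckets.get(i)) {
                obj.draw();
            }
        }
    }
}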

But I agree that maintaining these lists is better than regenerating them all the time. The simplest approach is to store your objects in a set data structure ordered by layer that allows in-order iteration (for example, a TreeSet in Java).

Martin Hock
+1  A: 
Pete Kirkham