views:

709

answers:

5

Developing a simple game for the iPhone, what gives a better performance?

  1. Using a lot of small (10x10 to 30x30 pixels) PNGs for my UIViews' backgrounds.
  2. Using one large PNG and clipping to the bounds of my UIViews.

My thought is that the first technique requires less memory per individual UIView, but complicates how the iPhone handles the large amount of images, as it tries to combine the images into a larger texture or tries to switch between all the small textures a lot.

The second technique, on the other hand, gives the iPhone the opportunity to handle just one large PNG, but unnecessarily increases the memory footprint every UIView has to carry.

  • Am I right that the iPhone handles the images the way I described?
  • So, which is the way to go?

Seeing the answers thus far, there is still doubt. There seems to be a trade-off between two parameters: complexity and CPU load. What would be the tipping point for deciding which technique to use?
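For reference, the second technique boils down to computing, for each view, which sub-rectangle of the one large PNG it should display. A minimal sketch of that arithmetic, assuming a hypothetical sheet where fixed-size tiles are laid out left-to-right, top-to-bottom (the names `tile_rect`, `TILE_W`, `SHEET_COLS` are illustrative, not from any API):

```c
#include <assert.h>

/* Hypothetical tile geometry: fixed-size tiles packed left-to-right,
   top-to-bottom in one large sprite sheet. */
typedef struct { int x, y, w, h; } Rect;

#define TILE_W 30
#define TILE_H 30
#define SHEET_COLS 8   /* tiles per row in the sheet */

/* Source rectangle of the i-th tile inside the large image. */
Rect tile_rect(int index) {
    Rect r;
    r.x = (index % SHEET_COLS) * TILE_W;
    r.y = (index / SHEET_COLS) * TILE_H;
    r.w = TILE_W;
    r.h = TILE_H;
    return r;
}
```

Each UIView would then draw only its own rectangle of the shared image; the per-view cost is this arithmetic plus the clipped blit, not a separate decoded PNG.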

+2  A: 

One large image gives you better performance. (Of course, that's assuming you have to render all the pictures anyway.)

Roman
Another advantage could be smaller file size: compression of the large PNG may exploit similarities between the smaller pictures. To maximize that, try to put similar images next to each other.
schnaader
Is the reason what I implied? I'm curious, because every UIView I assign that large PNG to somehow has to carry that extra weight with it, right?
Kriem
Actually, one large image would require more code to slice at runtime, and thus adds CPU cycles. The added CPU cycles will likely be more costly than just loading smaller images. Also, a large image containing multiple smaller images requires a lot more maintenance in both code and graphics.
Kris
I don't think clipping an image is very computationally expensive on the iPhone. IIRC, drawing a clipped image is actually cheaper than drawing the whole image. Also, for most apps, it seems like the iPhone is more memory-starved than it is CPU-starved, and the large image is more RAM-efficient.
Chuck
+2  A: 

One large image will remove any overhead associated with opening and manipulating many images in memory.

Greg B
+4  A: 

One large image mainly gives you more work and more headaches. It's harder to maintain, but is probably less RAM-intensive because there is only one structure + data in memory instead of many structures + data (though probably not by enough to notice).

Looking at the contents of .app bundles on regular Mac OS, it seems the generally approved method of storage is one file/resource per image.

Of course, this is assuming you're not getting the image resources from the web, where the bottleneck would be HTTP and its specified maximum of two concurrent requests.

Kris
+7  A: 

If you end up referring back to the same CGImageRef (for example by sharing a UIImage *), the image won't be loaded multiple times by the different views. This is the technique used by the videowall Core Animation demo at the WWDC 07 keynote. That's OSX code, but UIViews are very similar to CALayers.

The way Core Graphics handles images (from my observation anyway) is heavily tweaked for just-in-time loading and aggressive releasing when memory is tight.

With a large image, you could end up re-loading it at draw time if the memory for the decoded image that the CGImageRef points to has been reclaimed by the system.

What makes a difference is not how many images you have, but how often UIKit traverses your drawing code.

Both UIViews and Core Animation CALayers will only repaint if you ask them to (-setNeedsDisplay), and the bottleneck usually is your code plus transferring the rendered content into a texture for the graphics chip.

So my advice is to design your UIView layout so that portions that change together are updated at the same time, which turns into a single texture upload.
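The sharing described above (many views referring back to one decoded image) can be sketched without UIKit as a tiny name-keyed cache. This is a toy stand-in, assuming a fixed-size cache and illustrative names (`Image`, `shared_image`); in UIKit the cached thing would be the CGImageRef backing a shared UIImage:

```c
#include <string.h>

/* Toy stand-in for a decoded image; in UIKit this would be the
   CGImageRef backing a shared UIImage. */
typedef struct { char name[32]; int decode_count; } Image;

#define CACHE_SIZE 8
static Image cache[CACHE_SIZE];   /* zero-initialized static storage */
static int cache_used = 0;

/* Return the cached image if present; otherwise "decode" it once.
   No eviction or overflow handling -- this is only a sketch. */
Image *shared_image(const char *name) {
    for (int i = 0; i < cache_used; i++)
        if (strcmp(cache[i].name, name) == 0)
            return &cache[i];
    Image *img = &cache[cache_used++];
    strncpy(img->name, name, sizeof img->name - 1);
    img->decode_count = 1;  /* decoded exactly once, however many views use it */
    return img;
}
```

Every view asking for the same name gets the same pointer, so the expensive decode happens once rather than per view.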

duncanwilcox
this falls into the "great answer" category
Kris
Is that what the iPhone does? Turning piles of images into a single large texture? Does that mean the "large texture" can change over time?
Kriem
You can think of every UIView as its own texture (in OpenGL terms), yes. That's why moving/fading/scaling UIViews is fast (hardware accelerated), but modifying the contents of the UIViews is slow/expensive (CPU bound).
duncanwilcox
I see. Great help duncan! Thanks!
Kriem
+1  A: 

I would say there is no authoritative answer to this question. A single large image cuts down on (slow) flash access and gets the decoding done in one go, but it's memory-hungry and you do have to slice it up or mask it. A lot of smaller images gives you better control over what processing happens when.

You will have to implement one solution and test it. If it isn't fast enough and you can't optimise it, implement the other. I suggest starting with whichever version you personally find easier to reason about, because that will be the easiest to get right.

Roger Nolan