
Hi,

I'm trying to work out how to draw from a TexturePage using CoreGraphics.

Given a texture page (CGImageRef) which contains multiple 64x64 packed textures, how do I render sub-areas from that page onto the device context?

CGContextDrawImage seems to take only a destination rect. I noticed CGImageCreateWithImageInRect, but this creates a new image. I don't want a new image; I simply want to draw from the original image.

I'm sure this is possible, however I'm new to iPhone development.

Any help much appreciated.

Thanks

A: 

Edit: Wait a minute. Use CGImageCreateWithImageInRect. That is what it's for.

Here are the ideas I wrote up initially; I will leave them in case they're useful.

  • See if you can create a sub-image of some kind from another image, such that it borrows the original image's buffer (much like some substring implementations). Then you could draw using the sub-image.
  • It might be that Core Graphics is intended more for compositing than for image manipulation, so you may have to use separate image files in your application bundle. If the SDK docs don't particularly recommend what you're doing, then I suggest you go that route, since it seems the simplest and most natural way to do it.
  • You could use OpenGL ES instead, in which case you can specify the texture coordinates of the polygon vertices to select just that section of your big texture (see the sketch after this list).
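
As an illustration of that last point, here is a minimal sketch using the OpenGL ES 1.1 fixed-function API. It assumes (hypothetically) a 512x512 atlas of 64x64 tiles that is already bound as the current texture, with GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY enabled; the DrawTile helper and the atlas size are not from the question.

#include <OpenGLES/ES1/gl.h>

/* Hypothetical helper: draws the 64x64 tile at (col, row) of an assumed
   512x512 atlas as a quad with its top-left corner at (x, y).
   Depending on how the texture was uploaded, the t axis may need flipping. */
static void DrawTile(GLfloat x, GLfloat y, int col, int row)
{
    const GLfloat tile = 64.0f / 512.0f;   /* tile size in texture space */
    GLfloat s0 = col * tile, s1 = s0 + tile;
    GLfloat t0 = row * tile, t1 = t0 + tile;

    GLfloat verts[] = {
        x,         y,
        x + 64.0f, y,
        x,         y + 64.0f,
        x + 64.0f, y + 64.0f,
    };
    GLfloat texCoords[] = {
        s0, t0,
        s1, t0,
        s0, t1,
        s1, t1,
    };

    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}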
Kevin Conner
+2  A: 

What's wrong with CGImageCreateWithImageInRect?

CGImageRef subImage = CGImageCreateWithImageInRect(image, srcRect);
if (subImage) {
    /* Draw the sub-region into the destination rect, then release it. */
    CGContextDrawImage(context, destRect, subImage);
    CFRelease(subImage);
}
Peter Hosey
Performance-wise, I don't want to create a new image multiple times per render frame. I realise I could pre-cache these images, but the reason I packed the texture pages in the first place was performance.
Rich
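
For reference, a minimal pre-caching sketch along the lines Rich mentions: slice the page into sub-images once at load time, then draw the cached CGImageRefs each frame. The TileCache struct, the function names, and the 8x8 (512x512) page size are assumptions for illustration, not from the question.

#include <CoreGraphics/CoreGraphics.h>

#define TILE_SIZE  64
#define TILES_WIDE 8   /* assumed: 512-pixel-wide page */
#define TILES_HIGH 8

/* Cache of sub-images cut from one texture page; built once at load time. */
typedef struct {
    CGImageRef tiles[TILES_WIDE * TILES_HIGH];
} TileCache;

/* Slice the page into 64x64 tiles once, instead of per frame. */
static void TileCacheInit(TileCache *cache, CGImageRef page)
{
    for (int row = 0; row < TILES_HIGH; row++) {
        for (int col = 0; col < TILES_WIDE; col++) {
            CGRect src = CGRectMake(col * TILE_SIZE, row * TILE_SIZE,
                                    TILE_SIZE, TILE_SIZE);
            cache->tiles[row * TILES_WIDE + col] =
                CGImageCreateWithImageInRect(page, src);
        }
    }
}

/* Per-frame draw: no image creation, just a lookup and a blit. */
static void TileCacheDraw(const TileCache *cache, CGContextRef context,
                          int tileIndex, CGRect destRect)
{
    CGContextDrawImage(context, destRect, cache->tiles[tileIndex]);
}

/* Release the cached tiles when the page is no longer needed. */
static void TileCacheRelease(TileCache *cache)
{
    for (int i = 0; i < TILES_WIDE * TILES_HIGH; i++) {
        CGImageRelease(cache->tiles[i]);
    }
}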