I have a 320x480 PNG that I would like to texture map and manipulate on the iPhone, but those dimensions are obviously not a power of two. I have already tested my texture-mapping algorithm with a 512x512 PNG consisting of a black background with a 320x480 image superimposed on it, positioned at the origin (the lower-left corner, (0,0)), and the 320x480 area comes out properly oriented, centered, and scaled on the iPhone screen.

What I would like to do now is take 320x480 source images and composite them onto a blank/black 512x512 background texture generated in code, so that the two combine into a single texture and I can reuse the vertices and texture coordinates from my 512x512 test (sketched below). This will eventually be used for images captured with the camera, pulled from the camera roll, and so on.
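For reference, the texture coordinates from the 512x512 test work out roughly like this for the 320x480 region, assuming the image occupies the lower-left corner of the texture (just a sketch; the array name and the vertex ordering, here for a triangle-strip quad, are placeholders):

// Texture coordinates mapping the 320x480 region of the 512x512 texture,
// assuming the image sits in the lower-left corner of the texture:
// 320/512 = 0.625, 480/512 = 0.9375
static const GLfloat spriteTexcoords[] = {
    0.0f,   0.0f,      // lower left
    0.625f, 0.0f,      // lower right
    0.0f,   0.9375f,   // upper left
    0.625f, 0.9375f,   // upper right
};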

Any thoughts? (This must be done with OpenGL ES 1.1, without the GL utility toolkit, etc.)

Thanks, Ari

A: 

One method I've found to work is to simply draw both images into the current graphics context and then extract the resulting combined image. Is there another way, more geared toward OpenGL, that might be more efficient?

#import <UIKit/UIKit.h>

// Draw the background and foreground images into one bitmap context and
// return the combined result. The caller owns the returned CGImageRef.
static CGImageRef CreateCombinedImage(CGImageRef backgroundImage,
                                      CGImageRef foregroundImage,
                                      CGSize contextSize)
{
    // Bitmap-backed graphics context of the requested size (e.g. 512x512)
    UIGraphicsBeginImageContext(contextSize);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // One rectangle for the background, one for the foreground image
    CGRect backgroundRect = CGRectMake(0, 0,
                                       CGImageGetWidth(backgroundImage),
                                       CGImageGetHeight(backgroundImage));
    CGRect foregroundRect = CGRectMake(0, 0,
                                       CGImageGetWidth(foregroundImage),
                                       CGImageGetHeight(foregroundImage));

    // Draw the background first, then the foreground on top of it.
    // (CGContextDrawImage uses Core Graphics' bottom-left origin, so the
    // result is vertically flipped relative to UIKit drawing.)
    CGContextDrawImage(currentContext, backgroundRect, backgroundImage);
    CGContextDrawImage(currentContext, foregroundRect, foregroundImage);

    // Pull the combined image back out of the context
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Retain the CGImage so it outlives the autoreleased UIImage
    CGImageRef spriteImage = CGImageRetain(finalImage.CGImage);
    return spriteImage;
}

At this point you can use spriteImage as the source image for the texture, and it will be the combination of, for example, a blank 512x512 PNG and a 320x480 PNG.
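For OpenGL ES 1.1, getting spriteImage into a texture looks roughly like this: copy its pixels into an RGBA buffer via a bitmap context and hand that buffer to glTexImage2D. This is only a sketch; textureName is assumed to be a GLuint created earlier with glGenTextures, and the buffer handling is an assumption rather than the exact code from my project.

#import <OpenGLES/ES1/gl.h>

// Copy the combined CGImage into a raw RGBA buffer and upload it as a
// power-of-two texture (512x512 in this setup).
size_t width  = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height,
                                                   8, width * 4, colorSpace,
                                                   kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the combined image into the buffer-backed context
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);

// Upload the pixel data; textureName is assumed to come from glGenTextures
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

free(spriteData);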

I'll eventually replace the 512x512 blank PNG with an image generated in code, but this does work.
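Generating that blank background in code is straightforward; something along these lines (a sketch, with the 512x512 size taken from the setup above) could replace the PNG:

// Create a solid black 512x512 image in code instead of loading a PNG
CGSize textureSize = CGSizeMake(512.0f, 512.0f);
UIGraphicsBeginImageContext(textureSize);
CGContextRef blankContext = UIGraphicsGetCurrentContext();

// Fill the whole area with opaque black
CGContextSetRGBFillColor(blankContext, 0.0f, 0.0f, 0.0f, 1.0f);
CGContextFillRect(blankContext, CGRectMake(0, 0, textureSize.width, textureSize.height));

UIImage *blankImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// This can now be passed as the background image above
CGImageRef backgroundImage = CGImageRetain(blankImage.CGImage);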

abraginsky