I'm having some performance issues calling CGContextDrawLayerAtPoint on iOS 4 that didn't seem to exist in previous versions of the OS.
I'm copying a layer obtained from a bitmap context created with CGBitmapContextCreate to my view's context during a drawRect call. The view and the bitmap are the same size.
The bitmap was created with:
CGBitmapContextCreate(NULL, width, height, 8, width * 4, genericRGBSpace, kCGBitmapByteOrder32Host | kCGImageAlphaNoneSkipFirst);
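For context, here is a minimal sketch of the setup described above. The function and variable names are mine, not from the actual project, and genericRGBSpace is assumed to be a device RGB color space created elsewhere; this is just to show the pixel format and the 1:1 blit, not the real implementation:

```c
#include <CoreGraphics/CoreGraphics.h>

/* Create a backing bitmap context the same size as the view, using the
 * exact flags from the question: 8 bits per component, 4 bytes per pixel,
 * host byte order, alpha byte present but ignored. */
static CGContextRef CreateBackingContext(size_t width, size_t height)
{
    CGColorSpaceRef genericRGBSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8,
                                             width * 4, genericRGBSpace,
                                             kCGBitmapByteOrder32Host |
                                             kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(genericRGBSpace);
    return ctx;
}

/* The layer is created against that context so its backing matches it,
 * then blitted unscaled into the view's context from -drawRect:, e.g.:
 *
 *   CGLayerRef layer = CGLayerCreateWithContext(backingCtx, size, NULL);
 *   ... draw into CGLayerGetContext(layer) ...
 *   CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(),
 *                             CGPointZero, layer);
 */
```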
Instruments indicates I'm spending more time in CGContextDrawLayerAtPoint than I was on devices running OS 3.2. In fact, the following stack trace accounts for a higher percentage of time:
argb32_sample_argb32
argb32_image_mark
argb32_image
ripl_Mark
ripl_BltImage
RIPLayerBltImage
ripc_RenderImage
ripc_DrawLayer
CGContextDelegateDrawLayer
CGContextDrawLayerAtPoint
whereas the same code running under 3.2 shows
argb32_image_mark_rgb32
argb32_image
ripl_Mark
ripl_BltImage
ripc_RenderImage
ripc_DrawLayer
CGContextDelegateDrawLayer
CGContextDrawLayerAtPoint
with a much lower percentage of time. I'm not sure why argb32_sample_argb32 is being called on iOS 4, or exactly what it's doing - sampling the existing data into a new buffer? I don't see why that would be necessary: the view and the bitmap are the same size, and no scaling has been performed.
Any insight into this would be appreciated.