I'm having some performance issues calling CGContextDrawLayerAtPoint in iOS 4 that didn't seem to exist in previous versions of the OS.

I'm copying a layer obtained from a bitmap context created with CGBitmapContextCreate to my view's context during a drawRect call. The view and the bitmap are the same size.

The bitmap was created with:

CGBitmapContextCreate(NULL, width, height, 8, width * 4, genericRGBSpace, kCGBitmapByteOrder32Host | kCGImageAlphaNoneSkipFirst);
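
For reference, a minimal sketch of the setup being described; the variable names (genericRGBSpace, backingLayer, width, height) and the exact creation order are my assumptions, not the original code:

// Sketch only: a bitmap context and a CGLayer the same size as the view.
// genericRGBSpace is assumed to be a device RGB color space.
CGColorSpaceRef genericRGBSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
    genericRGBSpace, kCGBitmapByteOrder32Host | kCGImageAlphaNoneSkipFirst);
CGLayerRef backingLayer = CGLayerCreateWithContext(bitmapContext,
    CGSizeMake(width, height), NULL);
// ... draw content into CGLayerGetContext(backingLayer) ...

// In -drawRect:, the layer is blitted into the view's context at the origin;
// this is the CGContextDrawLayerAtPoint call that shows up in Instruments.
CGContextRef viewContext = UIGraphicsGetCurrentContext();
CGContextDrawLayerAtPoint(viewContext, CGPointZero, backingLayer);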

Instruments indicates I'm spending more time in CGContextDrawLayerAtPoint than I was on devices running OS 3.2. In fact, the following stack trace accounts for a higher percentage of time:

argb32_sample_argb32  
argb32_image_mark  
argb32_image  
ripl_Mark  
ripl_BltImage  
RIPLayerBltImage  
ripc_RenderImage  
ripc_DrawLayer  
CGContextDelegateDrawLayer  
CGContextDrawLayerAtPoint  

whereas the same code running under 3.2 shows

argb32_image_mark_rgb32
argb32_image  
ripl_Mark  
ripl_BltImage  
ripc_RenderImage  
ripc_DrawLayer  
CGContextDelegateDrawLayer  
CGContextDrawLayerAtPoint  

with a much lower percentage of time. I'm not sure why argb32_sample_argb32 is being called on iOS 4 or what it's doing - resampling the existing data into a new buffer? I don't see why that would be necessary: the view and the bitmap are the same size, and no scaling is being performed.

Any insight into this would be appreciated.

A: 

Well, no one had any insight into this, so I'll recount our theory and what we ended up doing. The theory is that since iOS 4 now has to deal with different screen resolutions, it takes a more "generalized" approach to blitting, which is somewhat less efficient than it used to be.

In the end, it doesn't really matter - it's slower and we have to deal with it. We discovered that there were places where we were copying redundantly, so we were able to reduce the amount of copying we had to do. This bought us the speed we needed.
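
The post doesn't say exactly which copies were redundant, but as an illustration of the general idea, here is a hypothetical sketch: invalidate only the region that changed and clip the layer blit to the dirty rect, so each CGContextDrawLayerAtPoint call touches fewer pixels (backingLayer and changedRect are assumed names, not the actual code):

// Hypothetical sketch, not the poster's actual fix.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Clip to the dirty rect so the blit only writes pixels that actually changed.
    CGContextClipToRect(ctx, rect);
    CGContextDrawLayerAtPoint(ctx, CGPointZero, backingLayer);
}

// Elsewhere, mark only the modified region dirty instead of the whole view:
[self setNeedsDisplayInRect:changedRect];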

Rich Bruchal