I have created an 800x1200 context using these lines:

CGSize sizeX = CGSizeMake(800, 1200);
CGLayerRef objectLayer = CGLayerCreateWithContext (context, sizeX, NULL);

Over this context I have a CGLayer that is 2250x2250 pixels.

This layer (objectLayer) is drawn using something like:

CGRect layerRect = CGRectMake(0, 0, layerW, layerH);
// objectContext is presumably the layer's own context (CGLayerGetContext(objectLayer));
// this draws the source image into the layer
CGContextDrawImage(objectContext, layerRect, myImage.image.CGImage);

CGRect superRect = CGRectMake(0, 0, sizeW, sizeH);
// this composites the layer into the destination context
CGContextDrawLayerInRect(context, superRect, objectLayer);

According to my math, an 800x1200 context at 24 bpp should use about 2.8 MB, and a 2250x2250 layer at 32 bpp should use about 20 MB. So in total both should use roughly 23 MB.
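
(For reference, those estimates correspond to the raw buffer sizes: 800 × 1200 × 3 bytes ≈ 2.88 MB for the context and 2250 × 2250 × 4 bytes ≈ 20.25 MB for the layer.)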

The problem is that Instruments reports the layer alone as using 38.62 MB!

How can that be? Is there something I am missing?

Thanks for any help.

A: 

Does the context (2250x2250) show that memory size right after creation?
How do you create the context?

caahab
1) Yes. 2) See the first block of code in this post.
Digital Robot
Actually, you are creating the objectLayer from context. Where and how do you create the context object?
caahab
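
For illustration only (this is not from the original post): the question never shows where context itself comes from. A typical offscreen setup, assuming an 800x1200 RGBA bitmap context as the destination and a 2250x2250 CGLayer, and reusing the names from the question (context, objectLayer, objectContext, myImage), might look roughly like this:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// 800x1200 destination bitmap context, 8 bits per component, RGBA (32 bpp)
CGContextRef context = CGBitmapContextCreate(NULL, 800, 1200,
                                             8, 800 * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);

// 2250x2250 layer that inherits the destination context's characteristics
CGLayerRef objectLayer = CGLayerCreateWithContext(context, CGSizeMake(2250, 2250), NULL);
CGContextRef objectContext = CGLayerGetContext(objectLayer);

// draw the image into the layer, then composite the layer into the destination
CGContextDrawImage(objectContext, CGRectMake(0, 0, 2250, 2250), myImage.image.CGImage);
CGContextDrawLayerInRect(context, CGRectMake(0, 0, 800, 1200), objectLayer);

CGColorSpaceRelease(colorSpace);
CGLayerRelease(objectLayer);
CGContextRelease(context);

The exact sizes and pixel format used in the real code are precisely what is being asked about here.
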
A: 

The answer, or at least better details, is likely about two clicks away in Instruments (drill into the call stack to find the source of the allocation).

Where is the allocation? Locate the documentation for the offending call; does it specify allocation amounts? Does that differ from your expectations, findings, or usage?

Justin
Instruments points to the line CGLayerRef objectLayer = CGLayerCreateWithContext (context, sizeX, NULL); as using 38.62 MB. I expected 23 MB, as my post says.
Digital Robot
Short answer: I don't know. Long answer: this is a private library and an opaque type. My guess is that the API allocates extra memory for temporary buffers so it does not need to allocate or recompute intermediate results during rendering; it can simply cache them. The caching may be multi-tiered and is likely stored in the destination format, favoring less computation over smaller allocations. You may be able to reduce the allocation by using a source format/context that matches the internal/canonical format.
Justin
Thanks! Can you elaborate on your answer? What do you mean by your last point?
Digital Robot
Regarding width and component order: one approach the API uses is to cache rendered data, so if you supply data in one format, it may create more intermediates in order to achieve higher throughput. One example (on CPUs): it probably would not store intermediate data at 24 bpp if the destination format were 32 bpp, because that would add a ton of overhead. (Does the API state whether it uses 32- or 64-bit color internally?) Regarding copying: it's much like converting between strings of varying types and/or encodings.
Justin
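
For what it's worth, a back-of-the-envelope check (not from the thread itself): if the layer's backing store were kept at 64 bits per pixel, i.e. 16 bits per component RGBA, a 2250x2250 buffer would take 2250 × 2250 × 8 bytes = 40,500,000 bytes ≈ 38.6 MiB, which is almost exactly the figure Instruments reports. That is only an inference from the numbers, not something the CGLayer documentation confirms.
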
A: 

Are you sure you are creating the layers without alpha channels? I think by default there is an alpha channel, so it's four bytes per pixel instead of three...

Kendall Helmstetter Gelner
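
If you want to rule the alpha channel in or out, one option (a sketch, not something from the answer above) is to spell out the bitmap format explicitly when creating the backing context. Note that Quartz bitmap contexts do not support a packed 24 bpp RGB layout; "no alpha" at 8 bits per component still occupies 4 bytes per pixel, with the fourth byte skipped:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// RGB with no alpha, but still 32 bits per pixel (the extra byte is ignored)
CGContextRef ctx = CGBitmapContextCreate(NULL, 800, 1200,
                                         8, 800 * 4, colorSpace,
                                         kCGImageAlphaNoneSkipLast);
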
A: 

Apparently there's no solution for this. It appears to be a problem in the API.

Digital Robot