This is a rather simple example where caching probably wouldn't make much of a difference anyway, but say I have this drawing code in a view to draw a gradient:
@interface SomeView : UIView
@end
@implementation SomeView
- (void)drawRect:(CGRect)rect
{
    const CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Set fill color to white
    CGContextSetGrayFillColor(ctx, 1.0f, 1.0f);
    CGContextFillRect(ctx, rect);

    // Create a fancy, albeit ugly, orange gradient
    const CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    const CGFloat components[] = { 1.0, 0.5, 0.4, 1.0,   // Start color
                                   0.8, 0.8, 0.3, 1.0 }; // End color
    CGGradientRef gloss = CGGradientCreateWithColorComponents(rgbColorSpace, components, NULL, 2);
    CGColorSpaceRelease(rgbColorSpace);

    // Draw the gradient over the top half of the view
    const CGPoint endPoint = {rect.origin.x,
                              rect.origin.y + floor(rect.size.height / 2.0f)};
    CGContextDrawLinearGradient(ctx, gloss, rect.origin, endPoint, 0);
    CGGradientRelease(gloss);
}
@end
I realize the savings here are negligible, but you can imagine the concern if I had more complex values to reuse. Is it necessary to cache these, or does Cocoa Touch essentially do that for you with CALayers?
Here's an example of what I mean by caching:
@interface SomeView : UIView
{
    CGGradientRef gloss;
}
@end
@implementation SomeView
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        // Create the fancy, albeit ugly, orange gradient only once here instead
        const CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        const CGFloat components[] = { 1.0, 0.5, 0.4, 1.0,   // Start color
                                       0.8, 0.8, 0.3, 1.0 }; // End color
        // Assign to the ivar (no local declaration, which would shadow it)
        gloss = CGGradientCreateWithColorComponents(rgbColorSpace, components, NULL, 2);
        CGColorSpaceRelease(rgbColorSpace);
    }
    return self;
}

- (void)dealloc
{
    CGGradientRelease(gloss);
    [super dealloc];
}

- (void)drawRect:(CGRect)rect
{
    const CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Set fill color to white
    CGContextSetGrayFillColor(ctx, 1.0f, 1.0f);
    CGContextFillRect(ctx, rect);

    // Draw the cached gradient
    const CGPoint endPoint = {rect.origin.x,
                              rect.origin.y + floor(rect.size.height / 2.0f)};
    CGContextDrawLinearGradient(ctx, gloss, rect.origin, endPoint, 0);
}
@end
The tradeoff is obvious: with a lot of these views, this technique could end up using more memory, versus possibly worse drawing performance in the former approach. However, I'm not even sure how significant that tradeoff is, because I don't know what magic Cocoa is doing behind the scenes. Could anyone explain?