I have a CGLayer that was created like this:
CGSize size = CGSizeMake(500, 500);
UIGraphicsBeginImageContext(size);
ctx = UIGraphicsGetCurrentContext();
[self loadImageToCTX]; // this method loads an image into CTX
lineLayer = CGLayerCreateWithContext (ctx, size, NULL);
Now I have a PNG with alpha and some content. I need to load this PNG into lineLayer, so I do the following:
// lineLayer is empty, lets load a PNG into it
CGRect superRect = CGRectMake(0,0, 500, 500);
CGContextRef lineContext = CGLayerGetContext (lineLayer);
CGContextSaveGState(lineContext);
// flip the Y axis so the image is not drawn upside down
CGContextTranslateCTM(lineContext, 0, -500);
CGContextScaleCTM(lineContext, 1.0, -1.0);
// CGContextClearRect(lineContext, superRect);
UIImage *loaded = [self recuperarImage:@"LineLayer.png"]; // recuperarImage: returns the PNG as a UIImage
CGContextDrawImage(lineContext, superRect, loaded.CGImage);
CGContextRestoreGState(lineContext);
If I render the contents of ctx plus lineLayer to a view at this point, the final image contains only ctx's contents. Remember: ctx holds an image and lineLayer holds the transparent PNG loaded above. This is how I render it:
CGContextDrawLayerInRect(ctx, superRect, lineLayer);
myView.image = UIGraphicsGetImageFromCurrentImageContext();
Am I missing something? Thanks in advance.