views: 84
answers: 3
I'm writing a game for iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending:

I'm using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending, and I'm trying to compose a scene from several "layers" so I can move them independently instead of using a single static image. I created a preview in Photoshop and then tried to reproduce the same result on the iPhone, but a black halo shows up wherever I blend a texture with semi-transparent regions.

I attached an image: on the left is the screenshot from the iPhone, and on the right is how the composition looks in Photoshop. The image is composed of a gradient and a sand texture with feathered edges.

Is this the expected behaviour? Is there any way I can avoid the dark borders?

Thanks.

EDIT: I'm uploading the portion of the png containing the sand. The complete png is 512x512 and has other images too.

I'm loading the image using the following code:

NSString *path = [NSString stringWithUTF8String:filePath];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil) NSLog(@"ERROR LOADING TEXTURE IMAGE");

GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);
// Note: kCGImageAlphaPremultipliedLast means Core Graphics stores the
// color channels premultiplied by alpha; CGBitmapContextCreate does not
// offer a non-premultiplied RGBA mode.
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(context, 0, height - height); // no-op: translates by zero
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

CGContextRelease(context);

free(imageData);
[image release];
[texData release];

(images: iPhone screenshot on the left, Photoshop composition on the right)

+2  A: 

I have no idea what your original source images look like, but to me it looks like it is blending correctly. With the blend mode you have, you're going to get muddy blends between the layers.

The Photoshop version looks like you've got proper transparency for each layer, but not blending. I suppose you could experiment with glAlphaFunc if you didn't want to set the per-pixel alphas explicitly.

--- Code relating to comment below (removing alpha pre-multiplication) ---

// Undo the alpha premultiplication in place: for each pixel, divide the
// color channels by the alpha channel (fully opaque and fully transparent
// pixels need no change).
int pixelcount = width * height;
unsigned char* off = pixeldata;
for (int pi = 0; pi < pixelcount; ++pi)
{
    unsigned char alpha = off[3];
    if (alpha != 255 && alpha != 0)
    {
        off[0] = ((int)off[0]) * 255 / alpha;
        off[1] = ((int)off[1]) * 255 / alpha;
        off[2] = ((int)off[2]) * 255 / alpha;
    }
    off += 4;
}
Montdidier
I uploaded the original source image; the background is a shape. Why do you say that with the blend mode I have I'm going to get muddy blends between the layers?
Damian
After seeing the original source images, you're not doing what I thought you were doing. What I think is the problem is that the API is premultiplying the alpha. You can quickly work around this by using the blend mode glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead. I understand it's one of the iPhone API annoyances.
Montdidier
Using glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) won't let me change the overall alpha of the image using glColor.
Damian
You might have to remove the RGB-times-alpha multiplication manually from the image once it's loaded, then. It sucks to have to do that. I'll edit the answer above to better illustrate what I mean.
Montdidier
+3  A: 

Your screenshot and photoshop mockup suggest that the image's color channels are being premultiplied against the alpha channel.

rpetrich
rpetrich: I uploaded the source image and the piece of code I use to load it. You are right, that's what it looks like, but why/where are the color channels being premultiplied?
Damian
They are being multiplied because that's the format the iPhone's GPU handles best: all PNGs are converted to premultiplied RGBA when read using UIImage. You may be able to work around this using the newly public ImageIO framework, or by simply storing your alpha channel in a separate image.
rpetrich
+1  A: 

I need to answer my own question:

I couldn't make it work using the ImageIO framework, so I added the libpng sources to my project and loaded the image with it. It works perfectly now, but first I had to solve the following problem:

The image loaded and displayed fine in the simulator but would not load at all on the real device. I found on the web what's going on: when PNG files are deployed for the device, a compression utility, 'pngcrush', converts the pixel ordering from RGBA to BGRA and premultiplies the color values by the alpha channel (for device-specific efficiency reasons, when programming against UIKit).

The utility also rewrites a header chunk of the file, making the new PNG unusable by libpng. These changes are applied automatically when PNG files are deployed onto the iPhone. While this is fine for UIKit, libpng (and other non-Apple libraries) generally can't read the files afterwards.

The simple solutions are:

  1. rename your PNG files with a different extension.
  2. for your iPhone -device- build add the following user-defined setting:

    IPHONE_OPTIMIZE_OPTIONS | -skip-PNGs

I did the second, and it now works perfectly on both the simulator and the device.

Damian