It can be done by mallocing a temporary bitmap with 32 bits per pixel, clearing the alpha component in a for loop, and finally turning the result back into an NSImage again.
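For reference, a rough and untested sketch of that brute-force approach (it assumes a non-planar RGBA rep with the alpha sample last, and it ignores the flip between image coordinates and bitmap row order):

NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
unsigned char *data = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];
NSInteger samplesPerPixel = [rep samplesPerPixel];   // expecting 4 for RGBA

for (NSInteger y = (NSInteger)NSMinY(rect); y < (NSInteger)NSMaxY(rect); y++) {
    for (NSInteger x = (NSInteger)NSMinX(rect); x < (NSInteger)NSMaxX(rect); x++) {
        data[y * bytesPerRow + x * samplesPerPixel + 3] = 0;   // zero the alpha byte
    }
}

NSImage *result = [[NSImage alloc] initWithSize:[img size]];
[result addRepresentation:rep];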
I suspect it can be done in a simpler way using a clever combination of NSColor and NSCompositingOperation, or perhaps the image needs to be composited with itself using drawAtPoint (a sketch of that idea follows the code below).
My code looks like this.
NSImage *img = ...;   // some image with RGB and alpha
NSRect rect = ...;    // some rect inside the image
[img lockFocus];
[[NSColor clearColor] set];
NSRectFillUsingOperation(rect, NSCompositeXOR);
[img unlockFocus];
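As for compositing the image with itself via drawAtPoint, something along these lines is what I have in mind (untested; NSCompositeCopy is just a placeholder, since I do not know which operation, if any, would discard the alpha):

[img lockFocus];
[img drawAtPoint:rect.origin
        fromRect:rect
       operation:NSCompositeCopy   // placeholder, probably not the right operation
        fraction:1.0];
[img unlockFocus];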
NOTE: Setting the alpha channel to 1 can be done by using a blackColor with NSCompositePlusLighter.
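For completeness, this is what I mean; filling with opaque black using plus-lighter adds zero to the RGB components but saturates the alpha in the rect:

[img lockFocus];
[[NSColor blackColor] set];
NSRectFillUsingOperation(rect, NSCompositePlusLighter);
[img unlockFocus];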
What is the secret to clearing the alpha channel?