You'll have to handle shades of off-white as well. Probably when they made the images initially and set the background color, there was some anti-aliasing, and then when saving as a JPEG not all colors are preserved perfectly. So if you make one particular color transparent, you won't catch all the shades of that color, which will leave a lot of artifacts. You need something that makes transparency proportional to how close a color is to your key color. This might be easier to do as a batch script in something like Photoshop, but I don't know whether you need to do it in real time.
There's no (remotely easy) way to deal with this problem programmatically. The white, artifact-y areas around the edge of the image are the result of pixels that are nearly white but not quite, so they don't pick up the transparency effect. There are also a couple of spots on the mask/coffee mug that are pure white, so they become transparent and thus grey.
Your best bet is to contact the original site's webmaster and see if they can send you the original images, hopefully in Photoshop or some other format where the original layers are preserved separately. You could then re-generate the images in a format that preserves the original transparency (PNG or something like that) or else uses your gradient for the background (it would be very tough to get this right, since you don't know exactly where within the gradient the picture will be rendered).
I'd go with some sort of border around the images, as you suggested.
Loop through each pixel in the image; if R, G and B are all higher than, say, 230, then replace the color with your desired color (or transparent). Maybe even weight the new color depending on how far from 'true' white the old color is.
Expect to get problems if the actual image is white as well, otherwise you will end up with a grey stormtrooper :)
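A minimal sketch of that loop, assuming Pillow and a white key color; the 230 threshold, the linear weighting and the function name white_to_transparent are just placeholders to tune, not a definitive implementation:

```python
from PIL import Image

def white_to_transparent(path, threshold=230):
    """Make near-white pixels transparent, fading alpha by how close they are to pure white."""
    img = Image.open(path).convert("RGBA")
    pixels = img.load()

    for y in range(img.height):
        for x in range(img.width):
            r, g, b, a = pixels[x, y]
            if r > threshold and g > threshold and b > threshold:
                # Distance from pure white decides how opaque the pixel stays:
                # pure white -> fully transparent, just above the threshold -> mostly opaque.
                distance = (255 - r) + (255 - g) + (255 - b)   # 0 .. 3*(255-threshold)
                max_distance = 3 * (255 - threshold)
                alpha = int(255 * distance / max_distance)
                pixels[x, y] = (r, g, b, min(a, alpha))

    return img

# white_to_transparent("stormtrooper.jpg").save("stormtrooper.png")
```

As warned above, anything genuinely white inside the object will fade out as well, so treat this as a starting point rather than a finished tool.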
You will not be able to do this automatically with anything like 100% accuracy.
The reason for this is that the only info you have is the colour which you know some pixels in the image are attempting to blend nicely with. Only some pixels in the image will actually be using colours at or close to this value for the purposes of shading into the background; others will be using it (in this case white) because the actual objects represented are in fact white (damn the precision of these Imperial stormtroopers).
The sort of sophisticated machine learning needed to detect which is which is an interesting problem domain, and it might be a fun project for you, but it certainly won't make for a quick solution to your immediate problem.
The other problem you have is that, even if you could detect with good reliability those areas of the image which are attempting to blend into the background, you will have issues 'unblending' them and then reblending them into your new background colour unless the colours are reasonably compatible. In this case your gray may work, since it is a broad-spectrum colour like the white.
The technique you want to use is as follows (a rough code sketch follows the list):
- Use a flood fill algorithm to select, from the edges of the image inwards, all pixels within x% (1) of the known backdrop colour.
- For those pixels, set the alpha channel in proportion to how closely they match the backdrop colour, and eliminate the colour cast associated with it.
- So if the backdrop is the RGB value (a, b, c) and the pixel is (a+5, b, c-7), then the result is the RGBA value (5, 0, 0, ((a+b+c-2)/(a+b+c))*255) (2).
- Composite this alpha-blended image over a plain square of the new background colour.
- Render the result, with no alpha channel, as the new image.
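A rough Pillow sketch of those steps, under my own assumptions: a white backdrop, a plain gray replacement background, a 12% tolerance, and a simple per-pixel alpha based on distance from the backdrop rather than the channel-sum formula above; it also skips removing the colour cast to stay short. The name reblend and all defaults are hypothetical, not a definitive implementation:

```python
from collections import deque
from PIL import Image

def reblend(path, backdrop=(255, 255, 255), new_bg=(128, 128, 128), tolerance=0.12):
    """Flood-fill near-backdrop pixels from the image edges, make them transparent in
    proportion to their match, then composite over a plain square of the new colour."""
    img = Image.open(path).convert("RGBA")
    px = img.load()
    w, h = img.size
    max_dist = 3 * 255 * tolerance  # how far from the backdrop still counts as "background"

    def dist(p):
        return sum(abs(p[i] - backdrop[i]) for i in range(3))

    # Breadth-first flood fill, seeded from every border pixel.
    seen = set()
    queue = deque((x, y) for x in range(w) for y in (0, h - 1))
    queue.extend((x, y) for y in range(h) for x in (0, w - 1))
    while queue:
        x, y = queue.popleft()
        if (x, y) in seen or not (0 <= x < w and 0 <= y < h):
            continue
        seen.add((x, y))
        r, g, b, a = px[x, y]
        d = dist((r, g, b))
        if d > max_dist:
            continue  # not close enough to the backdrop: stop the fill here
        # An exact backdrop match becomes fully transparent; pixels near the
        # tolerance edge stay mostly opaque.
        px[x, y] = (r, g, b, int(255 * d / max_dist))
        queue.extend(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))

    # Composite over a plain square of the new background colour and drop the alpha channel.
    background = Image.new("RGBA", img.size, new_bg + (255,))
    return Image.alpha_composite(background, img).convert("RGB")

# reblend("stormtrooper.jpg").save("stormtrooper_grey.png")
```

Because the fill only starts from the border, white pixels inside the object survive, but fully enclosed white holes stay opaque, which is the limitation described below.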
This will still have issues for objects whose colour is close to either background colour.
- In the case of the original background, it may be that shadowing is being used to imply the presence of the object, in which case the flood fill will 'invade' the inside of the image.
- In the case of the new one, the resulting image will lose definition of the object, and no subtle shading, highlights or even plain lines will be present to indicate where the object ends and the background begins.
This is a very rough first approximation but may cover a reasonable percentage of your target images. Those pictures with transparent, fully enclosed holes (like the gaps in the outer arch in your example) are not likely to ever work nicely in an automatic fashion, since the algorithm will be unable to distinguish between white holes and a white stormtrooper.
You may wish to make your algorithm highlight the regions of the picture it plans on reblending and allow the simple selection of regions to include/exclude (using the magic wand selection tool from Paint.NET as an example of how to do this if you want to be fancy, or allowing simple per-pixel selection for less upfront effort).
- (1) The value for x will be something you tune; it may be that, based on some aspects of the image (say, the proportion of the image which is close to the backdrop colour), you can tweak it automatically.
- (2) Note that this formula assumes a close-to-white backdrop colour; for one close to black you would want to invert it.
I want to ask a question, please: I am dealing with words as images and want to fill gaps in the letters.
What algorithm do you advise me to use?