What algorithms should be used for image downsizing?

Which is faster?

What algorithm is used for image resizing (especially downsizing, from a big 600x600 to a super-small 6x6, for example) by such giants as Flash, Silverlight, and HTML5?

+1  A: 

Normally I would stick to a bilinear filter for scaling down. For resizing images to tiny sizes, though, you may be out of luck. Most icons are pixel-edited by hand to make them look their best.
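For reference, here is a minimal sketch of what a bilinear filter computes for one output pixel, assuming an 8-bit grayscale image stored row-major (the layout and function name are just for illustration):

    /* Blend the four pixels nearest the fractional source
       coordinate (fx, fy); weights come from the fractional parts. */
    static unsigned char bilinear_sample(const unsigned char *src,
                                         int w, int h, float fx, float fy)
    {
        int x0 = (int)fx, y0 = (int)fy;
        int x1 = (x0 + 1 < w) ? x0 + 1 : x0;   /* clamp at the edges */
        int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
        float tx = fx - x0, ty = fy - y0;

        float top = src[y0*w + x0] * (1 - tx) + src[y0*w + x1] * tx;
        float bot = src[y1*w + x0] * (1 - tx) + src[y1*w + x1] * tx;
        return (unsigned char)(top * (1 - ty) + bot * ty + 0.5f);
    }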

Here is a good resource which explains the concepts quite well.

Ben Herila
+1  A: 

There is an excellent article at The Code Project showing the effects of various image filters.

For shrinking an image I suggest the bicubic algorithm; this has a natural sharpening effect, so detail in the image is retained at smaller sizes.
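For the curious, most bicubic resizers weight pixels with the Keys cubic-convolution kernel with a = -0.5; its negative lobes for 1 < |x| < 2 are what produce that mild sharpening. A sketch of the weight function:

    /* Keys cubic-convolution weight, a = -0.5. Each output pixel is a
       weighted sum over a 4x4 neighborhood using these weights. */
    static double cubic_weight(double x)
    {
        const double a = -0.5;
        x = x < 0 ? -x : x;   /* |x| */
        if (x <= 1.0)
            return (a + 2.0)*x*x*x - (a + 3.0)*x*x + 1.0;
        if (x < 2.0)
            return a*x*x*x - 5.0*a*x*x + 8.0*a*x - 4.0*a;
        return 0.0;
    }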

Dour High Arch
I get a 404 error on that page. I'd suggest bicubic for scaling up, but I stick with my bilinear for scaling down. All in all it's not going to make a major difference.
Ben Herila
@Ben, link fixed. I disagree; bilinear has a smoothing effect that removes jaggies when enlarging, but tends to wash details out when downsizing. Anyway, take a look at the article.
Dour High Arch
+3  A: 

Bilinear is the most widely used method and can be made to run about as fast as the nearest neighbor down-sampling algorithm, which is the fastest but least accurate.

The trouble with a naive implementation of bilinear sampling is that if you use it to reduce an image by more than half, you can run into aliasing artifacts similar to what you would encounter with nearest neighbor. The solution is a pyramid-based approach: if you want to reduce 600x600 to 30x30, you first reduce to 300x300, then 150x150, then 75x75, then 38x38, and only then use bilinear to reduce to 30x30.
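A sketch of that pyramid loop, assuming a halve() step that box-averages 2x2 blocks (a runnable version follows the formula below) and an ordinary bilinear_resize() for the last hop; both helpers are illustrative names, not a real API:

    /* Halve until the image is within 2x of the target, ping-ponging
       between two buffers, then finish with one bilinear pass. */
    static void pyramid_downscale(unsigned char *src, int w, int h,
                                  unsigned char *tmp,
                                  unsigned char *dst, int dw, int dh)
    {
        while (w / 2 >= dw && h / 2 >= dh) {
            halve(src, tmp, w, h);   /* 600 -> 300 -> 150 -> 75 -> ~38 */
            { unsigned char *t = src; src = tmp; tmp = t; }
            w /= 2;  h /= 2;
        }
        bilinear_resize(src, w, h, dst, dw, dh);   /* e.g. ~38x38 -> 30x30 */
    }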

When reducing an image by exactly half, the bilinear sampling algorithm becomes much simpler. Basically, for every other row and column of pixels (that is, for each 2x2 block):

y[i/2][j/2] = (x[i][j] + x[i+1][j] + x[i][j+1] + x[i+1][j+1]) / 4;
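A runnable sketch of that step, again assuming 8-bit grayscale; each output pixel is the rounded average of one 2x2 input block:

    /* Box-average 2x2 blocks: output is (w/2) x (h/2). The + 2
       rounds to nearest instead of truncating. */
    static void halve(const unsigned char *x, unsigned char *y, int w, int h)
    {
        int i, j, hw = w / 2;
        for (i = 0; i + 1 < h; i += 2)
            for (j = 0; j + 1 < w; j += 2)
                y[(i/2)*hw + j/2] =
                    (unsigned char)((x[i*w + j]     + x[i*w + j + 1] +
                                     x[(i+1)*w + j] + x[(i+1)*w + j + 1] + 2) / 4);
    }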
Doug
+1  A: 

There's one special case: downsizing JPEGs by a factor of 8 or more. A direct factor-of-8 rescale can be done on the raw JPEG data without fully decompressing it: JPEGs are stored as compressed 8x8-pixel blocks, with the average pixel value (the DC coefficient) stored first. As a result, it typically takes more time to read the file from disk or the network than it takes to downscale it.
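libjpeg, for instance, exposes this through the scale_num/scale_denom fields of its decompressor; at 1/8 scale only that first coefficient of each block contributes, so nearly all of the IDCT work is skipped. A minimal sketch (decode_eighth is a made-up name; error handling is omitted):

    #include <stdio.h>
    #include <jpeglib.h>

    /* Decode a JPEG at 1/8 of its stored size using libjpeg's
       DCT-domain scaling. */
    int decode_eighth(const char *path)
    {
        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;
        FILE *f = fopen(path, "rb");
        if (!f) return -1;

        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, f);
        jpeg_read_header(&cinfo, TRUE);

        cinfo.scale_num = 1;       /* request 1/8-size output */
        cinfo.scale_denom = 8;
        jpeg_start_decompress(&cinfo);

        /* cinfo.output_width / output_height now hold the scaled size. */
        JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
            ((j_common_ptr)&cinfo, JPOOL_IMAGE,
             cinfo.output_width * cinfo.output_components, 1);
        while (cinfo.output_scanline < cinfo.output_height)
            jpeg_read_scanlines(&cinfo, row, 1);

        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
        fclose(f);
        return 0;
    }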

MSalters