views: 3283

answers: 11

I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the scaled image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and by my routine, and the two are miles apart in quality. I've Google Code Searched and scoured the source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler, with little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :)

I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way. I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudocode... anything to help me learn and do this.

Here's what I'm looking for:

1. No assembler (I'm writing very portable code for multiple processor types).
2. No dependencies on external libraries.
3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
4. Quality of the result and clarity of the algorithm are most important (I can optimize it later).

My routine essentially takes the following form:

    DrawScaled( uint32 *src, uint32 *dst, src_x, src_y, src_w, src_h, dst_x, dst_y, dst_w, dst_h );

Thanks!

UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55.

And finally: the original 256x256 image

+2  A: 

I've found the wxWidgets implementation fairly straightforward to modify as required. It is all C++ so no problems with portability there. The only difference is that their implementation works with unsigned char arrays (which I find to be the easiest way to deal with images anyhow) with a byte order of RGB and the alpha component in a separate array.

If you refer to the "src/common/image.cpp" file in the wxWidgets source tree, there is a down-sampler function that uses a box sampling method, "wxImage::ResampleBox", and an up-scaler function called "wxImage::ResampleBicubic".
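The box sampling idea itself fits in a few lines. Roughly this (just a sketch of the idea, not the actual wxWidgets code, and it cheats by assuming an exact integer scale factor):

    // Sketch of plain box sampling: each destination pixel is the straight
    // average of the factor x factor block of source pixels it covers.
    // Assumes src_w and src_h are multiples of 'factor'.
    typedef unsigned int uint32;   // 32-bit RGBA pixel, R assumed in the high byte

    void BoxDownscale(const uint32 *src, int src_w, int src_h,
                      uint32 *dst, int factor)
    {
        int dst_w = src_w / factor;
        int dst_h = src_h / factor;
        for (int dy = 0; dy < dst_h; ++dy)
        {
            for (int dx = 0; dx < dst_w; ++dx)
            {
                uint32 r = 0, g = 0, b = 0, a = 0;
                for (int y = 0; y < factor; ++y)
                {
                    for (int x = 0; x < factor; ++x)
                    {
                        uint32 p = src[(dy * factor + y) * src_w + (dx * factor + x)];
                        r += (p >> 24) & 0xFF;
                        g += (p >> 16) & 0xFF;
                        b += (p >>  8) & 0xFF;
                        a +=  p        & 0xFF;
                    }
                }
                uint32 n = (uint32)(factor * factor);
                dst[dy * dst_w + dx] = ((r / n) << 24) | ((g / n) << 16) |
                                       ((b / n) <<  8) |  (a / n);
            }
        }
    }

For non-integer ratios you weight the pixels at the edges of each box by how much of them falls inside it.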

Dan
Thanks. I tried that, but it fails on the quality test (it's blurry). See update on my post.
pbhogan
+1  A: 

A fairly simple and decent algorithm to resample images is bicubic interpolation; Wikipedia alone has all the info you need to get it implemented.
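The heart of it is just the cubic weight function, applied across the 4x4 neighbourhood of each sample point. Something like this sketch of the standard cubic convolution kernel (a = -0.5 is the usual choice):

    #include <cmath>

    // Cubic convolution kernel (Keys). Each output sample is a weighted sum of
    // the 16 nearest source pixels; the weight of each pixel comes from this
    // function of its horizontal and vertical distance to the sample point.
    static float CubicWeight(float x, float a = -0.5f)
    {
        x = std::fabs(x);
        if (x <= 1.0f)
            return (a + 2.0f) * x * x * x - (a + 3.0f) * x * x + 1.0f;
        if (x < 2.0f)
            return a * (x * x * x - 5.0f * x * x + 8.0f * x - 4.0f);
        return 0.0f;
    }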

Pieter
While this gives decent quality, I'm guessing that this simple solution doesn't quite count as "high quality".
TM
+1  A: 
joki
+1  A: 

A generic article from our beloved host: Better Image Resizing, discussing the relative qualities of various algorithms (and it links to another CodeProject article).

PhiLho
A: 

Intel has the IPP libraries, which provide high-speed interpolation algorithms optimized for Intel-family processors. They are very good, but not free. Take a look at the following link:

Intel IPP

Naveen
The IPP libs are the low-level raw operations. The OpenCV lib provides higher-level image processing functions and can use IPP if available. OpenCV is free - I thought IPP was too?
Martin Beckett
A: 

Take a look at ImageMagick, which does all kinds of rescaling filters.

derobert
A: 

Is it possible that OpenGL is doing the scaling in the vector domain? If so, there is no way that any pixel-based scaling is going to come near it in quality. This is the big advantage of vector-based images.

The bicubic algorithm can be tuned for sharpness vs. artifacts - I'm trying to find a link; I'll edit it in when I do.

Edit: It was the Mitchell-Netravali work that I was thinking of, which is referenced at the bottom of this link:

http://www.cg.tuwien.ac.at/~theussl/DA/node11.html

You might also look into Lanczos resampling as an alternative to bicubic.
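For what it's worth, the Lanczos weight function is small enough to jot down here (a sketch; a = 2 or 3 is typical, with larger values keeping more detail at the cost of more ringing):

    #include <cmath>

    // Lanczos kernel: sinc(x) windowed by sinc(x/a). Weights are zero outside |x| < a.
    static float Lanczos(float x, float a = 3.0f)
    {
        if (x == 0.0f) return 1.0f;
        if (std::fabs(x) >= a) return 0.0f;
        const float pi = 3.14159265358979f;
        return a * std::sin(pi * x) * std::sin(pi * x / a) / (pi * pi * x * x);
    }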

Mark Ransom
No, that's actually a 256x256 bitmap that's being scaled down. It's not drawn by vector operations (although it's unusually crisp, even compared to Photoshop's bicubic resample, which is pretty good itself).
pbhogan
If you can provide the original 256x256 bitmap, that would be helpful too.
Mark Ransom
I've added it to the post.
pbhogan
+1  A: 

It sounds like what you're really having difficulty understanding is the discrete -> continuous -> discrete flow involved in properly resampling an image. A good tech report that might help give you the insight you need is Alvy Ray Smith's A Pixel Is Not A Little Square.

Boojum
This is a great article, thanks, but it is not the problem I'm having. I understand pixels are not squares. The problem I'm having is finding a clear, understandable, reasonably fast algorithm reference for implementing scaling properly.
pbhogan
A: 

As a follow-up, Jeremy Rudd posted this article above. It implements filtered two-pass resizing. The sources are C#, but it looks clear enough that I can port it to give it a try. I found very similar C code yesterday that was much harder to understand (very bad variable names). I got it to sort of work, but it was very slow and did not produce good results, which led me to believe there was an error in my adaptation. I may have better luck writing it from scratch with this as a reference, which I'll try.
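The overall shape, as I understand it, is roughly this (a rough, untested sketch with a simple triangle filter, not Jeremy Rudd's code; a bicubic or Lanczos weight would slot into the same place):

    #include <cmath>
    #include <vector>

    typedef unsigned int uint32;   // 32-bit RGBA pixel, R assumed in the high byte

    // One 1D pass: each output sample is a weighted average of the source
    // samples under its filter window. The window widens by the scale ratio
    // when shrinking, so every source pixel contributes to something.
    static void ResampleLine(const uint32 *src, int src_len, int src_stride,
                             uint32 *dst, int dst_len, int dst_stride)
    {
        float scale   = (float)src_len / dst_len;
        float support = (scale > 1.0f) ? scale : 1.0f;

        for (int d = 0; d < dst_len; ++d)
        {
            float center = (d + 0.5f) * scale;
            int   first  = (int)std::floor(center - support);
            int   last   = (int)std::ceil (center + support);
            float sum[4] = { 0, 0, 0, 0 };
            float total  = 0.0f;

            for (int s = first; s <= last; ++s)
            {
                if (s < 0 || s >= src_len) continue;
                float dist = std::fabs(s + 0.5f - center) / support;
                if (dist >= 1.0f) continue;
                float w = 1.0f - dist;               // triangle (tent) filter
                uint32 p = src[s * src_stride];
                sum[0] += w * ((p >> 24) & 0xFF);
                sum[1] += w * ((p >> 16) & 0xFF);
                sum[2] += w * ((p >>  8) & 0xFF);
                sum[3] += w * ( p        & 0xFF);
                total  += w;
            }

            uint32 c[4];
            for (int i = 0; i < 4; ++i)
                c[i] = (uint32)(sum[i] / total + 0.5f);
            dst[d * dst_stride] = (c[0] << 24) | (c[1] << 16) | (c[2] << 8) | c[3];
        }
    }

    // Two passes: rows first into a dst_w x src_h intermediate, then columns.
    void DrawScaledTwoPass(const uint32 *src, int src_w, int src_h,
                           uint32 *dst, int dst_w, int dst_h)
    {
        std::vector<uint32> tmp(dst_w * src_h);

        for (int y = 0; y < src_h; ++y)              // horizontal pass
            ResampleLine(src + y * src_w, src_w, 1, &tmp[y * dst_w], dst_w, 1);

        for (int x = 0; x < dst_w; ++x)              // vertical pass
            ResampleLine(&tmp[x], src_h, dst_w, dst + x, dst_h, dst_w);
    }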

But considering how the two-pass algorithm works, I wonder if there isn't a faster way of doing it, perhaps even in one pass?

pbhogan
+1  A: 

Now that I see your original image, I think that OpenGL is using a nearest neighbor algorithm. Not only is it the simplest possible way to resize, but it's also the quickest. The only downside is that it looks very rough if there's any detail in your original image.

The idea is to take evenly spaced samples from your original image; in your case, 55 out of 256, or one out of every 4.6545. Just round the number to get the pixel to choose.
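In code it's just something like this (sketch, using your variable names):

    // Nearest-neighbor sketch: each destination pixel copies the source pixel
    // containing its mapped center. 256 -> 55 means stepping through the
    // source roughly 4.65 pixels at a time.
    float x_ratio = (float)src_w / dst_w;
    float y_ratio = (float)src_h / dst_h;
    for (int dy = 0; dy < dst_h; ++dy)
    {
        int sy = (int)((dy + 0.5f) * y_ratio);       // source row for this dest row
        for (int dx = 0; dx < dst_w; ++dx)
        {
            int sx = (int)((dx + 0.5f) * x_ratio);   // source column for this dest column
            dst[dy * dst_w + dx] = src[sy * src_w + sx];
        }
    }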

Mark Ransom
For a moment I thought you were correct, so I checked my GL_TEXTURE_MIN_FILTER setting and it was set to GL_LINEAR. For kicks I compared it to GL_NEAREST and blew it up in Photoshop. There is a slight difference in smoothness.
pbhogan
I actually have a working routine now based on 2-pass scaling that supports multiple filters. I'll be trying them out and may post results compared to OpenGL and Photoshop at some point. I hope to post some generic code others can use, since the question seems to have generated some watchers.
pbhogan