views: 2438
answers: 5
I want to be able to take an image and blur it relatively quickly (say in 0.1 sec). Image size would almost never be larger than 256 x 256 px.

Do I have to loop through every pixel and average it with its neighbors, or is there a higher-level way to do this?

PS: I am aware that multiple box blurs can approximate a Gaussian blur.
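
For reference, this is roughly the naive per-pixel loop I mean; a minimal sketch in plain C (single 8-bit channel, 3x3 box average, edges clamped, no performance tricks):

void box_blur_3x3(const unsigned char *src, unsigned char *dst,
                  int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int sum = 0;
            /* Average the 3x3 neighborhood, clamping at the borders */
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0) sx = 0; else if (sx >= width)  sx = width - 1;
                    if (sy < 0) sy = 0; else if (sy >= height) sy = height - 1;
                    sum += src[sy * width + sx];
                }
            }
            dst[y * width + x] = (unsigned char)(sum / 9);
        }
    }
}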

A: 

You might want to take a look at Mario Klingemann's StackBlur algorithm. It's not quite Gaussian, but pretty close.

balpha
Can I just use a CGImageRef as input to StackBlur? I'm not sure how to get the pixel data out of it.
mahboudz
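For what it's worth, one common way to get at the raw pixels of a CGImageRef is to render it into a CGBitmapContext you own; a rough sketch, assuming an 8-bit RGBA layout (error handling omitted):

#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

/* Renders a CGImageRef into a caller-owned 8-bit RGBA buffer.
 * The returned buffer (width * height * 4 bytes) must be free()d. */
unsigned char *copy_rgba_pixels(CGImageRef image)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    unsigned char *pixels = malloc(width * height * 4);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                             8, width * 4, space,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return pixels;   /* feed this buffer to StackBlur, then rebuild an image */
}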
+1  A: 

Here are two tricks for a poor man's blur:

  1. Take the image and draw it at partial opacity 5 or 6 times (or however many you want), offsetting it by a couple of pixels in a different direction each time. Drawing more times in more directions gives you a better blur, but you obviously trade off processing time. This works well if you want a blur with a relatively small radius (see the sketch after this list).

  2. For monochromatic images, you can actually use the built-in shadow as a simple blur.
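
A rough sketch of trick 1 using the Core Graphics C API (assuming an RGBA CGImageRef; the five offsets and the 1/5 opacity are just example values):

#include <CoreGraphics/CoreGraphics.h>

/* Draws the image several times at partial opacity, each pass nudged
 * in a different direction, and returns the accumulated "blurred" image.
 * Caller releases the returned CGImageRef. */
CGImageRef poor_mans_blur(CGImageRef image, CGFloat offset)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    CGRect rect   = CGRectMake(0, 0, width, height);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8, 0, space,
                                             kCGImageAlphaPremultipliedLast);

    /* Additive blending so five 1/5-opacity passes sum back to full brightness */
    CGContextSetBlendMode(ctx, kCGBlendModePlusLighter);
    CGContextSetAlpha(ctx, 1.0 / 5.0);

    /* One centered pass plus four offset passes */
    CGSize shifts[5] = { {0, 0}, {offset, 0}, {-offset, 0},
                         {0, offset}, {0, -offset} };
    for (int i = 0; i < 5; i++) {
        CGRect r = CGRectOffset(rect, shifts[i].width, shifts[i].height);
        CGContextDrawImage(ctx, r, image);
    }

    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    return result;
}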

Kailoa Kadano
If the "partial opacity" is a Gaussin bell curve function of the "couple pixels," you have the defintion of a gaussian blur (minus aliasing issues).
balpha
Do you know how fast the multiple drawings would be for a 4-way blur? That is, 4 left and 4 right.
willc2
+1  A: 

If you always, or at least often, use the same blur settings, you might gain speed by doing the filtering in the frequency domain instead of the spatial domain.

  1. Precalculate your filter image G(u,v), which is a 2D Gaussian
  2. Apply the Fourier transform to your input image: f(x,y) -> F(u,v)
  3. Filter by multiplication: H(u,v) = F(u,v) .* G(u,v) (pixelwise multiplication, not matrix multiplication)
  4. Transform your filtered image back into the spatial domain with the inverse Fourier transform: H(u,v) -> h(x,y)

The advantage of this approach is that pixelwise multiplication should be pretty fast compared to averaging a neighborhood around every pixel. So if you process a lot of images, this might help.
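
For illustration, a minimal sketch of step 3 in plain C, assuming F and G are already available as split real/imaginary float buffers of n = width*height frequency samples (the function name and buffer layout are made up; plug in whatever FFT you use):

/* Step 3 only: pixelwise complex multiplication H = F .* G.
 * If G is a purely real Gaussian, gIm is all zeros and this simplifies. */
void multiply_spectra(const float *fRe, const float *fIm,
                      const float *gRe, const float *gIm,
                      float *hRe, float *hIm, int n)
{
    for (int i = 0; i < n; i++) {
        /* (a+bi)(c+di) = (ac - bd) + (ad + bc)i */
        hRe[i] = fRe[i] * gRe[i] - fIm[i] * gIm[i];
        hIm[i] = fRe[i] * gIm[i] + fIm[i] * gRe[i];
    }
}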

The downside is that I have no idea how fast you can do Fourier transforms on the iPhone, so this might very well be much slower than other implementations.

Other than that, since the iPhone has OpenGL support, I guess you could use its texturing/drawing functions to do it. Sorry to say, though, that I am no OpenGL expert and can't really give any practical advice on how that is done.

kigurai
+1  A: 

Any algorithm that modifies images at the pixel level via OpenGL is going to be a tad slow; manipulating an OpenGL texture pixel by pixel and re-uploading it every frame gives sadly inadequate performance.

Spend some time writing a test rig and experimenting with pixel manipulation before committing to implementing a complex blur routine.

Phill
+2  A: 

From how-do-i-create-blurred-text-in-an-iphone-view:

Take a look at Apple's GLImageProcessing iPhone sample. It does some blurring, among other things.

The relevant code includes:

static void blur(V2fT2f *quad, float t) // t = 1
{
    GLint tex;
    V2fT2f tmpquad[4];
    float offw = t / Input.wide;
    float offh = t / Input.high;
    int i;

    glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex);

    // Three pass small blur, using rotated pattern to sample 17 texels:
    //
    // .\/.. 
    // ./\\/ 
    // \/X/\   rotated samples filter across texel corners
    // /\\/. 
    // ../\. 

    // Pass one: center nearest sample
    glVertexPointer  (2, GL_FLOAT, sizeof(V2fT2f), &quad[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &quad[0].s);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glColor4f(1.0/5, 1.0/5, 1.0/5, 1.0);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass two: accumulate two rotated linear samples
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s + 1.5 * offw;
        tmpquad[i].y = quad[i].t + 0.5 * offh;
        tmpquad[i].s = quad[i].s - 1.5 * offw;
        tmpquad[i].t = quad[i].t - 0.5 * offh;
    }
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].x);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glClientActiveTexture(GL_TEXTURE1);
    glTexCoordPointer(2, GL_FLOAT, sizeof(V2fT2f), &tmpquad[0].s);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,      GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB,         GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB,         GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB,         GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB,     GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA,    GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA,       GL_PRIMARY_COLOR);

    glColor4f(0.5, 0.5, 0.5, 2.0/5);
    validateTexEnv();
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Pass three: accumulate two rotated linear samples
    for (i = 0; i < 4; i++)
    {
        tmpquad[i].x = quad[i].s - 0.5 * offw;
        tmpquad[i].y = quad[i].t + 1.5 * offh;
        tmpquad[i].s = quad[i].s + 0.5 * offw;
        tmpquad[i].t = quad[i].t - 1.5 * offh;
    }
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Restore state
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glClientActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, Half.texID);
    glDisable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB,     GL_SRC_ALPHA);
    glActiveTexture(GL_TEXTURE0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glDisable(GL_BLEND);
}
mahboudz