views: 113
answers: 4

I have a gray-scale image and I want to make a function that

  1. closely follows the image
  2. is always greater than the image
  3. is smooth at some given scale.

In other words, I want a smooth function that approximates the maximum of another function in the local region while overestimating that function at all points.

Any ideas?


My first pass at this amounted to picking the "high spots" (by comparing the image to a least-squares fit of a high order 2-D polynomial) and matching a 2-D polynomial to them and their slopes. As the first fit required more working space than I had address space, I think it's not going to work and I'm going to have to come up with something else...


What I did

My end target was to do a smooth adjustment on an image so that each local region uses the full range of values. The key realization was that an "almost perfect" function would do just fine for me.

The following procedure (that never has the max function explicitly) is what I ended up with:

  • Find the local mean and standard deviation at each point using a "blur"-like function.
  • Offset the image to get a zero mean. (image -= mean;)
  • Divide each pixel by its local standard deviation. (image /= stdev;)
  • Most of the image should now be in [-1, 1]. (Oddly enough, most of my test images have better than 99% in that range rather than the ~68% that would be expected.)
  • Find the standard deviation of the whole image.
  • Map some span +/- n*sigma to your output range.

With a little manipulation, that can be converted to find the Max function I was asking about.
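A rough NumPy sketch of the procedure above (a separable box blur stands in for the "blur"-like function; `r` and `n_sigma` are made-up names for the tuning parameters):

```python
import numpy as np

def box_blur(img, r):
    # separable box blur with radius r (window 2r+1), edges clamped
    k = np.ones(2 * r + 1) / (2 * r + 1)
    p = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, tmp)

def local_range_adjust(img, r=8, n_sigma=2.0):
    mean = box_blur(img, r)
    var = np.maximum(box_blur(img ** 2, r) - mean ** 2, 1e-12)
    z = (img - mean) / np.sqrt(var)   # zero mean, unit local stdev
    s = z.std()                       # stdev of the whole image
    # map the span +/- n_sigma * s to the output range [0, 1]
    return np.clip((z + n_sigma * s) / (2 * n_sigma * s), 0.0, 1.0)

img = np.random.rand(32, 32)
out = local_range_adjust(img, r=4, n_sigma=2.0)   # values clipped to [0, 1]
```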

A: 

Can you clarify what you mean by your desire that it be "smooth" at some scale? Also, over how large of a "local region" do you want it to approximate the maximum?

Quick and dirty answer: weighted average of the source image and a windowed maximum.
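A sketch of that blend in NumPy (1D for brevity; `w` is a made-up weight parameter). Since the windowed max is pointwise >= the image and w is in [0, 1], the blend stays above the image too:

```python
import numpy as np

def windowed_max(img, R):
    # sliding maximum over an R-sample window (R odd), edges clamped
    h = R // 2
    p = np.pad(img, h, mode='edge')
    return np.array([p[i:i + R].max() for i in range(len(img))])

img = np.random.rand(64)
w = 0.7   # blend weight; w = 1 gives the pure windowed max
env = (1 - w) * img + w * windowed_max(img, 5)
```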

Stephen Canon
- By smooth, I want only small changes in the function (and its slope) from one pixel to the next. Ideally, I'd like a function that is smooth in the mathematical sense, but that would be hard for a discrete function.
- The size of the "local region" is one of the tuning parameters.
- As to the weighted average idea: wouldn't the windowed max be able to make step changes? Or am I reading that wrong? OTOH, picking the N closest local maxima and weighting them based on distance might do it... I'll have to think on that.
BCS
Yes, the windowed max can make step changes. You could weight the average based on the distance of the pixel from the location where the maximum is achieved in order to smooth this out, or you could just apply some smoothing function.
Stephen Canon
+1  A: 

My quick and dirty answer would be to start with the original image, and repeat the following process for each pixel until no changes are made:

  1. If an overlarge delta in value between this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.
  2. If an overlarge slope change around this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.

The 1D version would look something like this:

for x in range(1, len(img) - 1):
    d = img[x-1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX
    d = img[x+1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX

    # a sharp valley can be filled by raising this pixel;
    # a sharp peak would need its neighbours raised instead
    dleft = img[x-1] - img[x]
    dright = img[x] - img[x+1]
    d = dleft - dright
    if d > SLOPEMAX:
        img[x] += (d - SLOPEMAX) / 2
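For reference, a self-contained sketch of the full repeat-until-no-changes process, handling only the delta constraint (the slope pass would be analogous; `dmax` is the DMAX threshold above):

```python
import numpy as np

def smooth_upper(img, dmax, max_iters=10000):
    # repeatedly raise pixels until no neighbour exceeds them by more than dmax;
    # since pixels only increase and are bounded by the global max, this converges
    out = np.asarray(img, dtype=float).copy()
    for _ in range(max_iters):
        changed = False
        for x in range(len(out)):
            for nb in (x - 1, x + 1):
                if 0 <= nb < len(out):
                    d = out[nb] - out[x]
                    if d > dmax + 1e-12:
                        out[x] += d - dmax
                        changed = True
        if not changed:
            break
    return out

out = smooth_upper([0, 0, 10, 0, 0], 2)   # → [6., 8., 10., 8., 6.]
```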
Artelius
I have realised that there is a problem: if you have a local maximum whose slope-change is too large, it may never be corrected. I guess you could add half the difference to each neighbour in this case...
Artelius
+1  A: 

Maximum filter the image with an RxR filter, then use an order R-1 B-spline smoothing on the maximum-filtered image. The convex hull properties of the B-spline guarantee that it will be above the original image.
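A 1D sketch of this pipeline: a maximum filter followed by smoothing. (A single box kernel, i.e. a degree-0 B-spline, stands in for the higher-order spline; as long as the smoothing support doesn't exceed R, every smoothed value is a convex combination of window maxima that each cover the original pixel, so the result stays above the original.)

```python
import numpy as np

def max_filter(img, R):
    # sliding maximum over an R-sample window (R odd), edges clamped
    h = R // 2
    p = np.pad(img, h, mode='edge')
    return np.array([p[i:i + R].max() for i in range(len(img))])

def box_smooth(img, R):
    # convolution with a width-R box kernel (a degree-0 B-spline), edges clamped
    h = R // 2
    p = np.pad(img, h, mode='edge')
    return np.convolve(p, np.ones(R) / R, mode='valid')

img = np.random.rand(64)
env = box_smooth(max_filter(img, 5), 5)   # smooth envelope, >= img everywhere
```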

thouis
+1  A: 

Here's something that's easy; I don't know how good it is.

  1. To get smooth, use your favorite blurring algorithm. E.g., average points within radius 5. Space cost is on the order of the image size, and time is proportional to the image size times the square of the blurring radius.

  2. Find the maximum pointwise difference between the original and the blurred image, i.e. the maximum of (original[i][j] - blurred[i][j]), and add that value to every pixel in the blurred image. The sum is guaranteed to overapproximate the original image. Time cost is proportional to the size of the image, with constant additional space (if you overwrite the blurred image after computing the max).
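The two steps above might be sketched in NumPy like this (a separable box blur stands in for the blurring algorithm):

```python
import numpy as np

def box_blur(img, r):
    # separable box blur with radius r (window 2r+1), edges clamped
    k = np.ones(2 * r + 1) / (2 * r + 1)
    p = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, tmp)

img = np.random.rand(32, 32)
blurred = box_blur(img, 5)
# shifting the blur up by the worst-case gap guarantees it covers the image
envelope = blurred + (img - blurred).max()
```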

To do better (e.g., to minimize the square error under some set of constraints), you'll have to pick some class of smooth curves and do some substantial calculations. You could try quadratic or cubic splines, but in two dimensions splines are not much fun.

Norman Ramsey
I like it :), however I think adding bell curves (`n/(1+r^2)` is my choice) at all the high points would do better. Getting all the curves to add up to exactly the right amount shouldn't be too hard (linear algebra) as long as there aren't too many.
BCS
Or even better, find the mean and standard deviation of each local region and use `x_bar +- n*sigma`. Or even better find the "upper" and "lower" standard deviation by fixing the mean and only considering, in turn, the points above or below it. This would work better for unbalanced regions like a star field.
BCS
While I didn't end up using your suggestion, it pointed me in the right direction.
BCS