views:

527

answers:

5

How do I segment a 2D image into blobs of similar values efficiently? The given input is an array of integers, which contains the hue for non-gray pixels and the brightness for gray pixels.

I am writing a virtual mobile robot in Java, and I am using segmentation to analyze both the map and the image from the camera. This is a well-known problem in computer vision, but on a robot performance does matter, so I wanted some input. The algorithm is what matters, so you can post code in any language.

A: 

What I have now:

  1. Make a buffer of the same size as the input image, initialized to UNSEGMENTED.
  2. For each pixel in the image whose corresponding buffer value is still UNSEGMENTED, flood fill the buffer starting from that pixel (see the sketch after this list).

    a. The border check of the flood fill tests whether a pixel's value is within EPSILON (currently set to 10) of the originating pixel's value.

    b. A standard flood fill algorithm is used.
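
A minimal sketch of this buffer-plus-flood-fill scheme, assuming one int value per pixel and 4-connectivity; the class and helper names are made up for illustration, and an explicit stack replaces recursion so large blobs don't overflow the call stack:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class BlobSegmenter {
    private static final int UNSEGMENTED = -1;
    private static final int EPSILON = 10;

    /** Returns a buffer where each pixel holds the id of the blob it belongs to. */
    static int[] segment(int[] pixels, int width, int height) {
        int[] buffer = new int[pixels.length];
        Arrays.fill(buffer, UNSEGMENTED);
        int nextId = 0;
        for (int i = 0; i < pixels.length; i++) {
            if (buffer[i] == UNSEGMENTED) {
                floodFill(pixels, buffer, width, height, i, nextId++);
            }
        }
        return buffer;
    }

    /** Iterative 4-neighbour flood fill seeded at index 'seed'. */
    static void floodFill(int[] pixels, int[] buffer, int width, int height, int seed, int id) {
        int reference = pixels[seed];          // the originating pixel's value
        Deque<Integer> stack = new ArrayDeque<>();
        buffer[seed] = id;
        stack.push(seed);
        while (!stack.isEmpty()) {
            int index = stack.pop();
            int x = index % width, y = index / width;
            visit(pixels, buffer, width, height, x - 1, y, reference, id, stack);
            visit(pixels, buffer, width, height, x + 1, y, reference, id, stack);
            visit(pixels, buffer, width, height, x, y - 1, reference, id, stack);
            visit(pixels, buffer, width, height, x, y + 1, reference, id, stack);
        }
    }

    private static void visit(int[] pixels, int[] buffer, int width, int height,
                              int x, int y, int reference, int id, Deque<Integer> stack) {
        if (x < 0 || x >= width || y < 0 || y >= height) return;
        int index = y * width + x;
        if (buffer[index] != UNSEGMENTED) return;
        if (Math.abs(pixels[index] - reference) > EPSILON) return;  // the border check from 2.a
        buffer[index] = id;
        stack.push(index);
    }
}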

Possible issue:

The border check in 2.a is called many times during the flood fill. I could turn it into a lookup if I precalculated the borders using edge detection, but that might add more time than the current check.

private boolean isValuesCloseEnough(int a_lhs, int a_rhs) {
    return Math.abs(a_lhs - a_rhs) <= EPSILON;
}

Possible Enhancement:

Instead of checking every single pixel for UNSEGMENTED, I could randomly pick a few seed points. If you are expecting around 10 blobs, picking on the order of that many random points may suffice. The drawback is that you might miss a small but useful blob.
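
A rough sketch of that idea, reusing the buffer and floodFill from the sketch above; the seed count and attempt multiplier are arbitrary illustrative choices:

// Seed the segmentation from random points instead of scanning every pixel.
java.util.Random random = new java.util.Random();
int expectedBlobs = 10;              // rough guess at the number of blobs
int attempts = expectedBlobs * 5;    // try a few times more seeds than expected blobs
int nextId = 0;
for (int i = 0; i < attempts; i++) {
    int seed = random.nextInt(pixels.length);
    if (buffer[seed] == UNSEGMENTED) {   // skip seeds that already landed inside a blob
        floodFill(pixels, buffer, width, height, seed, nextId++);
    }
}
// Small blobs may be missed entirely if no seed happens to land inside them.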

eed3si9n
+2  A: 

I would downsample, both in colour space and in number of pixels, use a vision method (probably mean shift), and upscale the result.

This is good because downsampling also increases the robustness to noise, and makes it more likely that you get meaningful segments.

You could use floodfill to smooth edges afterwards if you need smoothness.

Some more thoughts (in response to your comment).

1) Did you blend as you downsampled, e.g. y[i] = (x[2i] + x[2i+1]) / 2? This should reduce noise (a small sketch follows after these points).

2) How fast do you want it to be?

3) Have you tried dynamic mean shift? (Also try googling "dynamic X" for any algorithm X.)
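
For point 1, a minimal 2x downsample that blends (averages) each 2x2 block might look like this in Java; the method name is just for illustration:

// Illustrative 2x downsample that averages each 2x2 block, blending while shrinking.
static int[] downsample2x(int[] src, int width, int height) {
    int outWidth = width / 2, outHeight = height / 2;
    int[] dst = new int[outWidth * outHeight];
    for (int y = 0; y < outHeight; y++) {
        for (int x = 0; x < outWidth; x++) {
            int a = src[(2 * y) * width + (2 * x)];
            int b = src[(2 * y) * width + (2 * x + 1)];
            int c = src[(2 * y + 1) * width + (2 * x)];
            int d = src[(2 * y + 1) * width + (2 * x + 1)];
            dst[y * outWidth + x] = (a + b + c + d) / 4;  // blend: average of the block
        }
    }
    return dst;
}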

+1 for the idea. That's actually the first thing I tried, but 1. my original image is small (100x100) and I want to preserve the size of the blobs, and 2. when it does pick up noise, the noise gets amplified. In real situations with lots of gradients, this would be a good idea.
eed3si9n
1. I didn't blend during downsampling, but I did apply a Gaussian blur to the whole image beforehand to reduce small noise. 2. I don't have a specific goal, but faster means more frequent updates for the robot. 3. No, that sounds much cooler than flooding.
eed3si9n
+2  A: 

Not sure how efficient it would be, but you could try using a Kohonen neural network (or self-organizing map, SOM) to group the similar values: each pixel carries its original colour and position, but only the colour is used for the Kohonen grouping.

You should read up before you implement this though, as my knowledge of Kohonen networks goes only as far as knowing that they are used for grouping data, so I don't know what the performance and viability are for your scenario.
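
As a very rough illustration of the idea (not a tuned implementation), a 1D self-organizing map over the scalar pixel values could look like the sketch below; the node count, learning rate, and neighbourhood schedule are all arbitrary assumptions. Each pixel would then be labelled with its nearest node, and spatially connected pixels sharing a label form a blob.

import java.util.Random;

// Rough sketch of a tiny 1D Kohonen/SOM grouping scalar pixel values; all constants are arbitrary.
public class TinySom {
    static double[] train(int[] pixels, int numNodes, int iterations) {
        Random random = new Random();
        double[] nodes = new double[numNodes];
        for (int i = 0; i < numNodes; i++) {
            nodes[i] = pixels[random.nextInt(pixels.length)];   // initialize nodes from samples
        }
        for (int t = 0; t < iterations; t++) {
            int sample = pixels[random.nextInt(pixels.length)];
            // Find the best matching unit (the node closest in value).
            int best = 0;
            for (int i = 1; i < numNodes; i++) {
                if (Math.abs(nodes[i] - sample) < Math.abs(nodes[best] - sample)) best = i;
            }
            // Learning rate and neighbourhood radius shrink over time.
            double alpha = 0.5 * (1.0 - (double) t / iterations);
            int radius = (int) Math.round(2.0 * (1.0 - (double) t / iterations));
            for (int i = Math.max(0, best - radius); i <= Math.min(numNodes - 1, best + radius); i++) {
                nodes[i] += alpha * (sample - nodes[i]);   // pull neighbouring nodes toward the sample
            }
        }
        return nodes;   // assign each pixel to its nearest node to get its group
    }
}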

There are also Hopfield Networks. They can be mangled into grouping from what I read.

Jonathan C Dickinson
+1 for interesting idea.
eed3si9n
A: 

Check out Eyepatch (eyepatch.stanford.edu). It should help you during the investigation phase by providing a variety of possible filters for segmentation.

yacdmnky
A: 

An alternative to flood fill is the connected-components algorithm. So,

  1. Cheaply classify your pixels, e.g. by dividing them up in colour space.
  2. Run connected components to find the blobs.
  3. Retain only the blobs of significant size.

This approach is widely used in early vision systems, for example in the seminal paper "Blobworld: A System for Region-Based Image Indexing and Retrieval".
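
A hedged sketch of steps 1 and 2 in Java, assuming a very cheap classification (fixed-width buckets in value space) and a BFS-based labelling pass; the bucket width and all names are illustrative:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class ConnectedComponents {
    /** Step 1: cheap classification, here just quantizing values into buckets of width 16. */
    static int classify(int value) {
        return value / 16;
    }

    /** Step 2: label 4-connected runs of pixels that fall into the same class. */
    static int[] label(int[] pixels, int width, int height) {
        int[] labels = new int[pixels.length];
        Arrays.fill(labels, -1);
        Deque<Integer> queue = new ArrayDeque<>();
        int nextLabel = 0;
        for (int start = 0; start < pixels.length; start++) {
            if (labels[start] != -1) continue;
            int cls = classify(pixels[start]);
            labels[start] = nextLabel;
            queue.add(start);
            while (!queue.isEmpty()) {
                int index = queue.poll();
                int x = index % width, y = index / width;
                int[][] neighbours = {{x - 1, y}, {x + 1, y}, {x, y - 1}, {x, y + 1}};
                for (int[] n : neighbours) {
                    if (n[0] < 0 || n[0] >= width || n[1] < 0 || n[1] >= height) continue;
                    int ni = n[1] * width + n[0];
                    if (labels[ni] == -1 && classify(pixels[ni]) == cls) {
                        labels[ni] = nextLabel;
                        queue.add(ni);
                    }
                }
            }
            nextLabel++;
        }
        return labels;   // step 3: count pixels per label and keep only the large blobs
    }
}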

graveca