views:

127

answers:

3

Does anybody know how a real image captured with a camera can be translated into "cartoon" space?

Please note that my goal is not to create animations or the likes, but just to translate to "cartoon colors" if possible.

Will simple requantization to a space with fewer quantization levels work, or are other specific transforms better?

Any help would be useful, as I wasn't able to find any material on this.

Thanks in advance.

+4  A: 

What you're trying to do is most commonly done from 3D models and is called cel-shading, or "toon-shading". Basically, you try to force uniform colors and force abrupt transitions at certain angles with respect to the light source.

Obviously, this does not translate well to 2D input images. What you can do is requantize, making sure you fill regions uniformly and break where the image gradient is high.
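A minimal sketch of the requantization idea, in pure NumPy (the function name and the choice of uniform per-channel bins are my own; this ignores the "break where the gradient is high" refinement):

```python
import numpy as np

def requantize(img, levels=4):
    """Posterize: snap each channel to `levels` evenly spaced values."""
    img = np.asarray(img, dtype=np.float32)
    step = 256.0 / levels
    # Map each pixel to the centre of its quantization bin.
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)
```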

Non-linear diffusion is a denoising technique that forces regions to become uniform to remove noise. If you let it loop for too many iterations, you get a cartoon-looking image.

I implemented that maybe 2-3 years ago and it worked surprisingly well, considering it was not hard to implement. However, you're going to want a GPGPU implementation because it is slow!
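The diffusion idea above can be sketched with the classic Perona-Malik scheme in NumPy (parameter values and the periodic border handling via `np.roll` are illustrative choices, not a tuned implementation):

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, dt=0.2):
    """Non-linear diffusion: flattens smooth regions while preserving
    strong edges. Many iterations give flat, cartoon-like patches."""
    u = np.asarray(img, dtype=np.float64)
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic border via np.roll,
        # kept only to make the sketch short).
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        # Edge-stopping function: small gradients diffuse, large ones do not.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u
```

Run per colour channel on an RGB image; the loop over iterations is what makes a GPU port attractive.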

André Caron
Is there any way of achieving the same on a smartphone? I know requantization will be decently fast, say O(n*m) if the image is n-by-m; at worst n and m are of the order of 2000.
Egon
Requantizing will not give you results of the same quality. If you're willing to trade quality for speed, that's your decision! Honestly, you should implement more than one method, analyze the tradeoffs and *then* implement it on a smartphone. Take a look at MATLAB, Scilab or Octave to prototype if necessary.
André Caron
@André Non-linear diffusion demands a lot of processing power, though.
karlphillip
Yes it does, note the last sentence in my answer :-)
André Caron
+1  A: 

You could also take a look at mean shift segmentation. An implementation is available here: EDISON
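To illustrate the idea behind mean shift, here is a toy sketch in NumPy that shifts colours toward local density peaks; real segmenters like EDISON also use a spatial term and are far more efficient, so this is only a conceptual demo (function name and parameters are mine):

```python
import numpy as np

def mean_shift_colors(colors, bandwidth=30.0, n_iter=10):
    """Toy mean shift in colour space only: each colour drifts toward
    the local density peak, so similar colours collapse onto a few
    modes -- producing flat, cartoon-like regions."""
    pts = np.asarray(colors, dtype=np.float64)
    modes = pts.copy()
    for _ in range(n_iter):
        # For each current mode, average all original points within `bandwidth`
        # (flat kernel). O(n^2) per iteration -- toy scale only.
        d = np.linalg.norm(modes[:, None, :] - pts[None, :, :], axis=2)
        w = (d < bandwidth).astype(np.float64)
        modes = (w @ pts) / w.sum(axis=1, keepdims=True)
    return modes
```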

carlosdc
A: 

Complete shot in the dark:

  1. Convert to HSV color space (cvtColor using CV_BGR2HSV)
  2. Leave H(ue) alone, or quantize it down to some smaller set if you want
  3. Binary threshold S(aturation) with a low threshold so that pastels push to white
  4. Binary threshold V(alue) with a low threshold so that dark stuff turns to black

Absolutely untested. Probably speaking out of my hat... But it should be pretty low CPU use if it works. This strikes me as the sort of thing to fire up with sliders for the values needed in steps 2-4 and just fiddle with.
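Steps 1-4 can be sketched without OpenCV, using a hand-rolled RGB-to-HSV conversion in NumPy (the function name, threshold defaults, and `hue_levels` parameter are all illustrative):

```python
import numpy as np

def cartoonize_hsv(rgb, hue_levels=12, s_thresh=0.2, v_thresh=0.2):
    """Steps 1-4 above: convert to HSV, quantize hue, and binarize
    saturation and value. `rgb` is a float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                                  # value
    c = v - rgb.min(axis=-1)                              # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)    # saturation
    # Hue in [0, 1), guarding the achromatic (c == 0) case.
    with np.errstate(invalid="ignore", divide="ignore"):
        h = np.select(
            [c == 0, v == r, v == g],
            [0.0, ((g - b) / c) % 6, (b - r) / c + 2],
            (r - g) / c + 4) / 6.0
    h = (np.round(h * hue_levels) / hue_levels) % 1.0     # step 2: quantize hue
    s = np.where(s < s_thresh, 0.0, 1.0)                  # step 3: pastels -> white
    v = np.where(v < v_thresh, 0.0, 1.0)                  # step 4: dark -> black
    return np.stack([h, s, v], axis=-1)
```

Wiring `hue_levels`, `s_thresh` and `v_thresh` to sliders is exactly the fiddling suggested above.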

EDIT: A friend pointed out that you might also want lines around objects. My first thought for that would be to use cvCanny to pick out edges (it requires a grayscale image; I'm not sure whether it would be better to do this before or after the HSV cartooning — probably before). Those edges will be a single pixel wide, which may not be enough, so you may want to dilate them a bit to widen them. They'll be white on a black background, so you can then subtract them from your cartoon-colored image, which pulls the pixels where the lines are down to 0 (saturation arithmetic to the rescue) but leaves the other pixels alone.
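A rough sketch of the edge-overlay step, substituting a thresholded Sobel-style gradient for Canny to stay NumPy-only (the stand-in detector, threshold value, and wrap-around border are my simplifications):

```python
import numpy as np

def edge_overlay(gray, cartoon, thresh=50.0):
    """Detect edges on the grayscale image, dilate them one pixel, and
    black out those pixels in the cartoon image -- the same effect as
    saturating subtraction of a white-on-black edge mask."""
    g = np.asarray(gray, dtype=np.float64)
    # Central-difference gradient (wrap-around border for brevity).
    gx = np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)
    gy = np.roll(g, -1, axis=0) - np.roll(g, 1, axis=0)
    edges = np.hypot(gx, gy) > thresh
    # Dilate: mark a pixel if any 4-neighbour is an edge.
    for ax, sh in [(0, 1), (0, -1), (1, 1), (1, -1)]:
        edges = edges | np.roll(edges, sh, axis=ax)
    # Edge pixels drop to 0; everything else is left alone.
    return np.where(edges[..., None], 0, cartoon).astype(np.uint8)
```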

DigitalMonk