In the upcoming version of Photoshop there is a feature called Content-Aware Fill.

This feature fills a selection of an image based on the surrounding image, to the point that it can generate bushes and clouds that blend seamlessly with their surroundings.

See http://www.youtube.com/watch?v=NH0aEp1oDOI for a preview of the Photoshop feature I'm talking about.

My question is: How does this feature work algorithmically?

+2  A: 

Well, they are not going to tell, for obvious reasons. The general name for the technique is "inpainting"; you can look it up.

Specifically, if you look at what Criminisi did while in Microsoft http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.9407 and what Todor Georgiev does now at Adobe http://www.tgeorgiev.net/Inpainting.html, you'll be able to make a very good guess. A 90% guess, I'd say, which should be good enough.
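To make the exemplar-based idea in Criminisi's paper concrete, here is a toy sketch of my own (a drastic simplification, certainly not Adobe's code): grow the fill from the hole border inward, painting each pixel by matching its partially known neighbourhood against every fully known patch in the image. The real algorithm copies whole patches in a priority order driven by confidence and gradient terms; this version drops both, and assumes a grayscale image with the hole away from the image border:

```python
import numpy as np

def exemplar_fill(img, known, p=5):
    # Greedy exemplar-based fill: for each pixel on the hole border, compare
    # its partially known p x p neighbourhood against every fully known patch
    # (over the known pixels only) and copy the best match's centre pixel.
    img, known = img.astype(float).copy(), known.copy()
    h = p // 2
    H, W = img.shape
    # source patches: fully known p x p windows of the original image
    srcs = [(y, x) for y in range(h, H - h) for x in range(h, W - h)
            if known[y - h:y + h + 1, x - h:x + h + 1].all()]
    while not known.all():
        for y, x in zip(*np.where(~known)):
            win = np.s_[y - h:y + h + 1, x - h:x + h + 1]
            m = known[win]
            if not m.any():
                continue        # deep inside the hole; filled on a later pass
            best_v, best_d = None, np.inf
            for sy, sx in srcs:
                s = img[sy - h:sy + h + 1, sx - h:sx + h + 1]
                d = (((img[win] - s) * m) ** 2).sum()
                if d < best_d:
                    best_d, best_v = d, s[h, h]
            img[y, x] = best_v
            known[y, x] = True
    return img

# demo: repair a hole punched into a striped texture
stripes = np.tile(np.arange(16) % 2, (16, 1)).astype(float)
mask = np.ones((16, 16), dtype=bool)
mask[6:9, 6:9] = False          # 3x3 hole
damaged = stripes.copy()
damaged[~mask] = -1.0           # destroy the hole contents
filled = exemplar_fill(damaged, mask)
```

On a regular texture like this, the copied exemplars reproduce the stripes exactly; real photographs need the priority term to keep structures (edges) propagating before flat texture.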

AVB
+3  A: 

A very similar algorithm has been available for GIMP for quite a long time. It is called Resynthesizer, and you should be able to find its source (perhaps at the project site).

EDIT
The source is also available in the Ubuntu repositories.
And here you can see the same images processed with GIMP: http://www.youtube.com/watch?v=0AoobQQBeVc&feature=related

Gacek
When I last looked at Resynthesizer (version 0.13, I think), it was behind the state of the art (see e.g. Criminisi's work cited in my answer) and thus pretty slow. Of course, it's still the best of what's available for free.
AVB
+1  A: 

As a guess (and that's all it would be) I'd expect that it does some frequency analysis (something like a Fourier transform) of the image. By looking only at the image at the edge of the selection and ignoring the middle, it could then extrapolate back into the middle. If the designers choose the correct color planes and whatnot, they should be able to generate a texture that seamlessly blends into the image at the edges.


edit: looking at the last example in the video: if you look at the top of the original image on either edge, you see that the selection line runs right down a "gap" in the clouds, and that right in the middle there is a "bump". These are the kind of artifacts I'd expect to see if my guess is correct. (OTOH, I'd also expect to see them if it were using some kind of pseudo-mirroring across the selection boundary.)
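For what it's worth, the classical algorithm in the spirit of this guess is Papoulis–Gerchberg band-limited extrapolation: alternate between forcing the signal to be band-limited and re-imposing the known samples outside the gap. A 1-D toy version (my illustration of the idea, not what Photoshop does):

```python
import numpy as np

def pg_extrapolate(x, known, band, iters=200):
    # Papoulis-Gerchberg iteration: alternate between (a) projecting onto
    # signals band-limited to |freq| <= band and (b) restoring known samples.
    n = len(x)
    keep = np.abs(np.fft.fftfreq(n, d=1.0 / n)) <= band
    y = np.where(known, x, 0.0)         # unknown samples start at zero
    for _ in range(iters):
        Y = np.fft.fft(y)
        Y[~keep] = 0.0                  # enforce the band limit
        y = np.fft.ifft(Y).real
        y[known] = x[known]             # enforce the known samples
    return y

# demo: recover a 10-sample gap in a pure 3-cycle sinusoid
t = np.arange(128) / 128.0
sig = np.sin(2 * np.pi * 3 * t)
known = np.ones(128, dtype=bool)
known[60:70] = False                    # the "hole"
rec = pg_extrapolate(sig, known, band=3)
```

This only works for genuinely band-limited content, which is why pure frequency methods lost out to patch-based ones for natural images.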

BCS
People used Fourier-like transforms to do these things in the 80s. Things have happened since, although in some conditions this will give you a good initial guess.
AVB
AB: If I had to guess, I'd still say it's doing the same basic thing, just a more advanced version of it.
BCS
+4  A: 

I'm guessing that for the smaller holes they are grabbing similarly textured patches surrounding the area to fill it in. This is described in a paper entitled "PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing" by Connelly Barnes and others in SIGGRAPH 2009. For larger holes they can exploit a large database of pictures with similar global statistics or textures, as described in "Scene Completion Using Millions of Photographs". If they could somehow fuse the two together, I think it would work like in the video.
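The core of PatchMatch is a randomized nearest-neighbour field: random initialization, then alternating propagation (good matches spread to neighbouring patches) and random search at exponentially shrinking radii. A miniature grayscale version, just to show the structure (simplified from the paper, not Adobe's implementation):

```python
import numpy as np

def ssd(A, B, ay, ax, by, bx, p):
    # sum of squared differences between two p x p patches
    d = A[ay:ay + p, ax:ax + p] - B[by:by + p, bx:bx + p]
    return float((d * d).sum())

def patchmatch(A, B, p=3, iters=4, seed=0):
    # Approximate nearest-neighbour field from every p x p patch of A to B.
    rng = np.random.default_rng(seed)
    H, W = A.shape[0] - p + 1, A.shape[1] - p + 1
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1
    nny = rng.integers(0, Hb, (H, W))   # random initialisation
    nnx = rng.integers(0, Wb, (H, W))
    cost = np.array([[ssd(A, B, y, x, nny[y, x], nnx[y, x], p)
                      for x in range(W)] for y in range(H)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1     # alternate scan direction
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        xs = range(W) if step == 1 else range(W - 1, -1, -1)
        for y in ys:
            for x in xs:
                # propagation: adopt an already-visited neighbour's match,
                # shifted by one pixel, if it is better
                for dy, dx in ((step, 0), (0, step)):
                    py, px = y - dy, x - dx
                    if 0 <= py < H and 0 <= px < W:
                        cy = min(max(nny[py, px] + dy, 0), Hb - 1)
                        cx = min(max(nnx[py, px] + dx, 0), Wb - 1)
                        c = ssd(A, B, y, x, cy, cx, p)
                        if c < cost[y, x]:
                            cost[y, x], nny[y, x], nnx[y, x] = c, cy, cx
                # random search around the current match, radius halving
                r = max(Hb, Wb)
                while r >= 1:
                    cy = rng.integers(max(0, nny[y, x] - r),
                                      min(Hb, nny[y, x] + r + 1))
                    cx = rng.integers(max(0, nnx[y, x] - r),
                                      min(Wb, nnx[y, x] + r + 1))
                    c = ssd(A, B, y, x, cy, cx, p)
                    if c < cost[y, x]:
                        cost[y, x], nny[y, x], nnx[y, x] = c, cy, cx
                    r //= 2
    return nny, nnx, cost

# demo: A is a crop of B, so exact matches exist
rng = np.random.default_rng(1)
B = rng.random((20, 20))
A = B[5:15, 5:15].copy()
_, _, c0 = patchmatch(A, B, iters=0)    # cost of the random initialisation
_, _, c4 = patchmatch(A, B, iters=4)
```

Updates are only accepted when they improve the match, so the field never gets worse; the paper's hole-filling application then blends the matched patches into the hole iteratively.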

Hao Wooi Lim
+1  A: 

I work on a similar problem. From what I've read, they use "PatchMatch" or non-parametric patch sampling in general.

PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing

Ross
Why can't I post the link properly?
Ross
+3  A: 

The general approach is called seam carving. Ariel Shamir's group is responsible for the seminal work here, which was presented at SIGGRAPH 2007. See: http://www.faculty.idc.ac.il/arik/site/subject-seam-carve.asp
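The heart of seam carving is a simple dynamic program: accumulate minimal path energy from the top row down, then backtrack the cheapest 8-connected vertical seam. A small sketch of just that step (my own illustration, not the paper's code):

```python
import numpy as np

def min_vertical_seam(energy):
    # Cumulative minimal energy: each pixel extends the cheapest of its
    # three upper neighbours, then we backtrack from the bottom row.
    H, W = energy.shape
    M = energy.astype(float).copy()
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)
            M[y, x] += M[y - 1, lo:hi].min()
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(M[y, lo:hi]))
    return seam

# demo: the seam should follow the zero-energy column
energy = np.ones((6, 5))
energy[:, 2] = 0.0
seam = min_vertical_seam(energy)
```

Removing (for retargeting) or repeatedly inserting low-energy seams is what lets content be deleted or regions grown without visible distortion.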

nav