I am building an ASP.NET web site where users may upload photos of themselves. There could be thousands of photos uploaded every day. One thing my boss has asked a few times is whether there is any way we could detect if any of the photos are showing too much 'skin' and automatically flag these as 'Adults Only' before the editors make the final decision.

+19  A: 

I doubt that there exists any off-the-shelf software that can determine if the user uploads a naughty picture. Your best bet is to let users flag images as 'Adults Only' with a button next to the picture. (Clarification: I mean users other than the one who uploaded the picture--similar to how posts can be marked offensive here on StackOverflow.)

Also, consider this review of an attempt to do the same thing in a dedicated product: http://www.dansdata.com/pornsweeper.htm.

Link stolen from today's StackOverflow podcast, of course :).

JSBangs
Do you really trust users of a site to check the 'evil bit' when they upload an image that is questionable?
Peter M
I think he means that other users will flag it as offensive / adult only. (And a copy will be sent to me =D)
StingyJack
@StingyJack or implement a list of users to send it all =)
Seiti
+14  A: 

We can't even write filters that detect dirty words accurately in blog posts, and your boss is asking for a porno detector? CLBUTTIC!

Tim Howland
I know it's not easy, but I am sure large dating sites such as match.com use some kind of detection. And there will be a second level of human editors to check for false positives.
Craig
It's all good until they try automatically drawing clothes on the pics, which is what screws most people up.
Chris Lively
I think you are buttuming that the same algorithm is used for pictures and words. People like you should be buttbuttinated (which strangely sounds worse than the original word; reminds me of the death by bongo-bongo joke :-)).
Tim Ring
A: 

I'm afraid I can't help point you in the right direction, but I do remember reading about this being done before. It was in the context of people complaining about baby pictures being caught and flagged mistakenly. If nothing else, I can give you the hope that you don't have to reinvent the wheel all by yourself... someone else has been down this road!

Brian Knoblauch
IIRC, the solution there was to disallow baby pictures completely.
Joel Coehoorn
+33  A: 

Your best bet is to deal with the image in the HSV colour space (see here for RGB-HSV conversion). The colour of skin is pretty much the same between all races; it's mostly the saturation that changes. By dealing with the image in HSV you can simply search for the colour of skin.

You might do this by simply counting the number of pixels within a colour range, or you could perform region growing around pixels to calculate the size of the contiguous skin-coloured areas (a sketch of that follows the code below).

Edit: for dealing with grainy images, you might want to apply a median filter first, and then reduce the number of colours to segment the image. You will have to play around with the settings on a large set of pre-classified (adult or not) images and see how the values behave to get a satisfactory level of detection.

EDIT: Here's some code that should do a simple count (not tested; it's a quick mashup of some code from here and RGB to HSL here)

// Note: requires an unsafe context for the raw pointer access.
Bitmap bmp = new Bitmap(_image);
BitmapData bData = bmp.LockBits(new Rectangle(0, 0, _image.Width, _image.Height), ImageLockMode.ReadOnly, bmp.PixelFormat);
int bitsPerPixel = Image.GetPixelFormatSize(bData.PixelFormat);
byte* scan0 = (byte*)bData.Scan0.ToPointer();

int count = 0;

for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        // Pixels are laid out in BGR(A) byte order.
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;

        byte r = data[2];
        byte g = data[1];
        byte blue = data[0];

        int max = Math.Max(r, Math.Max(g, blue));
        int min = Math.Min(r, Math.Min(g, blue));

        int h = 0; // greyscale pixels (max == min) keep hue 0

        // Standard RGB-to-hue conversion. The float cast matters:
        // with integer division the ratios would truncate to zero.
        if (max != min)
        {
            if (max == r)
                h = ((int)(60f * (g - blue) / (max - min)) + 360) % 360;
            else if (max == g)
                h = (int)(60f * (blue - r) / (max - min)) + 120;
            else // max == blue
                h = (int)(60f * (r - g) / (max - min)) + 240;
        }

        if (h > _lowerThresh && h < _upperThresh)
            count++;
    }
}
bmp.UnlockBits(bData);
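
For the region-growing variant mentioned above, here is an untested sketch of the idea: build a bool mask from the per-pixel hue test, then flood-fill connected skin pixels and measure each region. The skinMask input and the helper itself are illustrative, not part of the code above.

using System.Collections.Generic;
using System.Drawing;

// Untested sketch: flood-fill connected skin pixels (4-connectivity)
// and return the size of each region. "skinMask" is assumed to have
// been filled in by the per-pixel hue test above.
static List<int> SkinRegionSizes(bool[,] skinMask)
{
    int height = skinMask.GetLength(0), width = skinMask.GetLength(1);
    bool[,] visited = new bool[height, width];
    List<int> sizes = new List<int>();
    Queue<Point> queue = new Queue<Point>();

    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        if (!skinMask[y, x] || visited[y, x]) continue;

        int size = 0;
        visited[y, x] = true;
        queue.Enqueue(new Point(x, y));

        while (queue.Count > 0)
        {
            Point p = queue.Dequeue();
            size++;
            // Visit the four direct neighbours.
            foreach (Point n in new[] { new Point(p.X - 1, p.Y), new Point(p.X + 1, p.Y),
                                        new Point(p.X, p.Y - 1), new Point(p.X, p.Y + 1) })
            {
                if (n.X < 0 || n.X >= width || n.Y < 0 || n.Y >= height) continue;
                if (!skinMask[n.Y, n.X] || visited[n.Y, n.X]) continue;
                visited[n.Y, n.X] = true;
                queue.Enqueue(n);
            }
        }
        sizes.Add(size);
    }
    return sizes;
}

Flagging on the largest region's share of the image tends to be more robust to speckle than a raw pixel count, especially after the median prefilter suggested in the first edit.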
Andrew Bullock
+1 because you sound like you know what you're doing.
Chris Lively
Very important, of course, would be to make sure the editors are quick to review suspect images, because you're probably going to get lots of false positives.
Kip
+1  A: 

Perhaps the Porn Breath Test would be helpful - as reported on Slashdot.

CodeSlave
+10  A: 

I would say your answer lies in crowdsourcing the task. This almost always works and tends to scale very well.

It doesn't have to involve making some users into "admins" and coming up with different permissions - it can be as simple as enabling an "inappropriate" link near each image and keeping a count.
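
As a rough illustration of the counting side (all names here are invented for the example; a real site would persist the counts in the database), one handler is enough:

using System.Collections.Generic;

// Hypothetical flag counter: hide nothing automatically, just alert
// the editors once an image collects enough "inappropriate" clicks.
static readonly Dictionary<int, int> FlagCounts = new Dictionary<int, int>();
const int ReviewThreshold = 3;

public static void FlagImage(int imageId)
{
    lock (FlagCounts)
    {
        int flags;
        FlagCounts.TryGetValue(imageId, out flags);
        FlagCounts[imageId] = ++flags;

        if (flags == ReviewThreshold)
            QueueForEditorReview(imageId);
    }
}

// Hypothetical hand-off to whatever review queue the editors use.
static void QueueForEditorReview(int imageId) { }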

conny
We will go down that route as well I think.
Craig
Or outsource it to Mechanical Turk
John Sheehan
There's a userfriendly cartoon on this: http://ars.userfriendly.org/cartoons/?id=20081210
ConcernedOfTunbridgeWells
+4  A: 

Interesting question from a theoretical/algorithmic standpoint. One approach to the problem would be to flag images that contain large skin-coloured regions (as explained by Trull).

However, the amount of skin shown is not the determinant of an offensive image; it's rather the location of the skin shown. Perhaps you can use face detection (search for algorithms) to refine the results -- determine how large the skin regions are relative to the face, and whether they belong to the face (perhaps how far below it they are).
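
To sketch that refinement (detector-agnostic: the face rectangles are assumed to come from whatever face-detection library you choose, and the skin mask from a test like Trull's):

using System.Collections.Generic;
using System.Drawing;

// Illustrative scoring: how much of the detected skin lies outside
// the detected faces? Values near 1.0 mean large skin areas that are
// not accounted for by faces, which is the suspicious case.
static double SkinOutsideFacesRatio(bool[,] skinMask, IEnumerable<Rectangle> faces)
{
    int height = skinMask.GetLength(0), width = skinMask.GetLength(1);
    int skinTotal = 0, skinInFaces = 0;

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            if (!skinMask[y, x]) continue;
            skinTotal++;
            foreach (Rectangle face in faces)
                if (face.Contains(x, y)) { skinInFaces++; break; }
        }

    return skinTotal == 0 ? 0.0 : (skinTotal - skinInFaces) / (double)skinTotal;
}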

dbkk
Very good suggestion. It's easy enough to actually implement and would probably work pretty well.
kigurai
+3  A: 

I know either Flickr or Picasa has implemented this. I believe the routine was called FleshFinder.

A tip on the architecture of doing this:

Run this as a Windows service separate from the ASP.NET pipeline. Instead of analyzing images in real time, create a queue of newly uploaded images for the service to work through.

You can use the normal System.Drawing stuff if you want, but if you really need to process a lot of images, it would be better to use native code and a high-performance graphics library, and P/Invoke the routine from your service.

As resources are available, process images in the background and flag suspicious ones for the editors' review. This should prune down the number of images to review significantly, while not annoying people who upload pictures of skin-colored houses.
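
A minimal sketch of that worker loop (the queue, analysis, and flagging routines are placeholders for the example; a production service would drain a database table or MSMQ rather than an in-process queue):

using System.Collections.Generic;
using System.Threading;

// Illustrative service loop: uploads enqueue file paths, the service
// drains them in the background and flags suspects for the editors.
static readonly Queue<string> PendingImages = new Queue<string>();

static void WorkerLoop()
{
    while (true)
    {
        string path = null;
        lock (PendingImages)
            if (PendingImages.Count > 0)
                path = PendingImages.Dequeue();

        if (path == null)
        {
            Thread.Sleep(1000); // idle until new uploads arrive
            continue;
        }

        if (LooksSuspicious(path))      // e.g. the skin-pixel count above
            FlagForEditorReview(path);
    }
}

// Placeholders: run the analysis, and mark the image in the database.
static bool LooksSuspicious(string path) { return false; }
static void FlagForEditorReview(string path) { }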

FlySwat
+3  A: 

I would approach the problem from a statistical standpoint. Get a bunch of pictures that you consider safe, and a bunch that you don't (that will make for a fun day of research), and see what they have in common. Analyze them all for color range and saturation to see if you can pick out characteristics that all of the naughty photos, and few of the safe ones, have.
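
A quick sketch of what that analysis could look like (assuming a hypothetical SkinFraction helper wrapping something like the hue counter from the earlier answer, plus two folders of pre-labelled images):

using System;
using System.IO;
using System.Linq;

// Untested sketch: average the skin-pixel fraction over two labelled
// sets to eyeball where a classification threshold might sit.
static void CompareSets(string safeDir, string naughtyDir)
{
    double safeAvg = Directory.GetFiles(safeDir).Average(f => SkinFraction(f));
    double naughtyAvg = Directory.GetFiles(naughtyDir).Average(f => SkinFraction(f));

    Console.WriteLine("safe set, mean skin fraction:    {0:F3}", safeAvg);
    Console.WriteLine("naughty set, mean skin fraction: {0:F3}", naughtyAvg);
    // Start with a cut-off between the two means, then tune it against
    // the false-positive rate the editors can live with.
}

// Hypothetical: fraction of pixels whose hue falls in the skin range.
static double SkinFraction(string path) { return 0.0; }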

Bill the Lizard
This is an interesting point. I have heard people from Google say before that given enough data, anything can be solved using statistics. Algorithms are not always required. For example, the spell check on Google.com is statistics-driven, not a spell-check algorithm.
Craig
This is kind of what I was getting at, just from the other approach. This is probably the starting point for what I suggested. Do a load of analysis first to give you some starting-off points for the suggested thresholds in your detector.
Andrew Bullock
I'm actually quite interested in this. If you can send me a fairly decent-sized set of test images, I'd have a play for you - you can happily have the code. I might SourceForge it as a library if it's any good.
Andrew Bullock
@Trull: You could probably sift through SO gravatars for images that are in the safe category. The internet is full of test images in the "naughty" category. :)
Bill the Lizard
+34  A: 

Of course, this will fail for the first user who posts a close-up of someone's face (or hand, or foot, or whatnot). Ultimately, all these forms of automated censorship will fail until there's a real paradigm-shift in the way computers do object recognition.

I'm not saying that you shouldn't attempt it nonetheless; but I want to point out these problems. Do not expect a perfect (or even good) solution. It doesn't exist.

Konrad Rudolph
haha yeah that's another case
hmak
A: 

Rigan Ap-apid presented a paper at WorldComp '08 on just this problem space. The paper is allegedly here, but the server was timing out for me. I attended the presentation of the paper and he covered comparable systems and their effectiveness as well as his own approach. You might contact him directly.

plinth
Try this link: http://www.math.admu.edu.ph/~raf/pcsc05/proceedings/AI4.pdf
Rasmus Faber
Ah, that might actually be another paper by Rigan, but it might be helpful anyway.
Rasmus Faber
A: 

CrowdSifter by Dolores Labs might do the trick for you. I read their blog all the time, as they seem to love statistics and crowdsourcing and like to talk about it. They use Amazon's Mechanical Turk for a lot of their processing and know how to process the results to get the right answers out of things. Check out their blog at the very least to see some cool statistical experiments.

wizard
+3  A: 

See the seminal paper "Finding Naked People" by Fleck/Forsyth published in ECCV. (Advanced).

http://www.cs.hmc.edu/~fleck/naked.html

graveca