+11  A: 

Check these articles:

CMS
Thanks. The second link definitely covers the "well-formed URLs" low-hanging fruit.
jhs
+6  A: 

I'm not sure if detecting URLs with a regex is the right way to solve this problem. Usually you will miss some sort of obscure edge case that spammers will be able to exploit if they are motivated enough.

If your goal is just to filter spam out of comments then you might want to think about Bayesian filtering. It has proved to be very accurate in flagging email as spam, it might be able to do the same for you as well, depending on the volume of text you need to filter.
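
If you do go the Bayesian route, here is a rough sketch of the idea (plain Python, purely illustrative; the tokenizer and smoothing are my own assumptions, not a tuned filter):

import math
import re
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase alphanumeric runs.
    return re.findall(r'[a-z0-9]+', text.lower())

class BayesianCommentFilter:
    def __init__(self):
        self.word_counts = {'spam': Counter(), 'ham': Counter()}
        self.doc_counts = {'spam': 0, 'ham': 0}

    def train(self, text, label):
        self.word_counts[label].update(tokenize(text))
        self.doc_counts[label] += 1

    def spam_score(self, text):
        # Log-odds that the comment is spam; greater than 0 leans spam.
        score = math.log((self.doc_counts['spam'] + 1.0) / (self.doc_counts['ham'] + 1.0))
        spam_total = sum(self.word_counts['spam'].values()) + 1.0
        ham_total = sum(self.word_counts['ham'].values()) + 1.0
        for word in tokenize(text):
            p_spam = (self.word_counts['spam'][word] + 1.0) / spam_total
            p_ham = (self.word_counts['ham'][word] + 1.0) / ham_total
            score += math.log(p_spam / p_ham)
        return score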

Nathan Reed
+4  A: 

Well, obviously the low-hanging fruit is anything that starts with http:// or www. Trying to filter out things like "www . g mail . com" leads to interesting philosophical questions about how far you want to go. Do you want to take it the next step and filter out "www dot gee mail dot com" as well? How about abstract descriptions of a URL, like "the abbreviation for World Wide Web, followed by a dot, followed by the letter g, followed by the word mail, followed by a dot, concluded with the TLD abbreviation for commercial"?

It's important to decide what sorts of things you're going to try to filter before you start designing your algorithm. I think the line should be drawn so that "gmail.com" counts as a URL but "gmail. com" does not. Otherwise, you're likely to get false positives every time someone fails to capitalize the first letter of a sentence.

Benson
+6  A: 

I know this doesn't help with auto-linking text, but what if you searched for and replaced all full-stop periods with a character that looks the same, such as the Unicode character HEBREW POINT HIRIQ (U+05B4)?

The following paragraph is an example:

This might workִ The period looks a bit odd but it is still readableִ The benefit of course is that anyone copying and pasting wwwִgoogleִcom won't get too farִ :)
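
In code, the substitution is a one-liner (a minimal Python sketch, assuming you only need to touch the ASCII full stop):

def neutralize_periods(text):
    # Swap ASCII full stops for HEBREW POINT HIRIQ (U+05B4), which looks similar
    # but breaks copy-and-paste of any "URL" in the text.
    return text.replace('.', '\u05b4')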

Arnold Spence
That might not work for my specific case but that is easily the cleverest and most bang-for-the-buck answer so far!
jhs
Agreed. Very neat way of avoiding the problem. My <sub>·</sub> answer is really just a tweak on this.
Sharkey
+8  A: 

I'm concentrating my answer on trying to avoid spammers. This leads to two sub-assumptions: that the people using the system will be actively trying to contravene your check, and that your goal is only to detect the presence of a URL, not to extract the complete URL. This solution would look different if your goal were something else.

I think your best bet is going to be with the TLD. There are the two-letter ccTLDs and the (currently) comparatively small list of others. These need to be prefixed by a dot and suffixed by either a slash or some word boundary. As others have noted, this isn't going to be perfect. There's no way to catch "buyfunkypharmaceuticals . it" without disallowing the legitimate "I tried again. it doesn't work" or similar. All of that said, this would be my suggestion:

\.([a-zA-Z]{2}|aero|asia|biz|cat|com|coop|edu|gov|info|int|jobs|mil|mobi|museum|name|net|org|pro|tel|travel)(/|\b)

It will of course break as soon as people start obfuscating their URLs, replacing "." with " dot ". But, again assuming spammers are your goal here, if they start doing that sort of thing, their click-through rates are going to drop another couple of orders of magnitude toward zero. The set of people informed enough to deobfuscate a URL and the set of people uninformed enough to visit spam sites have, I think, a minuscule intersection. This solution should let you detect all URLs that are copy-and-pasteable to the address bar, whilst keeping collateral damage to a bare minimum.
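
As a rough illustration of how that check might be wired up (Python here; the pattern is just my reading of the regex above, and the sample strings are made up):

import re

TLD_PATTERN = re.compile(
    r'\.([a-zA-Z]{2}|aero|asia|biz|cat|com|coop|edu|gov|info|int|jobs|mil'
    r'|mobi|museum|name|net|org|pro|tel|travel)(/|\b)')

def contains_suspicious_url(text):
    return TLD_PATTERN.search(text) is not None

contains_suspicious_url("visit buyfunkypharmaceuticals.it today")  # True
contains_suspicious_url("I tried again. it doesn't work")          # False: the space saves it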

Jon Bright
The TLD is a good chokepoint to defend myself, thanks for your answer! I am thinking of combining it with capar's answer and substituting the dot with a "dot-looking" Unicode character. That way "...again. it doesn't work" would change unnoticeably but the URL would still not work even if somebody deletes the space. For the really obscure stuff maybe I can fall back on the "flag as inappropriate" feedback.
jhs
To follow up: The TLD is the Achilles heel for spam URLs. In my case (a paragraph or two of prose text where URLs are unwelcome), scanning for a TLD is a straightforward way to detect suspicious strings. From there, several of the great heuristics and techniques in other answers may apply. But since this answer is a good foundation for many of the others, I will select it as the accepted answer.
jhs
+1  A: 

Having made several attempts at writing this exact piece of code, I can say unequivocally, you won't be able to do this with absolute reliability, and you certainly won't be able to detect all of the URI forms allowed by the RFC. Fortunately, since you have a very limited set of URLs you're interested in, you can use any of the techniques above.

However, the other thing I can say with a great deal of certainty, is that if you really want to beat spammers, the best way to do that is to use JavaScript. Send a chunk of JavaScript that performs some calculation, and repeat the calculation on the server side. The JavaScript should copy the result of the calculation to a hidden field so that when the comment is submitted, the result of the calculation is submitted as well. Verify on the server side that the calculation is correct. The only way around this technique is for spammers to manually enter comments or for them to start running a JavaScript engine just for you. I used this technique to reduce the spam on my site from 100+/day to one or two per year. Now the only spam I ever get is entered by humans manually. It's weird to get on-topic spam.
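
A very rough sketch of the server side of that idea (Python; session and form are stand-ins for whatever your stack provides, and the inline JavaScript in the comment is only illustrative):

import random

def issue_challenge(session):
    # Stash two factors in the server-side session; the rendered page includes a
    # small script that multiplies them and writes the product into a hidden
    # field, e.g. <script>document.getElementById('proof').value = a * b;</script>
    a, b = random.randint(10, 99), random.randint(10, 99)
    session['expected_proof'] = str(a * b)
    return a, b

def comment_allowed(session, form):
    # Accept the comment only if the hidden field holds the expected product,
    # i.e. the client actually ran the JavaScript.
    return form.get('proof') == session.get('expected_proof')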

Bob Aman
That is a very interesting idea. I may use that (perhaps in a second phase after building the basic algorithm).
jhs
Link to an answer where I explained the concept more fully: http://stackoverflow.com/questions/8472/practical-non-image-based-captcha-approaches/1603989#1603989
Bob Aman
+1  A: 

Of course you realize that if spammers decide to use TinyURL or similar services to shorten their URLs, your problem just got worse. You might have to write some code to look up the actual URLs in that case, using a service like TinyURL decoder.
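
If it comes to that, expanding a shortened link is mostly a matter of following redirects. A hedged sketch in Python (urllib follows redirects by default):

import urllib.request

def resolve_redirects(url, timeout=5):
    # Return the final URL after any redirects (e.g. the target behind a tinyurl.com link).
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.geturl()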

Conrad
+3  A: 

Since you are primarily looking for invitations to copy and paste into a browser address bar, it might be worth taking a look at the code used in open source browsers (such as Chrome or Mozilla) to decide if the text entered into the "address bar equivalent" is a search query or a URL navigation attempt.

J c
That's pretty clever. Thanks.
jhs
+2  A: 

Ping the possible URL

If you don't mind a little server side computation, what about something like this?

urls = []
for possible_url in extracted_urls(comment):
    if pingable(possible_url):
        urls.append(possible_url)  # you could do this as a list comprehension, but OP may not know Python

Here:

  1. extracted_urls takes in a comment and uses a conservative regex to pull out possible candidates

  2. pingable actually uses a system call to determine whether the hostname exists on the web. You could have a simple wrapper parse the output of ping.

    [ramanujan:~/base]$ping -c 1 www.google.com

    PING www.l.google.com (74.125.19.147): 56 data bytes
    64 bytes from 74.125.19.147: icmp_seq=0 ttl=246 time=18.317 ms

    --- www.l.google.com ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 18.317/18.317/18.317/0.000 ms

    [ramanujan:~/base]$ping -c 1 fooalksdflajkd.com

    ping: cannot resolve fooalksdflajkd.com: Unknown host

The downside is that if the host gives a 404, you won't detect it, but this is a pretty good first cut -- the ultimate way to verify that an address is a website is to try to navigate to it. You could also try wget'ing that URL, but that's more heavyweight.
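
For completeness, here is one way those two helpers might be fleshed out (the regex and the ping wrapper are illustrative guesses, and the subprocess call assumes a Unix-style ping on the PATH):

import re
import subprocess

URL_CANDIDATE = re.compile(r'(?:https?://)?[\w-]+(?:\.[\w-]+)+', re.IGNORECASE)

def extracted_urls(comment):
    # Conservative first pass: anything shaped like host.name or scheme://host.name.
    return URL_CANDIDATE.findall(comment)

def pingable(possible_url):
    # Strip any scheme and path, then see whether the host answers a single ping.
    host = possible_url.split('://')[-1].split('/')[0]
    result = subprocess.run(['ping', '-c', '1', host], capture_output=True)
    return result.returncode == 0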

ramanujan
Excuse me, I most certainly *do* know Python! :) But anyway list comprehensions are completely... what's the word? Incomprehensible. (To non-Python programmers.)
jhs
Right. That's why I said "may not" :)
ramanujan
+1  A: 

Consider incorporating the OWASP AntiSAMY API...

jm04469
+1  A: 

A tweak on the replace-the-dot idea above: use a subscripted middot, <sub>·</sub>, in place of the "." so the text still reads naturally but the address no longer works when copied and pasted.

Sharkey
Subscripted middot. Genius! I'll want to test it but if it works on IE7, FF3, and Safari I'd say that's good enough. I'm thinking of mixing this with @Jon Bright's idea of only doing the substitution for fishy URLs (i.e. a dot followed by a valid TLD).
jhs
I've only tried it on FF3, let me know if it works! This might be a good technique for deranged mail clients which URLize or email-address-ize all sorts of stupid things.
Sharkey
.TLD I'm not so sure about, mostly because there are a fair few of them to check for, which would make one ugly regexp. Also don't forget that dotted quads (e.g. IP addresses) are valid URLs, kind of.
Sharkey
Yes, definitely there needs to be a multilayer defense-in-depth to really catch as much abuse as you can. The thing about TLDs is that even though there are many, there aren't *that many* and in my particular case (a 1 or 2 paragraph field of prose text) I can probably get away with an ugly regex. (Most useful regexes are ugly anyhow!)
jhs
+1  A: 

There are already some great answers in here, so I won't post more. I will give a couple of gotchas, though. First, make sure to test for known protocols; anything else may be naughty. As someone whose hobby concerns telnet links, you will probably want to include more than http(s) in your search, but you may want to block, say, aim: or some other URLs. Second, many people will delimit their links in angle brackets (gt/lt) like <http://theroughnecks.net> or in parens "(url)", and there's nothing worse than clicking a link and having the closing > or ) go along with the rest of the URL.
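
A small sketch of both gotchas (Python; the scheme list and helper names are just placeholders):

ALLOWED_SCHEMES = {'http', 'https', 'telnet'}  # whitelist the protocols you actually want

def scheme_allowed(url):
    # Block aim:, javascript:, and anything else not explicitly whitelisted.
    scheme = url.split(':', 1)[0].lower()
    return scheme in ALLOWED_SCHEMES

def trim_delimiters(url):
    # Drop a trailing '>' or ')' that belongs to the surrounding text, not the URL.
    return url.rstrip('>)')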

P.S. sorry for the self-referencing plugs ;)

Tracker1