views:

515

answers:

1

I've always been interested in writing web software like forums or blogs: things that take a limited markup and rewrite it into HTML. But lately I've noticed more and more that, for PHP at least, you either get an inefficient mess or poor code with XSS holes here and there; try googling "PHP BBCode parser -PEAR" and test a few out.

Taking my previously mentioned example of the poor BBCode parsers out there, how would you avoid XSS? Below is a typical regular expression for handling a link; you can point out how vulnerable it is and how to avoid that.

// Assume input has already been encoded by htmlspecialchars with ENT_QUOTES
$text = preg_replace('#\[url\](.*?)\[/url\]#i','<a href="\1">\1</a>', $text);
$text = preg_replace('#\[url=(.*?)\](.*?)\[/url\]#i','<a href="\1">\2</a>', $text);

Handling image tags is hardly more secure than this.

So I have several specific questions, mostly specific to PHP implementations.

  1. Is it better practice, in this example, to match only with a URI/URL validation expression? Or is it better to use (.*?) and a callback, then ascertain whether or not the input is a valid link? As is obvious above, javascript:alert('XSS!') would work in the URL tags shown, but would fail if URI matching were done.
  2. What about functions like urlencode() within a callback, would they be any deterrence or problem (as far as URI standards go)?
  3. Would it be safer to write a full-stack parser? Or, is the time and processing power needed to develop and use such a thing too weighty for something handling several different entries per page?

I know my example is one of many, and more specific than some; don't shirk from providing your own. In short, I'm looking for principles, best practices, and general recommendations for XSS protection in a text-parsing situation.

+4  A: 

test a few out, you either get an inefficient mess, or you get poor code with XSS holes

Hell yeah. I've not met a bbcode implementation yet that wasn't XSS-vulnerable.

'<a href="\1">\1</a>'

No good: fails to HTML-escape ‘<’, ‘&’ and ‘"’ characters.

Is it better practice, in this example, to only match using a uri/url validation expression? Or, is it better to use (.*?) and a callback, then ascertain whether or not the input is a valid link?

I would take the callback. You need the callback anyway to do the HTML-escaping; it's not possible to be secure with only simple string replacement. Drop the sanitisation in whilst you're doing it.
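A minimal sketch of that callback approach (render_url_tag is an illustrative name, not from the question, and this assumes raw, not-yet-escaped input): the callback validates the URL scheme and HTML-escapes everything it emits, so neither the href nor the link text can break out of the markup.

```php
<?php
// Illustrative callback: validate the scheme, then escape on output.
function render_url_tag(array $m): string {
    $url  = $m[1];
    $text = isset($m[2]) ? $m[2] : $m[1];
    // Only allow plain http/https/ftp links; this rejects javascript:
    // and data: URLs outright.
    if (!preg_match('#^(?:https?|ftp)://#i', $url)) {
        return htmlspecialchars($text, ENT_QUOTES); // drop the tag, keep the text
    }
    return '<a href="' . htmlspecialchars($url, ENT_QUOTES) . '">'
         . htmlspecialchars($text, ENT_QUOTES) . '</a>';
}

$text = '[url]javascript:alert(\'XSS!\')[/url] and [url=http://www.example.com/]a link[/url]';
$text = preg_replace_callback('#\[url\](.*?)\[/url\]#i', 'render_url_tag', $text);
$text = preg_replace_callback('#\[url=(.*?)\](.*?)\[/url\]#i', 'render_url_tag', $text);
```

The same callback serves both tag forms, since $m[2] is only present for the [url=...] variant.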

What about functions like urlencode() within a callback

Nearly; actually you need htmlspecialchars(). urlencode() is about encoding query parameters, which isn't what you need here.
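To illustrate the distinction: urlencode() makes a value safe to embed *inside a URL*, while htmlspecialchars() makes text safe to emit *into HTML* — they are not interchangeable.

```php
<?php
// Two different escaping problems, two different functions.
$q    = urlencode('a&b');                           // for query-string values
$html = htmlspecialchars('"><script>', ENT_QUOTES); // for HTML output
```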

Would it be safer to write a full-stack parser?

Yes.

bbcode is not really amenable to regex parsing, because it's a recursive tag-based language (like XML, which regex also cannot parse). Many bbcode holes are caused by nesting and misnesting problems. For example:

[url]http://www.example.com/[i][/url]foo[/i]

Could come out as something like

<a href="http://www.example.com/<i>">foo</i>

There are many other traps that generate broken code (up to and including XSS holes) in various bbcode implementations.

I'm looking for principles and best practices

If you need a bbcode-like language that you can regex, you need to:

  • reduce the number of possible tags that can be put inside other tags. Arbitrary nesting isn't really possible to support
  • use special characters for ‘<’ and ‘>’ HTML tag delimiters, to distinguish them from real angle brackets that should appear as such in the text. I use ASCII control codes (having previously filtered any control characters out at the user input stage).
  • split the string being processed on these control characters, and apply bbcode replacement only to the content between them, so that you never end up letting a bbcode span reach inside a tag or over a tag boundary.
  • because you can't have bbcode spans reaching through tag boundaries, work from the outside in: do large block elements first, then work inwards to links and finally bold and italic.
  • for sanity, process a block at a time. e.g. if you're starting a new <p> on a double newline, no bbcode tags can span between the two separate blocks.
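The last point can be sketched very simply (bbcode_to_html here is a hypothetical stand-in for whatever inline replacement pass you use): split on blank lines first, so no bbcode span can cross a paragraph boundary.

```php
<?php
// Stand-in inline pass (assumption): escape HTML first, then handle bold only.
function bbcode_to_html(string $s): string {
    $s = htmlspecialchars($s, ENT_QUOTES);
    return preg_replace('#\[b\](.*?)\[/b\]#i', '<strong>$1</strong>', $s);
}

// Split into blocks on double newlines, then process each block in isolation.
function render_post(string $raw): string {
    $blocks = preg_split('/\n{2,}/', trim($raw));
    $out = array();
    foreach ($blocks as $block) {
        $out[] = '<p>' . bbcode_to_html($block) . '</p>';
    }
    return implode("\n", $out);
}
```

A tag opened in one paragraph and closed in the next simply fails to match in either block, so it comes out as literal (escaped) text instead of malformed HTML.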

It's still damned hard to get right. A proper parser is much more likely to be watertight.

bobince
Hmm, I do agree with what you said, but I haven't got much skill in writing a proper parser. Know of any decent tutorials for XML-esque parsing? I've found it difficult to find a good one that isn't overly complicated yet is still at the necessary skill level.
The Wicked Flea
If you can't find a third-party parser library that satisfies your needs, you could do it by hand: first preg_split-with-PREG_SPLIT_DELIM_CAPTURE over the string with something like ‘\[[^\]]+\]’ to pick out the tags, then walk through the list keeping a stack of opened tags.
bobince
(Even-numbered indexes in the list would be text, odd-numbered ones tags. Normally text would get HTML-escaped, and maybe have smileys autoreplaced if you're doing that, but some tags might change that.)
bobince
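A rough sketch of the tokenise-and-stack approach bobince describes (the tag whitelist here is an assumption for illustration): even-numbered indexes are text and get escaped, odd-numbered indexes are tags checked against a stack, so misnested closes degrade to literal text rather than broken HTML.

```php
<?php
// Hand-rolled bbcode pass: preg_split with PREG_SPLIT_DELIM_CAPTURE, then
// walk the pieces keeping a stack of opened tags.
function parse_bbcode(string $input): string {
    $allowed = array('b' => 'strong', 'i' => 'em'); // illustrative whitelist
    $parts = preg_split('/(\[[^\]]+\])/', $input, -1, PREG_SPLIT_DELIM_CAPTURE);
    $stack = array();
    $out = '';
    foreach ($parts as $i => $part) {
        if ($i % 2 === 0) {                      // text: always escape
            $out .= htmlspecialchars($part, ENT_QUOTES);
            continue;
        }
        $name = strtolower(trim($part, '[]/'));
        if (!isset($allowed[$name])) {           // unknown tag: treat as text
            $out .= htmlspecialchars($part, ENT_QUOTES);
        } elseif ($part[1] !== '/') {            // opening tag
            $stack[] = $name;
            $out .= '<' . $allowed[$name] . '>';
        } elseif (end($stack) === $name) {       // properly nested close
            array_pop($stack);
            $out .= '</' . $allowed[$name] . '>';
        } else {                                 // misnested close: emit as text
            $out .= htmlspecialchars($part, ENT_QUOTES);
        }
    }
    while ($stack) {                             // close anything left open
        $out .= '</' . $allowed[array_pop($stack)] . '>';
    }
    return $out;
}
```

Because every close is checked against the stack and every leftover open is closed at the end, the output HTML stays well-formed even on the [b]...[i]...[/b]...[/i] misnesting cases discussed above.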