views: 126

answers: 3

I recently started looking into building web applications using .NET MVC and I stumbled upon this blog post by Phil Haack: JSON Hijacking. For those of you who aren't aware of this vulnerability when using JSON to transfer sensitive data, it's really a must-read.

It seems that there are three ways to handle this vulnerability.

  1. Require a POST instead of GET in your JSON service.
  2. Wrap your JSON array responses in a JSON object.
  3. Don't expose sensitive data in any service that isn't protected by 1 or 2.

The third alternative isn't really an option, since it severely limits the usefulness of JSON.

So which one of the other two do you prefer?

The .NET MVC 2 preview requires a POST for JSON responses by default. I think this is a great way to protect any developer who doesn't know about this problem yet, but to me it feels a little "hacky" to break REST in this way. Unless someone talks me out of it, I'm sticking to wrapping my arrays in another object and unwrapping them client side.
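To make that concrete, here is a minimal sketch of the wrap-and-unwrap approach (the wrapper property name "d" and the sample records are purely illustrative). A top-level object literal is a syntax error when pulled in via a script tag, whereas a bare array literal parses and runs, which is what makes the unwrapped form a target:

    // Wrapped response as the server might send it; "d" is an arbitrary property name.
    var responseText = '{"d": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}';

    // Client side: parse the response and unwrap the array.
    var wrapped = JSON.parse(responseText);
    var items = wrapped.d;    // the array we actually wanted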

+3  A: 

I personally wrap all my responses in a comment:

/* {
    "foo": 3,
    "bar": "string with *\x2F sequence in"
} */

and strip that off before calling JSON.parse. This makes it useless as a target for script tags.
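A minimal client-side sketch of that strip-and-parse step (assuming the response text has already been fetched) could look like this; note that the */ inside the string value has to be escaped so it can't terminate the comment wrapper early:

    // Response as served: the JSON payload wrapped in a block comment.
    var responseText = '/* {"foo": 3, "bar": "string with *\\/ sequence in"} */';

    // Strip the leading /* and trailing */ before parsing.
    var json = responseText.replace(/^\s*\/\*/, '').replace(/\*\/\s*$/, '');
    var data = JSON.parse(json);    // {foo: 3, bar: "string with */ sequence in"}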

It's worth noting that this problem is not limited to JSON: it applies to any HTTP response you serve that could be interpreted as JavaScript. Even, say, a .htaccess-protected text file can leak through third-party script-tag inclusion if it happens to be in a format that is valid JavaScript.

And here's the crunch: thanks to E4X, even normal, static XML documents are also valid JavaScript. E4X is a disastrous and useless extension to JavaScript, invented and implemented at Mozilla, which allows you to write <element>content</element> XML literals inline in JS; as such, a protected XML file is now vulnerable to the same cross-site leakage risks as JSON. Thank you, Mozilla. See the Google Doctype article on this.
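For what it's worth, a tiny illustration of why that matters; the XML literal syntax below only parses in engines that actually implement E4X (the Firefox/SpiderMonkey of that era, or Rhino), which is exactly the problem:

    // Under E4X, the body of a plain XML file is a legal JavaScript expression,
    // so pulling it in via a script src tag parses it instead of rejecting it.
    // (The actual data leak also needed the prototype tricks mentioned in the
    // comments below; on its own this just builds and discards an XML object.)
    var doc = <accounts><account owner="alice" balance="1234.56"/></accounts>;
    // In an E4X-capable engine, doc.account.@owner evaluates to the attribute "alice".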

bobince
The E4X article you mention talks about security risks in the opposite direction: that *consumers* of E4X data must be wary of executing it without parsing. If you *produce* E4X data, what's the worry?
Jason S
The page addresses injection in both directions. Previously (and still in older Firefoxen) you could alter the XML prototype to get access to newly-created-by-external-script E4X objects, and there are still potential dangers for JavaScript content nested in E4X to invoke attacker callbacks. This is especially the case for pages that include attacker-supplied content, even suitably escaped.
bobince
A: 

Since this is basically a CSRF attack, you could put a token (e.g. a hash of the session id and a secret) in each of your JSON calls and check the validity of that token on the server. That is the same thing you should do for regular POST requests anyway.
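A rough sketch of the calling side of that idea (the variable and parameter names are made up for illustration; the answer doesn't prescribe any): the server renders a per-session token into the page, the client sends it back with every JSON request, and the server refuses to emit the JSON when the token is missing or wrong, so a third-party script tag can't construct a valid URL:

    // In reality this value would be rendered into the page by the server,
    // e.g. a hash of the session id and a secret, as the answer suggests.
    var requestToken = 'a3f9c0de';   // placeholder value

    function getJson(url, onSuccess) {
        var xhr = new XMLHttpRequest();
        var sep = url.indexOf('?') === -1 ? '?' : '&';
        // Attach the token; the server recomputes it and compares before
        // returning any sensitive JSON.
        xhr.open('GET', url + sep + 'token=' + encodeURIComponent(requestToken));
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                onSuccess(JSON.parse(xhr.responseText));
            }
        };
        xhr.send(null);
    }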

stefanw
This only prevents posting data impersonating someone else. It doesn't stop the bad guy from stealing sensitive information that's served by a GET JSON service.
Oscar Kilhed
The bad guy needs to know the token, but I don't think that there is a way to get it. Please prove me wrong.
stefanw
A: 

This is a classic CSRF (cross-site request forgery).

Here is a solution:

http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
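As a hedged sketch of the client-side half of that approach (the URL and extra form data are placeholders, not from the linked post): the view renders <%= Html.AntiForgeryToken() %>, which emits a hidden input named __RequestVerificationToken, and an Ajax POST forwards that value so an action decorated with [ValidateAntiForgeryToken] will accept it:

    // Grab the hidden field emitted by Html.AntiForgeryToken().
    var tokenField = document.getElementsByName('__RequestVerificationToken')[0];

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/Account/Transfer');   // example URL only
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('__RequestVerificationToken=' + encodeURIComponent(tokenField.value) +
             '&amount=100');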

DaMacc