Recently a client was concerned that their SWF was "insecure" because the XML path was coming from Flashvars. This doesn't seem like a real concern to me, since the SWF only displays images, text, and a few button links. I can understand how someone could point to the SWF and pass in a remote XML path to inject JavaScript into the button URL targets, but really, what damage could that do?
E.g. they could change

http://mysite.com/theflash.swf?xmlpath=xml/thedata.xml

to this

http://mysite.com/theflash.swf?xmlpath=http://dodgysite.com/thechangeddata.xml

Obviously they could build a fake wrapper HTML file around this, but I still don't see how they could do anything harmful with it. Am I missing something?

My next question is: what is the best way to prevent this from happening?
So far, my XSS-checking class does the following (sketched in code after the list):

  • unescape the string and strip any spaces or line breaks (\t, \n, \r)
  • check the string for any of the following: asfunction:, javascript:, event:, vbscript:
  • determine whether the path is absolute or relative by looking for http or https
  • if absolute, check that the domain is the same as the main movie's
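
Roughly, in ActionScript 3, the check looks something like this (a simplified sketch; the function name and details are illustrative):

    // Sketch of the checks above; the function name is illustrative.
    function isPathSafe(raw:String, ownDomain:String):Boolean {
        // 1. Unescape and strip any whitespace/line breaks.
        var s:String = unescape(raw).replace(/\s/g, "").toLowerCase();

        // 2. Reject dangerous pseudo-protocols.
        var banned:Array = ["asfunction:", "javascript:", "event:", "vbscript:"];
        for each (var proto:String in banned) {
            if (s.indexOf(proto) != -1) return false;
        }

        // 3 & 4. If the path is absolute, the host (including any port)
        // must match the main movie's domain.
        if (s.indexOf("http://") == 0 || s.indexOf("https://") == 0) {
            var m:Array = /^https?:\/\/([^\/?#]*)/.exec(s);
            if (m == null || m[1] != ownDomain.toLowerCase()) return false;
        }
        return true;
    }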

Most of this process I found in this article: http://www.adobe.com/devnet/flashplayer/articles/secure_swf_apps_02.html

Is there a better way than this?
What else could be done to prevent XSS in Flash?

+2  A: 

I think you've already done a good job!

This may not always be possible, but you could also validate the structure of the data you are receiving.

For example: if the XML contains paths to images, you could verify that the file names end in .jpg/.png and that the files are loaded from the right directory.
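
Something along these lines, say (a rough sketch; the directory and extension rules are just examples):

    // Sketch: accept only .jpg/.png paths under the expected directory.
    function isImagePathValid(path:String):Boolean {
        var p:String = path.toLowerCase();
        var okExtension:Boolean = /\.(jpg|png)$/.test(p);
        var okDirectory:Boolean = (p.indexOf("images/") == 0 && p.indexOf("..") == -1);
        return okExtension && okDirectory;
    }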

jdecuyper
Why the downvote?
jdecuyper
Blacklisting is not a good solution, because it's difficult to ensure that you've covered all the cases.
tc.
Hi tc! If you know what kind of data structure you are expecting, you can run some basic tests on it to validate its authenticity before using it. By your own argument, this is actually whitelisting, because the data structure is tested against a list of properties it must have, not the contrary.
jdecuyper
Thanks for your comments. Your data validation is a good idea.
danjp
+1  A: 

Blacklisting is a terrible solution. The implicit assumption is that "I'll be able to catch all attacks if I look for these substrings"; it's often wrong:

  1. You add an "upload" facility to your site (wiki/bug tracker/whatever), which sticks uploaded files in /userUploads/. There are lots of security problems with this, but let's say that you manage to filter out "unsafe" files (HTML containing JavaScript, etc). Fine.
  2. The attacker uploads an XML file. Your upload script thinks it's "safe" because it's not HTML and doesn't contain <script> tags.
  3. The attacker sends someone to http://example.com/theflash.swf?xmlpath=../../../../userUploads/innocent.xml.

Ultimately, you're trying to figure out how a URL parser will treat the string by looking for a few substrings. It's much more effective to stick it through a URL parser and extract the relevant semantics yourself.
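
ActionScript 3 has no built-in URL parser, but even a minimal one that extracts the scheme and host (sketched below; names illustrative) lets you decide on parsed semantics rather than substring searches:

    // Sketch: parse out the scheme and host and decide on those.
    function parseURL(url:String):Object {
        var m:Array = /^([a-z][a-z0-9+.\-]*):\/\/([^\/?#]*)/i.exec(url);
        if (m == null) return null; // relative or unparseable
        return { scheme: m[1].toLowerCase(), host: m[2].toLowerCase() };
    }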

I think a potentially safe option is to ensure that the path starts with "xml/" and doesn't contain "/../", but it's still a terrible "solution".

A better option is a whitelist: the filename can only contain [a-z0-9_-], and you generate the path as "xml/$filename.xml". This works provided you don't leave a file like "test.xml" lying around in that directory.
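
As a sketch (the function name is illustrative):

    // Whitelist sketch: accept only [a-z0-9_-] and build the path yourself.
    function resolveXMLPath(name:String):String {
        if (!/^[a-z0-9_-]+$/.test(name)) return null; // reject everything else
        return "xml/" + name + ".xml";
    }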

An even better option is to maintain a mapping from names to paths: e.g. "data" maps to "xml/data.xml", but "exploit" has no mapping, so it returns an error. This means you can't add files as easily, but it also means the user cannot specify arbitrary paths.
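
Sketched roughly (the table entries are hypothetical):

    // Mapping sketch: only names listed here resolve to a path at all.
    var xmlPaths:Object = { data: "xml/data.xml" }; // hypothetical entries
    function lookupXMLPath(name:String):String {
        return xmlPaths.hasOwnProperty(name) ? String(xmlPaths[name]) : null;
    }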

EDIT: Security problems like this arise because of unexpected interactions between different parts of the system ("all files on the filesystem can be trusted") or incorrect assumptions ("URL resolution will give a URL under the same 'directory'", "concatenating paths can't navigate up the directory hierarchy", "all filenames are normal", "checking whether a directory exists can't create it"). I've given an example; no doubt there are others.

If you need the config to differ per deployment, then ... use a config! foo.swf could fetch config.xml, which contains a list of allowed paths. Better still, have config.xml provide a mapping from page names to XML paths.
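
For instance (a sketch, assuming a made-up config format like <config><page name="home" xml="xml/home.xml"/></config>):

    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;

    // Sketch: fetch config.xml and build the page-name-to-path map from it.
    var pathMap:Object = {};
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onConfig);
    loader.load(new URLRequest("config.xml"));

    function onConfig(e:Event):void {
        var cfg:XML = new XML(URLLoader(e.target).data);
        for each (var page:XML in cfg.page) {
            pathMap[String(page.@name)] = String(page.@xml);
        }
    }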

In general, exposing implementation details like "all paths happen to match xml/.*\.xml" is icky, a layering violation, and looks a lot like bad security.

tc.
Thanks for your input. I can see the problems with blacklisting; however, I'm not sure there are many other options. A combination of (limited) validation and white/blacklisting seems to be the best option. I'm not concerned with uploading, just with protecting any deployed SWF sites. The reason the XML path must come from flashvars is that these builds are deployed in different countries and the site structure cannot be guaranteed, so checking for "xml/" or "../" isn't an option, as it would require allowing both (and more) possibilities. What are the dangers of an XSS "attack" for a site like this?
danjp