views:

39

answers:

2

I just ran across a post that references a security vulnerability in web apps: by looking at the sizes of encrypted web pages, an eavesdropper can deduce what the user is doing. The simplest solution I can think of would be to use a tool to minify all static content so that (after encryption) only a small number of result sizes exist, minimizing the information available to an eavesdropper.

Are there any tools for doing this?

+1  A: 

No, I don't know of a tool to prevent this attack. That's because it is a very limited attack that isn't common in the real world; many crypto attacks are completely useless in practice.

To prevent this attack, the server can add random padding to the message. In the case of asynchronous requests you could add junk XML or JSON elements; in other cases you could add HTML or JavaScript comments. This is trivial to implement, and I don't think it warrants a "tool".
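As a minimal sketch of that idea (the `_pad` key and `pad_json_response` name are my own invention, not any standard), a server could tack a random-length junk field onto every JSON response:

```python
import json
import secrets
import string

def pad_json_response(payload: dict, max_pad: int = 256) -> bytes:
    """Append a random-length junk field so the encrypted size varies.

    The "_pad" key is an arbitrary choice; clients just ignore it.
    """
    pad_len = secrets.randbelow(max_pad + 1)
    padded = dict(payload, _pad="".join(
        secrets.choice(string.ascii_letters) for _ in range(pad_len)))
    return json.dumps(padded).encode("utf-8")

body = pad_json_response({"user": "alice", "items": 3})
```

Random padding only blurs the size signal; it doesn't eliminate it, since an eavesdropper observing many responses can still estimate the underlying distribution.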

Military networks defend against this very attack by using a constant stream of data. I think it's implemented either on the transport or the network layer; it would be tricky to pull that off at the application layer with HTTP. Also, bandwidth is much less important than military secrets, whereas that is probably not the case for your web app.

Rook
It would warrant a tool if you had 3k files, or if you wanted to adjust the file sizes regularly.
BCS
If you have a large number of files, you could add a random number of bogus HTTP headers to all responses. I'm not sure what platform you are running on, but PHP can do this.
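A sketch of that approach (a generic Python illustration rather than PHP; the `X-Padding-*` header names are made up for the example):

```python
import secrets

def add_bogus_headers(headers: dict, max_extra: int = 8) -> dict:
    """Return a copy of the headers plus 1..max_extra junk headers,
    so the total header size differs from response to response."""
    padded = dict(headers)
    for i in range(secrets.randbelow(max_extra) + 1):
        # "X-Padding-N" is an invented name; any header the client
        # ignores would work.
        padded[f"X-Padding-{i}"] = secrets.token_hex(secrets.randbelow(16) + 1)
    return padded

resp_headers = add_bogus_headers({"Content-Type": "text/html"})
```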
Rook
+1  A: 

The default way to "minify" static content would be to enable HTTP compression. It would reduce the number of result sizes a bit.

But consider: if you shrink the content to half its size, there will not necessarily be only half as many result sizes! That would only be the case if your original content used all the possible sizes. Let's say your original content comes in three different sizes: 10 kB, 12 kB and 14 kB.

If your compression shrinks each of them to half size, you'll still end up with three different sizes: 5 kB, 6 kB and 7 kB.
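The arithmetic above can be checked in two lines (assuming an idealized compressor that exactly halves every file, which real compressors of course don't):

```python
original = [10_000, 12_000, 14_000]            # page sizes in bytes
compressed = [size // 2 for size in original]  # idealized 2:1 compression

# The number of distinct sizes is unchanged: three before, three after.
assert len(set(compressed)) == len(set(original)) == 3
```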

Note: To make it clear (maybe it wasn't): this is advice *against* using minification/compression. See also my comment.

Chris Lercher
Good point on pure compression. OTOH, I was thinking of whitespace removal/addition. But when you mix the two, you run into another problem: getting a file to compress to exactly a given size by tweaking the uncompressed text might be really hard/annoying.
BCS
How does this make the size of responses less predictable?
Rook
@The Rook: That's my point: it doesn't - at least not much! That's why I'd advise *against* minification (unless you're satisfied with the minimal gain you get: when the content length is smaller, there are statistically fewer possible size variations). IOW, I would advise using padding - or, even better (though unrealistic), sending a constant stream, no matter whether you transfer real data or just dummy data.
Chris Lercher
Constant data isn't a bad idea. I know military networks do this to defend against this very attack; I think it's implemented either on the transport or the network layer. It would be tricky to pull that off at the application layer with HTTP.
Rook
One way to do it: use a fixed-size XMLHttpRequest that is sent repeatedly by a timer. A click on a link/button just sets a flag in that object to true. String parameters would have to be split to fit into a fixed-size buffer. More effort for sure, and slower response times for the user after clicking on an element that results in an HTTP request - but it wouldn't be *too* different from the way some web apps are written nowadays (command pattern, single Comet connection, ...). This could also be abstracted into an extra layer. The main problem IMO: it creates a lot more network traffic.
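The fixed-size-buffer part of that idea can be sketched in isolation (the 512-byte `BUF_SIZE` is an arbitrary choice): every message is split into equal chunks and the last one is NUL-padded, so each body put on the wire has exactly the same length.

```python
BUF_SIZE = 512  # fixed on-the-wire body size; the value is arbitrary

def frame(message: bytes) -> list:
    """Split a message into BUF_SIZE chunks, NUL-padding the last one,
    so every chunk sent over the wire is exactly BUF_SIZE bytes."""
    chunks = [message[i:i + BUF_SIZE]
              for i in range(0, len(message), BUF_SIZE)] or [b""]
    chunks[-1] = chunks[-1].ljust(BUF_SIZE, b"\0")
    return chunks
```

An idle timer tick would send `frame(b"")` - a single all-NUL chunk - so a real click is indistinguishable from dummy traffic on the wire.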
Chris Lercher