views: 102
answers: 5

I've heard a bit about using automated theorem provers to try to show that security vulnerabilities don't exist in a software system. In general, this is fiendishly hard to do.

My question is: has anyone done work on using similar tools to /find/ vulnerabilities in existing or proposed systems?


Edit: I'm NOT asking about proving that a software system is secure. I'm asking about finding (ideally previously unknown) vulnerabilities (or even classes of them). I'm thinking like (but am not) a black hat here: describe the formal semantics of the system, describe what I want to attack, and then let the computer figure out what chain of actions I need to use to take over your system.

+1  A: 

It's not really related to theorem-proving, but fuzz testing is a common technique for finding vulnerabilities in an automated way.
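For a feel of the technique, here is a minimal sketch of a dumb mutation fuzzer in Python. The target parser and its planted bug are invented for illustration; real fuzzers such as AFL or libFuzzer are coverage-guided and far more effective:

    import random

    def parse_record(data: bytes) -> None:
        """Hypothetical target: a parser whose first byte is a length field."""
        if len(data) >= 2:
            length = data[0]
            payload = data[1:]
            if length > len(payload):
                # Planted bug standing in for a real memory-safety flaw.
                raise IndexError("length field exceeds payload")
            _ = payload[:length]

    def mutate(seed: bytes) -> bytes:
        """Randomly corrupt a few bytes of a known-good input."""
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = bytes([4, 0x41, 0x42, 0x43, 0x44])  # valid: length=4, payload "ABCD"
    for i in range(10_000):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception as exc:  # any uncaught exception counts as a finding
            print(f"iteration {i}: {candidate!r} crashed the parser: {exc!r}")
            break

The fuzzer knows nothing about the format beyond one valid seed input; it simply throws corrupted variants at the parser until something breaks.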

Kristopher Johnson
+2  A: 

So, at least in some meaningful sense, the opposite of proving something is secure is finding code paths for which it isn't.

Try Byron Cook's TERMINATOR project.

There are also at least two videos about it on Channel9.

His research is likely to be a good starting point for you to learn about this extremely interesting area of research.

Projects such as Spec# and Typed-Assembly-Language are related too. In their quest to move the possibility of safety checks from runtime back to compile-time, they allow the compiler to detect many bad code paths as compilation errors. Strictly, they don't help your stated intent, but the theory they exploit might be useful to you.
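As a toy illustration of the idea behind termination provers like TERMINATOR: they look for a ranking function that is nonnegative whenever the loop can run and strictly decreases on every iteration. The loop and candidate ranking function below are my own invented example, and the random sampling merely sanity-checks obligations that a real prover would establish symbolically:

    import random

    def guard(x: int, y: int) -> bool:
        """Loop condition of the toy loop:  while x > 0:  x, y = x - 1, y + x"""
        return x > 0

    def step(x: int, y: int) -> tuple[int, int]:
        """One iteration of the toy loop body."""
        return x - 1, y + x

    def rank(x: int, y: int) -> int:
        """Candidate ranking function: nonnegative whenever the guard holds,
        and strictly decreasing across each iteration."""
        return x

    # A real prover discharges these obligations symbolically (e.g. with
    # linear arithmetic over the transition relation); random sampling here
    # is only a sanity check, not a proof.
    for _ in range(100_000):
        x, y = random.randint(-50, 50), random.randint(-50, 50)
        if guard(x, y):
            x2, y2 = step(x, y)
            assert rank(x, y) >= 0 and rank(x2, y2) < rank(x, y)
    print("candidate ranking function held on all sampled states")

If every iteration strictly decreases a quantity that is bounded below, the loop cannot run forever; finding such a function automatically is the hard part.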

uosɐſ
+1  A: 

There is the L4.verified kernel project, which is trying to do just that. However, if you look at the history of exploitation, completely new attack patterns keep being discovered, and a lot of software written up to that point turns out to be vulnerable. For instance, format string vulnerabilities weren't discovered until 1999. About a month ago H.D. Moore published the DLL hijacking technique, and literally everything under Windows is vulnerable to it.

I don't think it's possible to prove that a piece of software is secure against an unknown attack. At least not until a theorem prover is able to discover such an attack, and as far as I know this hasn't happened.

Rook
By security, I think you mean that a set of invariants holds for a certain piece of code running on known hardware. If that is the case, I believe it is possible to prove that the invariants hold. It just gets harder as the software gets bigger, but from what I've read, we're getting smarter at it.
uosɐſ
If that isn't true, I'd be very upset.
uosɐſ
@uosɐſ Let me put this a different way: I have written a lot of exploits over the years, and I have lost track of the number of CVE numbers issued to me. I don't think this can be proven, at least not yet.
Rook
You sound more qualified than me. Do you believe that it is possible to secure trivial code? I mean, remove the OS, run the most trivial thing you can think of directly on hardware; could such a thing be absolutely secure from a remote attack? If so, that's one extreme. And we know the other extreme (XP?). Somewhere in the middle there is a boundary. We gotta figure out that boundary.
uosɐſ
But as you say, "at least not yet". I'm not challenging you, I'm just curious to hear more of your thoughts.
uosɐſ
@uosɐſ It's a matter of the unknown. If a theorem prover could show that a system defends against the unknown, then it follows that a theorem prover could be used to discover a previously unknown attack pattern. Even a very simple program, hello world, is very complex, even if you assume that it is executing properly. Keep in mind that vulnerabilities are found in processors. In any math proof you have to make assumptions; are these proofs assuming that the processor is behaving normally? (http://www.engadget.com/2009/03/19/new-smm-exploit-targets-intel-cpu-caching-vulnerability/)
Rook
@uosɐſ: Yes, you can prove that a chosen program on a given machine is invulnerable to external attack: http://en.wikipedia.org/wiki/Cyrix_coma_bug
BCS
@BCS Cool, so how many exploits have you written?
Rook
@BCS, that's a relief! But I don't understand your wikipedia reference. Can you explain? Or is it sarcasm??
uosɐſ
@uosɐſ: It's some trivial code that locks the system. If you are willing to say that is what you want your program to do, the program is provably invulnerable to anything short of a hardware reset. You now have one of your extremes.
BCS
@Rook, None....
BCS
@BCS Aah, I see your argument now.
Rook
A: 

Disclaimer: I have little to no experience with automated theorem provers

A few observations:

  • Things like cryptography are rarely ever "proven", just believed to be secure. If your program uses anything like that, it will only be as strong as the crypto.
  • Theorem provers can't analyze everything (or they would be able to solve the halting problem)
  • You would have to define very clearly what "insecure" means for the prover. This in itself is a huge challenge (a toy sketch of one way to phrase it follows below)
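To make that third point concrete, here is one toy way to phrase "insecure" as a logical formula and let an SMT solver hunt for a witness. This assumes the z3-solver Python package is installed; the buggy bounds check is an invented example:

    from z3 import BitVec, Solver, ULT, sat

    # Toy model: a 16-byte buffer guarded by a bounds check that was
    # accidentally written as a *signed* comparison on a 32-bit index.
    idx = BitVec("idx", 32)

    s = Solver()
    s.add(idx < 16)         # the buggy signed check passes...
    s.add(ULT(15, idx))     # ...yet, read as unsigned, idx lands past the buffer

    if s.check() == sat:
        print("out-of-bounds index that passes the check:", s.model()[idx])

Any negative idx satisfies both constraints, so the solver immediately exhibits an index that slips past the check; defining the "bad state" formula for a whole real system is where the difficulty lies.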
cobbal
I'm not 100% certain about this, but I'm pretty sure that much of cryptography is proven down to the axioms (especially given the proposed P!=NP proof); it is just often misapplied. As you note in your third point, it's important to strictly define what you're counting on the mechanism for; only then can you prove that your use is a correct application.
uosɐſ
@uosɐſ Much of cryptanalysis involves reducing the number of rounds needed to brute-force a cipher/hash. Also, I'm fairly sure that "factoring is hard" hasn't been proven (unreliable source: http://en.wikipedia.org/wiki/Integer_factorization )
cobbal
Again, I'm not an expert, but for your consideration: integer factorization isn't the only "hard" mechanism available. Also, the recently claimed P!=NP proof would (pending review) establish that hard problems exist. And I think it's been proven that IF hard problems exist, then integer factorization or some of its friends are in that category.
uosɐſ
Also, the Byron Cook links address the halting problem. Briefly: yes, the halting problem remains, but it only says that you can't statically analyze /every/ program for termination; it doesn't mean there isn't a wide range of kinds of programs that are statically analyzable for termination.
uosɐſ
It leads me to wonder whether there is some intersection of statically analyzable qualities and Turing completeness; that's probably been, or is being, explored.
uosɐſ
@uosɐſ That's a good point. It is certainly possible to design programs that can be proved correct. What I doubt is that a random program off the street will fall into your analyzable subset (no rationale, just a guess). That is what I interpreted the question as asking about.
cobbal
Agreed, we're not there yet - I believe our programming paradigms will have to change to fit those criteria (and remain Turing-complete). I recall that, as far as finding vulnerabilities goes, Byron's work can analyze single-threaded (and I think now multi-threaded!) code with linear math; those may be only a few of the restrictions. But according to him, with his work they've found (and then fixed) vulnerabilities that had existed in practical, live code such as Windows drivers.
uosɐſ
A: 

Yes. Many theorem-proving projects demonstrate the quality of their tools by finding holes or defects in existing software. To make it security-related, just imagine finding a hole in a security protocol. Carlos Olarte's Ph.D. thesis under Ugo Montanari has one such example.

The security is in the application, not really in the theorem prover itself; the prover has no special knowledge of security.
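To connect this back to the question's framing: once you write the system's semantics as a transition relation and "attacked" as a predicate on states, finding an exploit chain becomes a reachability search. Here is a toy sketch with invented states and actions (real tools encode this far more compactly, e.g. as model checking over a symbolic state space):

    from collections import deque

    # Toy semantics: states are (privilege, has_token) pairs; the actions
    # are invented attacker moves. "Attacked" means privilege == "root".
    def transitions(state):
        priv, token = state
        if not token:
            yield "steal_session_token", (priv, True)
        if token and priv == "guest":
            yield "replay_token_to_admin_api", ("admin", token)
        if priv == "admin":
            yield "write_cron_job", ("root", token)

    def find_attack(start, attacked):
        """Breadth-first search for the shortest action chain to a bad state."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, chain = queue.popleft()
            if attacked(state):
                return chain
            for action, nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [action]))
        return None

    chain = find_attack(("guest", False), lambda s: s[0] == "root")
    print("attack chain:", chain)

Running this prints the shortest chain of actions, here ['steal_session_token', 'replay_token_to_admin_api', 'write_cron_job']. The search itself is generic; all of the security knowledge lives in the model of the transitions, which is exactly the point made above.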