One thing I've become interested in within digital logic/architecture design is automated theorem proving, used to verify, for example, a floating-point multiplication module.

Unit tests are handy, but it's almost intractable to brute-force test every possible input to a floating-point module. Instead, you find either (1) a proof that it will always generate a correct result, or (2) a proof that it will generate at least one incorrect result.

I'm now trying to integrate similar reasoning into my software, and I'm wondering whether I could use it in conjunction with Test-Driven Development or in its place.
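For a sense of scale, here is a back-of-the-envelope sketch of why brute-forcing the module is out of the question (assuming a single-precision multiplier with two 32-bit inputs and an optimistic one billion checks per second):

    # Why exhaustively testing a float multiplier is intractable:
    # two independent 32-bit inputs give 2^64 combinations.
    input_pairs = 2 ** 64                     # every (a, b) pair
    checks_per_second = 10 ** 9               # optimistic test-bench throughput
    years = input_pairs / checks_per_second / (60 * 60 * 24 * 365)
    print(f"{input_pairs:.3e} cases, about {years:,.0f} years at 1e9 checks/s")
    # -> 1.845e+19 cases, about 585 years (and double precision is far worse)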

A: 

Always in conjunction with tests. Otherwise you have no way of showing that your implementation actually corresponds to what you have proved is correct.
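One common way to establish that correspondence is a test that checks the implementation against the property you proved over a large sample of inputs. A minimal sketch (fp_multiply() is a hypothetical stand-in for the verified code, and Python's own IEEE 754 multiplication plays the role of the spec):

    import math
    import random
    import struct

    def fp_multiply(a: float, b: float) -> float:
        # placeholder - in practice this calls the implementation you proved correct
        return a * b

    def random_double() -> float:
        # draw uniformly over 64-bit patterns so subnormals, infinities and odd
        # exponents show up, not just "friendly" values
        while True:
            x = struct.unpack("<d", random.getrandbits(64).to_bytes(8, "little"))[0]
            if not math.isnan(x):
                return x

    def test_matches_spec(trials: int = 100_000) -> None:
        for _ in range(trials):
            a, b = random_double(), random_double()
            expected = a * b                  # the behaviour the proof promises
            actual = fp_multiply(a, b)
            assert actual == expected or (math.isnan(actual) and math.isnan(expected)), \
                f"mismatch for a={a!r}, b={b!r}: got {actual!r}, expected {expected!r}"

    if __name__ == "__main__":
        test_matches_spec()
        print("sampled cases agree with the spec")

The test does not replace the proof; it checks that the code you shipped is the code the proof was about.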

workmad3
That makes a lot of sense. I guess what I'm asking is not whether to eliminate tests altogether, but whether to test at a lighter level (as opposed to exhaustively testing all of the code).
ajray
Don't drop the test load at all. You need to test just as exhaustively. The difference is mainly that you now have a proof saying that, if your code is a faithful implementation of the proved algorithm, it will work for the situations you care about, rather than merely appearing to in most cases :)
workmad3
+1  A: 

Sounds like you are trying to build 'mathematically proven software', which is a very very hard thing to achieve.

See this other question: why can't programs be proven
(NB: the linked question title is misleading - some programs can be proven, but it's hard)

codeulike
Thanks for the link! And yes, one project this is for is scientific computational modeling software.
ajray
+1  A: 

Are you saying you will mathematically prove on paper that your algorithm is correct, or write code that will do the same? (I'm intrigued by how you would do the latter - please link to a blog post or some explanation of how to do this in the comments if possible.)

In the former case, without tests you have no way to show that your implementation mirrors your intentions and works as intended.
In the latter case, if the theorem-proving code doesn't exercise the implementation, the previous argument still holds.

Personally, I'd just use TDD, because it's easy for me and others to read a bunch of well-written tests and figure out what the code does. Not to mention it's much quicker. You don't have to test every possible output; rather, identify a representative set of inputs, where each input exercises a different outcome/path through the code.
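A rough sketch of what such a representative set might look like (fp_multiply() is a hypothetical function under test, and the cases are illustrative rather than a complete set):

    import math

    def fp_multiply(a: float, b: float) -> float:
        return a * b   # stand-in for the implementation under test

    # Each case targets a different outcome/path, not the whole input space.
    REPRESENTATIVE_CASES = [
        (2.0, 3.0, 6.0),            # ordinary normal * normal
        (-2.0, 3.0, -6.0),          # sign handling
        (0.0, 123.456, 0.0),        # zero operand
        (5e-324, 2.0, 1e-323),      # subnormal input
        (1e308, 1e308, math.inf),   # overflow to +inf
        (math.inf, 2.0, math.inf),  # infinity propagation
    ]

    def test_representative_cases():
        for a, b, expected in REPRESENTATIVE_CASES:
            assert fp_multiply(a, b) == expected, (a, b)

    def test_nan_result():
        assert math.isnan(fp_multiply(math.inf, 0.0))   # inf * 0 is NaN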

Gishu
+1  A: 

In the past I have had quite a lot of experience with formal verification, although it was for hardware components (VHDL/Verilog). The same principles could be applied to software, but if you have any kind of I/O or events, the input space becomes unmanageable. It is also impractical to prove any kind of statement on a large "model" as the state-space becomes too large.

Ideally, you would use a theorem prover on "computational" functions to prove their correctness. However, testing should still be used for several reasons:

  • There might be "bugs" in your theorem. This is especially "dangerous" if the person writing the theorems is the person writing the actual code being tested.
  • The theorems being proved are often not complete.
  • You must place assertions in your test environment to make sure the assumptions you used for the inputs ("environment") of your code are valid.
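For instance (a minimal sketch; accumulate_weights() and its precondition are hypothetical, standing in for whatever assumptions your proof relies on):

    import math

    def accumulate_weights(weights):
        # The proof assumed finite, non-negative inputs; assert exactly that, so a
        # violated assumption fails loudly instead of silently invalidating the proof.
        assert all(math.isfinite(w) and w >= 0.0 for w in weights), \
            "proof precondition violated: weights must be finite and non-negative"
        return sum(weights)

    # In the test environment, feed data from the same source the real system
    # uses, so the assertion checks the environment, not just the code.
    sample_from_environment = [0.25, 0.5, 0.125]
    assert accumulate_weights(sample_from_environment) == 0.875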
Tal Pressman