Just because it compiles doesn't mean it runs! That's the essence of unit testing. Try the code out. Make sure it's doing what you thought it was doing.
Let's face it: if you bring over a matrix transform from MATLAB, it's easy to mess up a plus or minus sign somewhere. That sort of thing is hard to see. Without trying it out, you just don't know whether it will work correctly. Debugging 100 lines of code is a lot easier than debugging 100,000 lines of code.
Some folks take this to extremes. They try to test every conceivable thing. Testing becomes an end unto itself.
That can be useful later on during maintenance phases. You can quickly check to make sure your updates haven't broken anything.
But the overhead involved can cripple product development! And future changes that alter functionality can force you to rework large swaths of tests.
(It can also get messy with respect to multi-threading and arbitrary execution order.)
Ultimately, unless directed otherwise, my tests try to hit the middle ground.
I look to test at larger granularities, providing a means of verifying basic general functionality. I don't worry so much about every possible fencepost scenario. (That's what ASSERT macros are for.)
For example: When I wrote code to send/receive messages over UDP, I threw together a quick test to send/receive data using that class via the loopback interface. Nothing fancy. Quick, fast, & dirty code. I just wanted to try it out. To make sure that it was actually working before I built something on top of it.
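If it helps to picture it, here's roughly what that kind of throwaway loopback check looks like. This is a sketch using raw BSD sockets on a POSIX box rather than the original wrapper class (which isn't shown here), and the port number is arbitrary:

    // Quick & dirty UDP loopback check -- send a packet to 127.0.0.1 and read it back.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int rx = socket(AF_INET, SOCK_DGRAM, 0);   // receiver
        int tx = socket(AF_INET, SOCK_DGRAM, 0);   // sender

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);                    // arbitrary test port
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // 127.0.0.1
        bind(rx, (sockaddr *) &addr, sizeof(addr));

        const char msg[] = "hello";
        sendto(tx, msg, sizeof(msg), 0, (sockaddr *) &addr, sizeof(addr));

        char buf[64] = {};
        recvfrom(rx, buf, sizeof(buf), 0, nullptr, nullptr);
        std::printf("received \"%s\" -- %s\n", buf,
                    std::strcmp(buf, msg) == 0 ? "OK" : "MISMATCH");

        close(tx);
        close(rx);
        return 0;
    }

No error checking, no teardown niceties. The point is just to see the data come back out the other side.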
Another example: reading in camera images from a FireWire camera. I threw together a quick & dirty GTK app to read the images, process them, and display them in real time. Other folks call that integration testing. But I can use it to verify my FireWire interface, my Image class, my Bayer RGGB->RGB transform, my image orientation & alignment, even whether the camera was mounted upside down again. More detailed testing would only have been warranted if this had proven insufficient.
On the other hand, even for something as simple as:
template<class TYPE> inline TYPE MIN(const TYPE & x, const TYPE & y) { return x > y ? y : x; }
template<class TYPE> inline TYPE MAX(const TYPE & x, const TYPE & y) { return x < y ? y : x; }
I wrote a one-line SHOW macro to make sure I hadn't messed up the sign:
SHOW(MIN(3,4)); SHOW(MAX(3,4));
All I wanted to do was verify that it was doing what it should be doing in the general case. I worry less about how it handles NaN / +-Infinity / (double,int) than about whether one of my colleagues decided to change the argument order and goofed.
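Pulled together into one compilable snippet (the SHOW macro itself is spelled out just below), the whole check amounts to this, with the expected output in the comments:

    #include <iostream>

    #define SHOW(X) std::cout << # X " = " << (X) << std::endl

    template<class TYPE> inline TYPE MIN(const TYPE & x, const TYPE & y) { return x > y ? y : x; }
    template<class TYPE> inline TYPE MAX(const TYPE & x, const TYPE & y) { return x < y ? y : x; }

    int main() {
        SHOW(MIN(3,4));   // prints: MIN(3,4) = 3
        SHOW(MAX(3,4));   // prints: MAX(3,4) = 4
        return 0;
    }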
Tool-wise, there's a lot of unit-testing stuff out there. If it helps you, more power to you. If not, well, you don't really need to get too fancy.
I'll often write a test program that dumps data into and out of a class, and then prints it all out with a SHOW macro:
#define SHOW(X) std::cout << # X " = " << (X) << std::endl
(Alternatively, many of my classes can self-print using a built-in operator<<(ostream&) method. It's an amazingly useful technique for debugging as well as for testing!)
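For instance, a hypothetical self-printing Point class with a built-in operator<< drops straight into the SHOW macro, or into plain stream output:

    #include <iostream>

    #define SHOW(X) std::cout << # X " = " << (X) << std::endl

    // Hypothetical self-printing class.
    class Point {
    public:
        Point(double x, double y) : mX(x), mY(y) {}
        friend std::ostream & operator<<(std::ostream & os, const Point & p)
        { return os << "(" << p.mX << "," << p.mY << ")"; }
    private:
        double mX, mY;
    };

    int main() {
        Point p(3.0, 4.0);
        SHOW(p);                                  // prints: p = (3,4)
        std::cout << "p = " << p << std::endl;    // same output, no macro needed
        return 0;
    }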
Makefiles can be trivially extended to automatically generate output files from test programs, and to automatically compare (diff) these output files against previously known (reviewed) results.
Not fancy, perhaps somewhat less than elegant, but as techniques go this is very effective, fast to implement, and very low overhead. (Which has its advantages when your manager disapproves of wasting time on that testing stuff.)
One last thought I'll leave you with. This is going to get me marked down, so DON'T do it!
Some time ago I needed a testing program. It was a required deliverable. The program itself had to verify that another class was working properly. But it couldn't access external data files. (We couldn't rely on where the program would be located relative to anything else. No absolute paths either.) The unit-testing framework for the project was incompatible with the compiler I was required to use. The test program also had to be a single file: the project makefile system didn't support linking multiple files together for a lowly test program. (Application programs, sure. They could use libraries. But only a single file for each test program.)
So, God forgive me, I "broke the rules"...
<embarrassed>
I used macros. When a #define macro was set, the test wrote its data into a second .c file as an initializer for a struct array. Then, when the software was recompiled with the macro unset, that second .c file (with the struct array) was #included and the new results were compared against the previously stored data. Yes, I #included a .c file. O' the embarrassment of it all.
</embarrassed>
But it can be done...
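For the morbidly curious, here's a minimal sketch of the same trick, with a hypothetical compute() standing in for the real class under test. Build once with -DGENERATE_REFERENCE to capture the reference data into expected.c, then rebuild without it to compare against what was captured:

    #include <cmath>
    #include <cstdio>

    // Stand-in for the class being verified.
    double compute(double x) { return std::sqrt(x) * 2.0; }

    #ifdef GENERATE_REFERENCE
    // First pass: dump the current results into expected.c as an array initializer.
    int main() {
        std::FILE * out = std::fopen("expected.c", "w");
        std::fprintf(out, "static const double expected[] = {\n");
        for (int i = 0; i < 10; ++i)
            std::fprintf(out, "    %.17g,\n", compute(i));
        std::fprintf(out, "};\n");
        std::fclose(out);
        return 0;
    }
    #else
    // Second pass: yes, #include the generated .c file, and compare against it.
    #include "expected.c"
    int main() {
        int failures = 0;
        for (int i = 0; i < 10; ++i)
            if (std::fabs(compute(i) - expected[i]) > 1e-12) {
                std::printf("mismatch at %d\n", i);
                ++failures;
            }
        return failures;
    }
    #endif

One file, no external data files at run time, and the "expected" data rides along inside the source tree.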