views: 341
answers: 3

I'm still having issues justifying TDD to myself. As I have mentioned in other questions, 90% of the code I write does absolutely nothing but

  1. Call some Windows API functions and
  2. Print out the data returned from said functions.

The time spent coming up with the fake data that the code needs to process under TDD is incredible -- I literally spend 5 times as much time coming up with the example data as I would spend just writing application code.

Part of this problem is that often I'm programming against APIs with which I have little experience, which forces me to write small applications that show me how the real API behaves so that I can write effective fakes/mocks on top of that API. Writing implementation first is the opposite of TDD, but in this case it is unavoidable: I do not know how the real API behaves, so how on earth am I going to be able to create a fake implementation of the API without playing with it?

I have read several books on the subject, including Kent Beck's Test-Driven Development: By Example and Michael Feathers' Working Effectively with Legacy Code, which seem to be gospel for TDD fanatics. Feathers' book comes close in the way it describes breaking out dependencies, but even then, the examples provided have one thing in common:

  • The program under test obtains input from other parts of the program under test.

My programs do not follow that pattern. Instead, the only input to the program itself is the system upon which it runs.

How can one effectively employ TDD on such a project? I'm already wrapping most of the API inside C++ classes before I actually use that API, but sometimes the wrappers themselves can become quite complicated, and deserve their own tests.

A: 

Edit: I understand this is not what you need. I'm leaving it here as community wiki, as the comments are useful.

Haha, well, whenever I see job adverts with the words "requires test driven development" or "agile development methodologies" and the like, I run the other way. I am strictly of the opinion that examining the problem and understanding the best way to solve it (do I work in a pair, or liaise regularly with the customer, or just write something against the hardware spec?) is part of the job, and doesn't need a fancy name or forcing onto projects that don't need it. Rant over.

I would say you don't need to -- at least, you don't need to test the Windows API. You're testing functions of an API you can't modify anyway.

If you're building a function that does some processing on the output of a Windows API call, you can test that. Say, for example, you are pulling window titles given an hWnd and reversing them. You can't test GetWindowTitle and SetWindowTitle, but you can test InvertString, which you wrote, simply by calling your function with "Thisisastring" and checking whether the result is "gnirtsasisihT". If it is, great, mark the test as passed in your results matrix. If it isn't, oh dear, whatever modifications you made broke the program; not good, go back and fix it.
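
For illustration, here's roughly what that check might look like; InvertString and the little main harness are hypothetical, just a sketch of the idea:

#include <cassert>
#include <string>

// Hypothetical function under test: reverses a string.
std::string InvertString(const std::string& s) {
  return std::string(s.rbegin(), s.rend());
}

int main() {
  // Known input, known expected output -- no Windows API involved.
  assert(InvertString("Thisisastring") == "gnirtsasisihT");
  return 0;
}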

There is a question as to whether that's actually necessary for such a simple function. Does having a test prevent any bugs sneaking in? How often is the algorithm likely to be miscompiled or broken by changes?

Such tests are more useful on a project I work on called MPIR, which builds against many different platforms. We run a build on each of these platforms, then test the resultant binary to make sure the compiler hasn't introduced an error via optimisation, and that nothing we've done in writing the algorithm behaves unexpectedly on that platform. It's a check, to make sure we don't miss things. If it passes, great; if it fails, somebody goes and looks into why.

Personally, I'm not sure exactly how an entire development process can be driven purely by tests. They're checks, after all. They don't tell you when it is time to make a significant change of direction in your codebase, just that what you've done works. So, I'm going to go so far as to say TDD is just a buzzword. Somebody feel free to disagree with me.

Ninefingers
It's not TDD itself I'm really looking for. It's more the tests themselves. I've just found it takes even longer when I design a monolithic system and then try to instrument it for testing later. Perhaps I wrote the question poorly.
Billy ONeal
I tend to build things in bits. That's my test. So, if I want to get a window title, I build a function to do it given perhaps a partial name (or whatever generic functionality I need), then write a small program to check it works. If it does, excellent; integrate it into the rest of the code base. This way, I can run tools like Valgrind on certain parts of the code too. I think this is exactly what you're trying to avoid, though?
Ninefingers
+2  A: 

I don't think it's feasible to unit test thin wrapper classes. The thicker your wrapper, the easier it should be to test the bits that don't directly hit the API, as the wrapper itself can have multiple layers, the lowest of which can be mocked somehow.

While you could do something like:

// assuming Windows, sorry.

namespace Wrapper
{
   std::string GetComputerName()
   {
      char name[MAX_COMPUTERNAME_LENGTH + 1];
      DWORD size = sizeof(name);
      ::GetComputerNameA(name, &size); // the real API takes the buffer size as an in/out parameter
      return std::string(name, size);
   }
}

TEST(GetComputerName) // UnitTest++
{
   CHECK_EQUAL(std::string(getenv("COMPUTERNAME")), Wrapper::GetComputerName());
}

I don't know that tests like this bring a lot of value, and would tend to make my testing focus on the transformation of data rather than the collection of such.
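
For instance, a sketch of that separation (the names here are hypothetical): the API call stays in a thin untested shim, and the transformation is a pure function whose tests need no mocks and no fake API data.

#include <cassert>
#include <string>
#include <vector>

// Hypothetical transformation: pure function, never touches the API.
std::string JoinNames(const std::vector<std::string>& names) {
  std::string result;
  for (size_t i = 0; i < names.size(); ++i) {
    if (i) result += ", ";
    result += names[i];
  }
  return result;
}

int main() {
  // The input is just data; no collection step involved.
  std::vector<std::string> names;
  names.push_back("foo.txt");
  names.push_back("bar.txt");
  assert(JoinNames(names) == "foo.txt, bar.txt");
  return 0;
}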

dash-tom-bang
It's not the actual mocking step that's difficult. It's creating the complicated test data to actually return from the mock.
Billy ONeal
Right- I wasn't trying to demonstrate a mock, but giving an example of a test on a function and using that as an example of why I believe that testing a presumably well tested 3rd party API is not a good use of time.
dash-tom-bang
Hmmm.. problem is that some of the wrappers aren't exactly thin. +1 for taking the time to write out an answer though.
Billy ONeal
Right- that's why I thought that maybe testing the transformation of the data would be easier than testing that the data you're getting is good. I.e. a "thick" wrapper implies that you're doing something more than just querying the API. Although perhaps this gets to the core of your point, since filling out those structs is irritating to say the least. I suspect, though, that the more you can distill your transformations into single units of work, the more apparent a path for testing will become.
dash-tom-bang
+3  A: 

Edit: See below for a FindFirstFile/FindNextFile/FindClose example.


I use googlemock. For an external API, I generally create an interface class. Assume I was going to call fopen, fwrite, and fclose:

class FileIOInterface {
public:
  virtual ~FileIOInterface() {}

  virtual FILE* Open(const char* filename, const char* mode) = 0;
  virtual size_t Write(const void* data, size_t size, size_t num, FILE* file) = 0;
  virtual int Close(FILE* file) = 0;
};

The actual implementation would be this:

class FileIO : public FileIOInterface {
public:
  virtual FILE* Open(const char* filename, const char* mode) {
    return fopen(filename, mode);
  }

  virtual size_t Write(const void* data, size_t size, size_t num, FILE* file) {
    return fwrite(data, size, num, file);
  }

  virtual int Close(FILE* file) {
    return fclose(file);
  }
};

Then, using googlemock, I make a MockFileIO class like this:

class MockFileIO : public FileIOInterface {
public:
  virtual ~MockFileIO() { }

  MOCK_METHOD2(Open, FILE*(const char* filename, const char* mode));
  MOCK_METHOD4(Write, size_t(const void* data, size_t size, size_t num, FILE* file));
  MOCK_METHOD1(Close, int(FILE* file));
};

This makes writing the tests easy: I don't have to provide a test implementation of Open/Write/Close, because googlemock handles that for me. (Note: I use googletest as my unit testing framework.)

Assume I have a function like this that needs testing:

// Writes a file, returns true on success.
bool WriteFile(FileIOInterface* fio, const char* filename, const void* data, size_t size) {
   FILE* file = fio->Open(filename, "wb");
   if (!file) {
     return false;
   }

   if (fio->Write(data, 1, size, file) != size) {
     return false;
   }

   if (fio->Close(file) != 0) {
     return false;
   }

   return true;
}

And here are the tests:

TEST(WriteFileTest, SuccessWorks) {
  MockFileIO fio;

  static char data[] = "hello";
  const char* kName = "test";
  FILE test_file;

  // Tell the mock to expect certain calls and what to
  // return on those calls.
  EXPECT_CALL(fio, Open(kName, "wb"))
      .WillOnce(Return(&test_file));
  EXPECT_CALL(fio, Write(data, 1, sizeof(data), &test_file))
      .WillOnce(Return(sizeof(data)));
  EXPECT_CALL(fio, Close(&test_file))
      .WillOnce(Return(0));

  EXPECT_TRUE(WriteFile(&fio, kName, data, sizeof(data)));
}

TEST(WriteFileTest, FailsIfOpenFails) {
  MockFileIO fio;

  static char data[] = "hello";
  const char* kName = "test";

  // Tell the mock to expect certain calls and what to
  // return on those calls.
  EXPECT_CALL(fio, Open(kName, "wb"))
      .WillOnce(Return(static_cast<FILE*>(NULL)));

  EXPECT_FALSE(WriteFile(&fio, kName, data, sizeof(data)));
}

TEST(WriteFileTest, FailsIfWriteFails) {
  MockFileIO fio;

  static char data[] = "hello";
  const char* kName = "test";
  FILE test_file;

  // Tell the mock to expect certain calls and what to
  // return on those calls.
  EXPECT_CALL(fio, Open(kName, "wb"))
      .WillOnce(Return(&test_file));
  EXPECT_CALL(fio, Write(data, 1, sizeof(data), &test_file))
      .WillOnce(Return(0));

  EXPECT_FALSE(WriteFile(&fio, kName, data, sizeof(data)));
}

TEST(WriteFileTest, FailsIfCloseFails) {
  MockFileIO fio;

  static char data[] = "hello";
  const char* kName = "test";
  FILE test_file;

  // Tell the mock to expect certain calls and what to
  // return on those calls.
  EXPECT_CALL(fio, Open(kName, "wb"))
      .WillOnce(Return(&test_file));
  EXPECT_CALL(fio, Write(data, 1, sizeof(data), &test_file))
      .WillOnce(Return(sizeof(data)));
  EXPECT_CALL(fio, Close(&test_file))
      .WillOnce(Return(EOF));

  EXPECT_FALSE(WriteFile(&fio, kName, data, sizeof(data)));
}

I didn't have to provide a test implementation of fopen/fwrite/fclose; googlemock handles this for me. You can make the mock strict if you want: a strict mock will fail the test if any function that is not expected is called, or if any expected function is called with the wrong arguments. googlemock provides a ton of helpers and adapters, so you generally don't need to write much code to get the mock to do what you want. It takes a few days to learn the different adapters, but if you're using it often they quickly become second nature.
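
As a minimal sketch of the strict variant, reusing the MockFileIO and WriteFile pieces above (and assuming the same includes), googlemock's StrictMock wrapper is all it takes:

using ::testing::Return;
using ::testing::StrictMock;

TEST(WriteFileTest, StrictMockCatchesUnexpectedCalls) {
  // Any call that has no matching EXPECT_CALL now fails the test
  // outright, instead of merely printing a warning.
  StrictMock<MockFileIO> fio;

  static char data[] = "hello";
  const char* kName = "test";

  EXPECT_CALL(fio, Open(kName, "wb"))
      .WillOnce(Return(static_cast<FILE*>(NULL)));

  // If WriteFile went on to call Write or Close here, the strict
  // mock would flag it as a failure.
  EXPECT_FALSE(WriteFile(&fio, kName, data, sizeof(data)));
}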


Here's an example using FindFirstFile, FindNextFile, FindClose

First, the interface:

class FindFileInterface {
public:
  virtual ~FindFileInterface() {}

  virtual HANDLE FindFirstFile(
    LPCTSTR lpFileName,
    LPWIN32_FIND_DATA lpFindFileData) = 0;

  virtual BOOL FindNextFile(
    HANDLE hFindFile,
    LPWIN32_FIND_DATA lpFindFileData) = 0;

  virtual BOOL FindClose(
    HANDLE hFindFile) = 0;

  virtual DWORD GetLastError(void) = 0;
};

Then the actual implementation:

class FindFileImpl : public FindFileInterface {
public:
  virtual HANDLE FindFirstFile(
    LPCTSTR lpFileName,
    LPWIN32_FIND_DATA lpFindFileData) {
    return ::FindFirstFile(lpFileName, lpFindFileData);
  }

  virtual BOOL FindNextFile(
    HANDLE hFindFile,
    LPWIN32_FIND_DATA lpFindFileData) {
    return ::FindNextFile(hFindFile, lpFindFileData);
  }

  virtual BOOL FindClose(
    HANDLE hFindFile) {
    return ::FindClose(hFindFile);
  }

  virtual DWORD GetLastError(void) {
    return ::GetLastError();
  }
};

The mock, using gmock:

class MockFindFile : public FindFileInterface {
public:
  MOCK_METHOD2(FindFirstFile,
               HANDLE(LPCTSTR lpFileName, LPWIN32_FIND_DATA lpFindFileData));
  MOCK_METHOD2(FindNextFile,
               BOOL(HANDLE hFindFile, LPWIN32_FIND_DATA lpFindFileData));
  MOCK_METHOD1(FindClose, BOOL(HANDLE hFindFile));
  MOCK_METHOD0(GetLastError, DWORD());
};

The function I need to test:

DWORD PrintListing(FindFileInterface* findFile, const TCHAR* path) {
  WIN32_FIND_DATA ffd;
  HANDLE hFind;

  hFind = findFile->FindFirstFile(path, &ffd);
  if (hFind == INVALID_HANDLE_VALUE)
  {
     printf ("FindFirstFile failed");
     return 0;
  }

  do {
    if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) {
       _tprintf(TEXT("  %s   <DIR>\n"), ffd.cFileName);
    } else {
      LARGE_INTEGER filesize;
      filesize.LowPart = ffd.nFileSizeLow;
      filesize.HighPart = ffd.nFileSizeHigh;
      _tprintf(TEXT("  %s   %ld bytes\n"), ffd.cFileName, filesize.QuadPart);
    }
  } while(findFile->FindNextFile(hFind, &ffd) != 0);

  DWORD dwError = findFile->GetLastError();
  if (dwError != ERROR_NO_MORE_FILES) {
    _tprintf(TEXT("error %d"), dwError);
  }

  findFile->FindClose(hFind);
  return dwError;
}

The unit tests:

#include <gtest/gtest.h>
#include <gmock/gmock.h>

using ::testing::_;
using ::testing::Return;
using ::testing::DoAll;
using ::testing::SetArgumentPointee;

// Some data for unit tests.
static WIN32_FIND_DATA File1 = {
  FILE_ATTRIBUTE_NORMAL,  // DWORD    dwFileAttributes;
  { 123, 0, },            // FILETIME ftCreationTime;
  { 123, 0, },            // FILETIME ftLastAccessTime;
  { 123, 0, },            // FILETIME ftLastWriteTime;
  0,                      // DWORD    nFileSizeHigh;
  123,                    // DWORD    nFileSizeLow;
  0,                      // DWORD    dwReserved0;
  0,                      // DWORD    dwReserved1;
  { TEXT("foo.txt") },    // TCHAR   cFileName[MAX_PATH];
  { TEXT("foo.txt") },    // TCHAR    cAlternateFileName[14];
};

static WIN32_FIND_DATA Dir1 = {
  FILE_ATTRIBUTE_DIRECTORY,  // DWORD    dwFileAttributes;
  { 123, 0, },            // FILETIME ftCreationTime;
  { 123, 0, },            // FILETIME ftLastAccessTime;
  { 123, 0, },            // FILETIME ftLastWriteTime;
  0,                      // DWORD    nFileSizeHigh;
  123,                    // DWORD    nFileSizeLow;
  0,                      // DWORD    dwReserved0;
  0,                      // DWORD    dwReserved1;
  { TEXT("foo.dir") },    // TCHAR   cFileName[MAX_PATH];
  { TEXT("foo.dir") },    // TCHAR    cAlternateFileName[14];
};

TEST(PrintListingTest, TwoFiles) {
  const TCHAR* kPath = TEXT("c:\\*");
  const HANDLE kValidHandle = reinterpret_cast<HANDLE>(1234);
  MockFindFile ff;

  EXPECT_CALL(ff, FindFirstFile(kPath, _))
    .WillOnce(DoAll(SetArgumentPointee<1>(Dir1),
                    Return(kValidHandle)));
  EXPECT_CALL(ff, FindNextFile(kValidHandle, _))
    .WillOnce(DoAll(SetArgumentPointee<1>(File1),
                    Return(TRUE)))
    .WillOnce(Return(FALSE));
  EXPECT_CALL(ff, GetLastError())
    .WillOnce(Return(ERROR_NO_MORE_FILES));
  EXPECT_CALL(ff, FindClose(kValidHandle));

  PrintListing(&ff, kPath);
}

TEST(PrintListingTest, OneFile) {
  const TCHAR* kPath = TEXT("c:\\*");
  const HANDLE kValidHandle = reinterpret_cast<HANDLE>(1234);
  MockFindFile ff;

  EXPECT_CALL(ff, FindFirstFile(kPath, _))
    .WillOnce(DoAll(SetArgumentPointee<1>(Dir1),
                    Return(kValidHandle)));
  EXPECT_CALL(ff, FindNextFile(kValidHandle, _))
    .WillOnce(Return(FALSE));
  EXPECT_CALL(ff, GetLastError())
    .WillOnce(Return(ERROR_NO_MORE_FILES));
  EXPECT_CALL(ff, FindClose(kValidHandle));

  PrintListing(&ff, kPath);
}

TEST(PrintListingTest, ZeroFiles) {
  const TCHAR* kPath = TEXT("c:\\*");
  MockFindFile ff;

  EXPECT_CALL(ff, FindFirstFile(kPath, _))
    .WillOnce(Return(INVALID_HANDLE_VALUE));

  PrintListing(&ff, kPath);
}

TEST(PrintListingTest, Error) {
  const TCHAR* kPath = TEXT("c:\\*");
  const HANDLE kValidHandle = reinterpret_cast<HANDLE>(1234);
  MockFindFile ff;

  EXPECT_CALL(ff, FindFirstFile(kPath, _))
    .WillOnce(DoAll(SetArgumentPointee<1>(Dir1),
                    Return(kValidHandle)));
  EXPECT_CALL(ff, FindNextFile(kValidHandle, _))
    .WillOnce(Return(FALSE));
  EXPECT_CALL(ff, GetLastError())
    .WillOnce(Return(ERROR_ACCESS_DENIED));
  EXPECT_CALL(ff, FindClose(kValidHandle));

  PrintListing(&ff, kPath);
}

I didn't have to implement any of the mock functions.

gman
The problem is not setting up the stubs. The problem is that Win32 returns complicated data structures, and it takes a long time to make up the test data for these structures. Namely because I don't know what that structure looks like in advance.
Billy ONeal
Pick a specific windows API function to discuss.
gman
How about FindFirstFile/FindNextFile/FindClose?
Billy ONeal
That's a good example to hit (because it can become very complicated, heh). What data do you want to present to your application based on the results of those calls? Further- if your "internal" implementation hangs onto the iterator details and doesn't expose them, perhaps the testing isn't as important here and is better spent on the clients of this returned state.
dash-tom-bang
@dash-tom-bang: Yes -- but if the iterator implementation is complicated, it deserves its own tests. This is how I took care of the problem (scroll down to the edit where I posted code): http://stackoverflow.com/questions/2531874/how-might-i-wrap-the-findxfile-style-apis-to-the-stl-style-iterator-pattern-in-c -- but that doesn't seem to be good code to me. There are examples of APIs that require mocking of 5 or 6 functions, such as the Service Control Manager -- which are *much* more complicated. I was wondering if there was a better way.
Billy ONeal
@dash-tom-bang: The reason for implementing this iterator pattern in the first place is to make testing other code easier. Once you have an iterator pattern, you can test by passing in an iterator to a vector and be done with it.
Billy ONeal
@Billy: I find for myself, being the lazy coder I am, that sufficient tests on the clients of a wrapper API are a good proxy for tests of that API itself, although I know I'll probably find performance to be an issue someday. I've been TDDing sporadically for almost ten years now, though, so /it works for me/. ;)
dash-tom-bang
@Dash-tom-bang: I was of that opinion until I wrote tests for such a class I had been using for months and found it had been leaking resources.
Billy ONeal
@gman: +200% reputation. Enjoy :)
Billy ONeal
@Billy- fair enough. :)
dash-tom-bang