My job involves system-level performance testing with third-party tools that I do not have source for. I'm also testing Windows itself, and can use debugging symbols but not Windows source code. I'd like a quantitative way to describe the areas of the host OS my tests cover. There are two big steps here: identifying which DLLs and functions I want to look at, and then determining how to profile calls to them.
Ideas for coverage:
- All functions exported from kernel32.dll, ntdll.dll, user32.dll, and the other main built-in modules. This is probably massive overkill and will likely flag lots of gaps that only involve deprecated functionality.
- Just the module names for any DLLs used by the target application. Not as detailed, but also less likely to miss key functionality in the target app.
- App-specific modules, like d3d10.dll for DirectX 10 apps.
- Basic blocks. I'm guessing this would be a PhD thesis amount of work.
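For the export-table options above, one rough way to build the target-function list is to save `dumpbin /exports` output for each module and parse out the symbol names. A minimal sketch, assuming the usual "ordinal / hint / RVA / name" column layout (the regex is an illustration and may need adjusting for forwarded exports or a different toolchain):

```python
import re

# Matches the "ordinal  hint  RVA  name" rows of `dumpbin /exports` output.
# NOTE: the exact column layout is an assumption; forwarded exports append
# an "= target" suffix that this pattern simply ignores.
EXPORT_ROW = re.compile(r"^\s*\d+\s+[0-9A-Fa-f]+\s+[0-9A-Fa-f]{8}\s+(\S+)")

def exported_functions(dumpbin_text: str) -> set[str]:
    """Collect exported symbol names from saved dumpbin output."""
    names = set()
    for line in dumpbin_text.splitlines():
        m = EXPORT_ROW.match(line)
        if m:
            names.add(m.group(1))
    return names

# Hypothetical two-row excerpt, just to show the shape of the output.
sample = """
    ordinal hint RVA      name
          1    0 00010440 AcquireSRWLockExclusive
          2    1 000104A0 AcquireSRWLockShared
"""
print(sorted(exported_functions(sample)))
```

Running this per DLL gives a denominator for the coverage metric without needing any source code, only the redistributable binaries.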
Profiling ideas:
- Run VTune call graph analysis on all of my tests. This sort of works, but seems to give only a limited view of which built-in functions actually get called.
- Dynamically instrument the app with something like Pin or DynamoRIO. Possible con: slow.
- Catch calls with WinDbg. Not sure if this would be easier or quicker than Pin.
- Static analysis using a disassembly tool like IDA Pro.
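Whichever of these tools produces the call trace, the coverage number itself is just set arithmetic over the target list and the observed calls. A minimal sketch (the function names below are made up for illustration):

```python
def coverage_report(target_funcs: set[str],
                    observed_calls: set[str]) -> tuple[float, set[str]]:
    """Fraction of target functions hit, plus the uncovered remainder."""
    hit = target_funcs & observed_calls
    missed = target_funcs - observed_calls
    pct = len(hit) / len(target_funcs) if target_funcs else 0.0
    return pct, missed

# Hypothetical example: targets from a module's export list, calls from a trace.
targets = {"CreateFileW", "ReadFile", "WriteFile", "VirtualAlloc"}
trace = {"CreateFileW", "ReadFile", "HeapAlloc"}
pct, missed = coverage_report(targets, trace)
print(f"{pct:.0%} covered, missed: {sorted(missed)}")  # 50% covered
```

The interesting engineering is entirely in producing `observed_calls` cheaply; the reporting side stays this simple even at basic-block granularity (the sets just hold block addresses instead of names).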
Is there any published work along these lines on Windows? Have you ever used one of these tools for hooking or logging enough that you could recommend it?