I ran the Visual Studio 2008 profiler against my ASP.NET application and came up with the following result set.

CURRENT FUNCTION                                      TIME (msec)
---------------------------------------------------|--------------
Data.GetItem(params)                               |   10,158.12
---------------------------------------------------|--------------

Functions that were called by Data.GetItem(params)    TIME (msec)
---------------------------------------------------|--------------
Model.GetSubItem(params)                           |     0.83
Model.GetSubItem2(params)                          |     0.77
Model.GetSubItem3(params)                          |     0.76
etc.

The issue I'm facing is that the times of the functions called by Data.GetItem(params) don't come close to adding up to the 10,158.12 msec total. This leads me to believe that the bulk of the time is actually spent executing code within the body of that method.

My question is: does Visual Studio provide a way to analyze the method itself, so I can see which sections of code are taking the longest? If it does not, are there any recommended tools for this, or should I start writing my own timing scripts?
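To clarify what I mean by "timing scripts", I'm picturing something roughly like the sketch below, with a Stopwatch around each chunk of the method. The section names (LoadHeader, LoadDetails, BuildItem) are made up for illustration and aren't my real code:

    using System.Diagnostics;

    public Item GetItem(ItemParams p)
    {
        // Time each logical section of the method body and write the
        // elapsed time to the debug output to see where the ~10 seconds go.
        Stopwatch sw = Stopwatch.StartNew();
        var header = LoadHeader(p);                 // hypothetical section 1
        Debug.WriteLine("LoadHeader: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        var details = LoadDetails(header);          // hypothetical section 2
        Debug.WriteLine("LoadDetails: " + sw.ElapsedMilliseconds + " ms");

        sw = Stopwatch.StartNew();
        Item result = BuildItem(header, details);   // hypothetical section 3
        Debug.WriteLine("BuildItem: " + sw.ElapsedMilliseconds + " ms");

        return result;
    }

That works, but it gets tedious quickly, which is why I'd prefer a profiler that can do it for me.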

Thank you

+2  A: 

The VS 2008 profiler does not support block-level profiling, but I believe that Red Gate's ANTS profiler does.

S.Skov
I've been using Redgate ANTS for just this purpose, and I'm quite happy with it.
Odrade
@Odrade: From the website, I can't figure out what ANTS actually does. It kinda looks like maybe it samples the call stack on wall-clock time. If so, that's what I think it should do. But it also has things like hit counts, which I think are useless, and they imply instrumentation, which I also think is useless. The timeline is a nice concept, but an even more useful idea would be just to have profiling done while a hot-key is pressed. Also, they seem to fall for the idea that the routine to look at is ...
Mike Dunlavey
@Odrade: ... the one that exhibits the greatest drop in inclusive time between it and its children. That only finds hotspots, and (IMO) in real software most performance problems are extraneous function calls, not hotspots. OTOH, if they are stack-sampling, on wall-clock time, ignoring recursion, and reporting inclusive percent on a line-by-line basis, then I think they are on the right track. Zoom does this. LTProf does this (but doesn't have a hot key). And, of course, the good-old manual method does it too.
Mike Dunlavey
+1  A: 

Don't concentrate on timing the code; that's the top-down approach.

Bottom-up is far more effective: pause the program under the debugger a handful of times while it is being slow and look at the call stack each time. Whatever lines keep showing up across samples are where the time is going. This manual sampling method works just fine in Visual Studio.

Mike Dunlavey
The manual sampling method can help identify which methods are CPU hogs, but the OP has already identified that (more or less).
Odrade
@Odrade: Devil's in the details. a) Which *lines of code* are at fault? Manual sampling pinpoints them. b) Concept of "CPU hog" - Programs spend time by calling other programs, so you can have very wasteful code that almost never "hogs" the CPU. Manual sampling pinpoints those calls. (There are profilers, like Zoom, that find these, but most do not.)
Mike Dunlavey
@Odrade: ... and don't forget programs that waste time by (unknown to the programmer) doing unnecessary I/O. Manual sampling pinpoints that too, and the VS profiler does not. It says "do instrumentation", but that doesn't pinpoint guilty lines of code, only functions.
Mike Dunlavey
A: 

Another approach would be to break your GetItem method up into a number of smaller methods (perhaps doing a binary chop) to narrow down where the time is being spent. Probably easier than writing your own timing scripts.
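For example, something like this skeleton, where the helper names and the PartialResult type are placeholders and the bodies are simply the two halves of the existing GetItem code:

    public Item GetItem(ItemParams p)
    {
        // Split the original body in two; the profiler's per-function timings
        // will then show which half most of the 10 seconds belongs to.
        PartialResult partial = GetItemFirstHalf(p);
        return GetItemSecondHalf(partial);
    }

    private PartialResult GetItemFirstHalf(ItemParams p)
    {
        // ... first half of the original GetItem code ...
    }

    private Item GetItemSecondHalf(PartialResult partial)
    {
        // ... second half of the original GetItem code ...
    }

Whichever half dominates the next profiler run gets split again, and a few iterations of that narrows things down to a fairly small block of code.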

Craig Fisher