I'm looking for a profiler for my C# application being developed in Visual Studio 2008. I want something inexpensive (open source preferred) that can integrate into VS2008. I found the Visual Studio Profiler, but I don't know how to use it. I installed the Standalone version, which depends on Visual Studio (not so stand-alone, I guess?), but nothing ever shows up in the Tools menu like the walkthrough says it will.
The Visual Studio Profiler is part of Team System only. It is not included in Visual Studio Professional.
There is a free .NET profiler called nprof, but it hasn't had a stable release yet and can be rather volatile. There are also some excellent commercial profilers, such as ANTS Profiler from Red Gate; however, these are not low cost.
Here's a list of open source .NET profilers.
I have used and like ANTS Profiler from Red Gate, but it does cost money (well worth it, IMHO).
There is some discussion of profilers for .NET in this stackoverflow thread. I have used the CLR Profiler a bit, and it has helped me take care of a few performance issues before. It could be worth a try. Microsoft has published a guide on how to use the CLR Profiler.
My recommendation is dotTrace. It isn't free; the price is 170 EUR for a Personal License.
If you just want to do memory profiling, the .NET Memory Profiler is excellent. It has a trial period and a small cost after that -- well worth it. If you want to spend some money, DevPartner Studio is very good.
Check out the EQATEC Profiler; it's free and works pretty well. It also works for ASP.NET and .NET CF.
For performance tuning, as opposed to memory diagnostics, there's a simple way to do it.
It's counterintuitive, but all you have to do is run the program under the IDE, and while it's being slow, pause it several times, examining the call stack to see why it's doing whatever it's doing. Chances are excellent that multiple samples will show it doing something that you could eliminate. The time saved is roughly equal to the fraction of samples that contained the code you fixed.
It is "quick and dirty", but unlike most profilers, it pinpoints the actual statements needing attention, not just the functions containing them. It also gives directly a rough estimate of the speedup you can expect by fixing them. It is not confused by recursion, and it avoids the call-tree difficulty that a problem might be small in any branch, but could be big by being spread over many brances.
I take several samples, N, usually no more than 20. If there is a hotspot or a rogue method call somewhere mid-stack, taking some fraction F of the execution time, then the number of samples that will show it is NF ± sqrt(NF(1-F)). If N=20 and F=0.15, for example, the number of samples that will show it is 3 ± 1.6, so I have an excellent chance of finding it.
Often F is more like 0.5, so the number of samples showing it is 10 ± 2.2, so it will not be missed.
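If you want to check those numbers, the formula is just the binomial mean and standard deviation. A small sketch (the class and method names are mine):

    using System;

    class SampleMath
    {
        // A spot taking fraction F of the time is hit by each of N samples
        // with probability F, so the hit count is binomial:
        // mean NF, standard deviation sqrt(NF(1-F)).
        static void Show(int n, double f)
        {
            double mean = n * f;
            double sd = Math.Sqrt(n * f * (1 - f));
            Console.WriteLine("N={0}, F={1}: {2:0.0} +- {3:0.0} samples", n, f, mean, sd);
        }

        static void Main()
        {
            Show(20, 0.15);   // prints N=20, F=0.15: 3.0 +- 1.6 samples
            Show(20, 0.5);    // prints N=20, F=0.5: 10.0 +- 2.2 samples
        }
    }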
Notice this has absolutely nothing to do with how fast the code is, or how often it runs. If optimizing it will save you a certain percentage of time, that determines what percentage of samples will display it for you.
Usually there are multiple places to optimize. If problem 1 has F1=0.5, and problem 2 has F2 = 0.1, then if you fix problem 1 (doubling the program's speed), then F2 usually increases by that factor, to 0.2. So you can do it again and be sure of finding problem 2. In this way, you can knock down a succession of problems, until the code is practically optimal.
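That rescaling is easy to verify; here is the arithmetic from the paragraph above, worked out in code (names are mine):

    using System;

    class SpeedupMath
    {
        static void Main()
        {
            double f1 = 0.5;   // problem 1's fraction of total time
            double f2 = 0.1;   // problem 2's fraction of total time

            // Fixing problem 1 removes its time, so the program now takes
            // (1 - f1) of the original time: a 2x speedup here.
            double remaining = 1 - f1;
            Console.WriteLine("Speedup from fix 1: {0}x", 1 / remaining);

            // Problem 2's absolute time is unchanged, but it is now a
            // larger fraction of the smaller total: 0.1 / 0.5 = 0.2.
            Console.WriteLine("Problem 2's new fraction: {0}", f2 / remaining);
        }
    }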