views: 166
answers: 5

Is it possible to profile individual methods via attributes in .NET?

I am currently trying to locate some of the bottlenecks in a large legacy application that makes heavy use of static methods. Integrating a framework is simply not an option at the moment. Since most of the calls are to static methods, interfaces and dependency injection are not available. Hacking the code to log diagnostics is not a viable solution either.

I know that there are some profiling tools on the market, but they are currently outside the budget. Ideally, I would be able to create my own custom attribute that would log some basic information on method entry and method exit. I've never really worked with custom attributes, so any insight into whether this is even possible would be appreciated.

If possible, I'd like to be able to enable the profiling via a config file. This would support profiling via unit and integration tests.

+2  A: 

You could use PostSharp to do some weaving, basically turning:

[Profiled]
public void Foo()
{
     DoSomeStuff();
}

into

public void Foo()
{
    Stopwatch sw = Stopwatch.StartNew();
    try
    {
        DoSomeStuff();
    }
    finally
    {
        sw.Stop();
        ProfileData.AddSample("Foo", sw.Elapsed);
    }
}

Indeed, looking at the PostSharp documentation, you should be able to use Gibraltar (with PostSharp) for this, if you can afford it. Otherwise you may well end up spending a day or so getting the hang of PostSharp, but it could still be well worth it.

Note that I know you said you couldn't afford to integrate a framework into your codebase, but it's not like you'll really be "integrating" so much as getting PostSharp to run some post-compile transformations on your code.
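
For what it's worth, the aspect behind [Profiled] might look roughly like this. This is only a sketch against PostSharp 2.x's OnMethodBoundaryAspect, and ProfileData.AddSample is the same hypothetical collector as in the expansion above:

using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable] // PostSharp aspects must be serializable
public sealed class ProfiledAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry( MethodExecutionArgs args )
    {
        // Stash a stopwatch in the per-invocation state slot.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit( MethodExecutionArgs args )
    {
        Stopwatch sw = (Stopwatch)args.MethodExecutionTag;
        sw.Stop();
        ProfileData.AddSample( args.Method.Name, sw.Elapsed );
    }
}

OnExit runs whether the method returns normally or throws, so it plays the role of the finally block in the expansion above.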

Jon Skeet
Thanks. PostSharp seems to offer exactly what I need and it won't require any major changes to existing code. It definitely meets all my requirements on my local dev box, but I don't know if it will be useful in production because it appears that it will need to be installed on the servers running the apps. Thanks for the great advice!
Michael
@Michael: Does it *definitely* need to be installed there? I would have *hoped* that for very simple weaving - particularly if you write your own aspects - you shouldn't need any trace of PostSharp after the rewriting. It's a while since I've tried it though.
Jon Skeet
I can't say that it definitely needs to be installed. I've only tried it out for a few hours. The steps in the getting-started tutorial mention the install. At first I tried to only reference the binaries without the install, and it didn't work. The same code, after installing, worked like a charm. I'm hoping that there is a way to use it without installing on each machine, but I still have a bit of learning to do.
Michael
@Michael: It needs to be installed on the *build* machine, but that doesn't necessarily mean it needs to be installed on the *deployment* machine.
Jon Skeet
A: 

There are also some free profiler tools that might be worth looking at.

Darin Dimitrov
A: 

You can't implement this via attributes alone, unless you use Aspect Oriented Programming via something like PostSharp.

You could, however, put conditional logic in there based on a define (potentially set in a build configuration). This would turn your logging with timings on or off depending on the current compile settings.
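
For example, a minimal sketch using the standard ConditionalAttribute (the PROFILING symbol and the Profiler helper are invented names here; define the symbol only in the build configuration you want to profile):

using System;
using System.Diagnostics;

public static class Profiler
{
    // When PROFILING is not defined for the calling project, the compiler
    // removes calls to this method entirely, so there is zero runtime cost.
    [Conditional( "PROFILING" )]
    public static void Log( string message )
    {
        Debug.WriteLine( message ); // swap in your own logging sink
    }
}

Calls like Profiler.Log( "entering Foo" ) then simply vanish from builds where the symbol isn't defined.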

Reed Copsey
+1  A: 

You can't use attributes for what you are doing. However, you do have some choices:

First, many of the profiler tools out there (like RedGate ANTS) are relatively inexpensive ($200-$300), easy to use, and most offer free evaluation periods of a couple of weeks - so you can see if they will give you the lift you need now, before you decide whether to buy them. Also, the .NET CLR profiler is free to download.

If that's not possible, PostSharp is probably the easiest way to weave such logic into your code.

Lastly, if you can't use PostSharp for whatever reason but you were willing to go through and add attributes to your code anyway, you may as well add a simple instrumentation block to each method instead, in the form of a using block:

public void SomeMethodToProfile()
{
    // following line collects information about current executing method
    // and logs it when the metric tracker is disposed of
    using( MetricTracker.Track( MethodBase.GetCurrentMethod() ) )
    { 
        // original code here...
    }
}

A typical MetricTracker implementation looks something like this:

public sealed class MetricTracker : IDisposable
{
    private readonly string m_MethodName;
    private readonly Stopwatch m_Stopwatch;

    private MetricTracker( string methodName ) 
       { m_MethodName = methodName; m_Stopwatch = Stopwatch.StartNew(); }

    void IDisposable.Dispose()
       { m_Stopwatch.Stop(); LogToSomewhere(); }

    private void LogToSomewhere()
       { /* supply your own implementation here...*/ }

    // Returns the tracker (not void) so the using block can dispose it.
    public static MetricTracker Track( MethodBase mb )
       { return new MetricTracker( mb.Name ); }
}
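
And since the question asks about enabling profiling from a config file: a minimal sketch, assuming a standard appSettings entry (the EnableProfiling key is an invented name), is to have Track return null when profiling is off. A using block over null is legal C# and simply skips the Dispose call:

using System;
using System.Configuration;
using System.Reflection;

public static class ProfilingSwitch
{
    // Reads <add key="EnableProfiling" value="true"/> from appSettings.
    private static readonly bool s_Enabled = string.Equals(
        ConfigurationManager.AppSettings["EnableProfiling"], "true",
        StringComparison.OrdinalIgnoreCase );

    public static bool Enabled { get { return s_Enabled; } }
}

// Then in MetricTracker:
public static MetricTracker Track( MethodBase mb )
{
    // using( null ) compiles fine; Dispose is simply skipped when disabled.
    return ProfilingSwitch.Enabled ? new MetricTracker( mb.Name ) : null;
}
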
LBushkin
I like this implementation but it can't be used for our current situation. Though it is very lightweight in comparison to just about everything else, it is very intrusive in that I would need to modify a lot of existing code. A few others have mentioned PostSharp. From my initial review, it seems like it will do what I need. Thanks for the input.
Michael
A: 

I do performance tuning in C#. All I need is this technique. It's not a big deal.

It's based on a simple idea. If you're waiting a lot longer than necessary, that means part of the program is also spending a lot longer than necessary on something that doesn't really need to be done.

And how is it waiting? Nearly always, at a call site, on the call stack.

So if you just pause it while you're waiting, and look at the call stack, you will see what it's waiting for, and if it isn't really necessary (which it usually isn't), you will see why immediately.

Don't trust just one sample - do it a few times. Anything that shows up on more than one stack sample is something that, if you can do something about it, will save a lot of time.

So you see, it's not about timing functions or counting how many times they are called. It's about dropping in on the program unannounced, a few times, and asking it what it is doing and why. If something is wasting 80% (or 20% or whatever) of the time, then 80% of the cycles will be in the state of not being truly necessary, so just drop in on them and take a look. You don't need precision measurement.

It works with big problems. It also works with small problems. And if you do the whole thing more than once, as the program gets fast, the small problems become relatively bigger and easier to find.

Mike Dunlavey