views: 1021

answers: 4

We have lots of logging calls in our app. Our logger takes a System.Type parameter so it can show which component created the call. Sometimes, when we can be bothered, we do something like:

class Foo
{
  private static readonly Type myType = typeof(Foo);

  void SomeMethod()
  {
     Logger.Log(myType, "SomeMethod started...");
  }
 }

since this gets the Type object only once. However, we don't have any actual metrics on this. Does anyone have any idea how much this saves over calling this.GetType() each time we log?

(I realise I could do the metrics myself with no big problem, but hey, what's StackOverflow for?)

Thanks.

+12  A: 

I strongly suspect that GetType() will take significantly less time than any actual logging. Of course, there's the possibility that your call to Logger.Log won't do any actual IO... I still suspect the difference will be irrelevant though.

EDIT: Benchmark code is at the bottom. Results:

typeof(Test): 2756ms
TestType (field): 1175ms
test.GetType(): 3734ms

That's calling the method 100 million times - the optimisation gains a couple of seconds or so. I suspect the real logging method will have a lot more work to do, and calling that 100 million times will take a lot longer than 4 seconds in total, even if it doesn't write anything out. (I could be wrong, of course - you'd have to try that yourself.)

In other words, as normal, I'd go with the most readable code rather than micro-optimising.

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class Test
{
    const int Iterations = 100000000;

    private static readonly Type TestType = typeof(Test);

    static void Main()
    {
        int total = 0;
        // Make sure it's JIT-compiled
        Log(typeof(Test)); 

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            total += Log(typeof(Test));
        }
        sw.Stop();
        Console.WriteLine("typeof(Test): {0}ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            total += Log(TestType);
        }
        sw.Stop();
        Console.WriteLine("TestType (field): {0}ms", sw.ElapsedMilliseconds);

        Test test = new Test();
        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            total += Log(test.GetType());
        }
        sw.Stop();
        Console.WriteLine("test.GetType(): {0}ms", sw.ElapsedMilliseconds);
    }

    // I suspect your real Log method won't be inlined,
    // so let's mimic that here
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Log(Type type)
    {
        return 1;
    }
}
Jon Skeet
Interesting that Sam Meldrum's benchmark below gives an even less significant difference...
Gaz
Given that on my Eee PC (hardly a powerhouse) his test runs in only ~2500 ms, my guess is he's either running a debug version or running under the debugger. However they're doing slightly different things anyway. Note inlining etc.
Jon Skeet
And actually, his test is bogus anyway - just noticed a bug. Will add a comment.
Jon Skeet
Hmm... running a fixed version of Sam's code I get *very* different results. I suspect that pretty much *all* of Sam's code is being optimised out in the first case (with optimisations on.)
Jon Skeet
I get 2198, 2034, 2422 in debug and 826, 525, and 1075 in release. It seems odd to me that the typeof() version isn't faster since that's something known at compile time. Could just not be implemented that way.
Bearddo
I've written a multi-threaded logging library that needed to be high performance, and micro-optimizations brought an increase in throughput from around 300K logging operations per second to more than 450K, just by identifying hot spots (mostly eliminating boxing operations and doing some caching) ...
Pop Catalin
It really depends what's the goal of the library... I usually go for readable code, unless unreadable code performs better (and explain in comments why it's done like that).
Pop Catalin
@Pop: Exactly. Go for the most readable code until you've proved that it's a significant hit to your particular circumstances. Sometimes profiling will show you that, sometimes it's too invasive.
Jon Skeet
+3  A: 

The GetType() function is marked with the special attribute [MethodImpl(MethodImplOptions.InternalCall)]. This means its method body doesn't contain IL but instead is a hook into the internals of the .NET CLR. In this case, it looks at the binary structure of the object's metadata and constructs a System.Type object around it.
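This is easy to check with reflection; here's a quick sketch (using the standard System.Reflection API) that inspects the implementation flags of Object.GetType:

```csharp
using System;
using System.Reflection;

class InternalCallCheck
{
    static void Main()
    {
        // Object.GetType has no IL body; its implementation flags
        // include InternalCall, marking it as a hook into the CLR.
        MethodInfo getType = typeof(object).GetMethod("GetType");
        MethodImplAttributes flags = getType.GetMethodImplementationFlags();
        Console.WriteLine(flags); // the InternalCall flag should be set
    }
}
```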

EDIT: I guess I was wrong about something ...

I said that "GetType() requires a new object to be built", but it seems this is not correct. Somehow, the CLR caches the Type and always returns the same object, so it doesn't need to build a new Type object.

This is based on the following test:

Object o1 = new Object();
Type t1 = o1.GetType();
Type t2 = o1.GetType();
if (object.ReferenceEquals(t1,t2))
    Console.WriteLine("same reference");

So, I don't expect much gain in your implementation.

bruno conde
What makes you think it's creating a new object each time? In fact it's trivial to show that isn't the case. Print object.ReferenceEquals(x.GetType(), x.GetType()).
Jon Skeet
:) I did that even before you wrote this comment and corrected my answer. Thanks.
bruno conde
+2  A: 

I doubt you are going to get a satisfying answer from SO on this subject. The reason is that performance, especially in scenarios of this type, is highly application-specific.

Someone may post back with a quick stopwatch example showing which would be faster in terms of raw milliseconds. But frankly that doesn't mean anything for your application. Why? It depends highly on the usage pattern around that particular scenario. For instance ...

  1. How many types do you have?
  2. How big are your methods?
  3. Do you do this for every method, or only big ones?

These are just a few of the questions that will greatly alter the relevance of a straight time benchmark.

JaredPar
+1. Add the fact that micro-benchmarks can get away with code bloat (extra variables) without creating cache misses, while in a real application caching of values might create GC pressure, cache misses, paging... Micro-benchmarks are irrelevant. Profiling is the way to go.
Pop Catalin
I agree that microbenchmarks should be used with care - but I think in this case they show that the call to GetType() is pretty cheap, so the micro-optimisation is very unlikely to help. Note that this was a static field though - not a lot of extra GC pressure there...
Jon Skeet
Further note: profiling would change the performance of the app *much* more than the difference between calling GetType() and not. It's a naturally invasive operation. Application-specific benchmarking is the best way to go. If you can't show that an optimisation helps *significantly* in the app...
Jon Skeet
... then it's almost certainly not worth doing it. "Significantly" must mean *at least* measurably under normal conditions (i.e. not under a full profiler - just very low-impact timers if necessary.)
Jon Skeet
@Jon Skeet: I agree, but considering the fact that this is a logging application I was assuming it was fine-tuned (micro-optimized), because in such a context it makes sense to micro-optimize (for latency, throughput, memory etc. ...)
Pop Catalin
@Pop: It's not a logging application - it's an application which logs. I'd expect the code within the logging library to be highly tuned, but I don't think the application code needs to be, unless they're *really* calling this method hugely often.
Jon Skeet
+1  A: 

The difference is probably negligible as far as application performance is concerned. But the first approach where you cache the type should be faster. Let's go and test.

This code will show you the difference:

using System;

namespace ConsoleApplicationTest {
    class Program {
        static void Main(string[] args) {

            int loopCount = 100000000;

            System.Diagnostics.Stopwatch timer1 = new System.Diagnostics.Stopwatch();
            timer1.Start();
            Foo foo = new Foo();
            for (int i = 0; i < loopCount; i++) {
                foo.SomeMethod();
            }
            timer1.Stop();
            Console.WriteLine(timer1.ElapsedMilliseconds);

            System.Diagnostics.Stopwatch timer2 = new System.Diagnostics.Stopwatch();
            timer2.Start();
            Bar bar = new Bar();
            for (int i = 0; i < loopCount; i++) {
                bar.SomeMethod();
            }
            timer2.Stop();
            Console.WriteLine(timer2.ElapsedMilliseconds);

            Console.ReadLine();
        }
    }

    public class Bar {
        public void SomeMethod() {
            Logger.Log(this.GetType(), "SomeMethod started...");
        }
    }

    public class Foo {
        private static readonly Type myType = typeof(Foo); 
        public void SomeMethod() { 
            Logger.Log(myType, "SomeMethod started..."); 
        }
    }

    public class Logger {
        public static void Log(Type type, string text) {
        }
    }
}

On my machine, this gave results of approx. 1500 milliseconds for the first approach and approx. 2200 milliseconds for the second.

(code and timings corrected - doh!)

Sam Meldrum
Try changing the second foo.SomeMethod() to bar.SomeMethod(). You're currently benchmarking the same thing twice :)
Jon Skeet
Thanks Jon - too hasty - doh!
Sam Meldrum