Hi,

I currently have a function:

public static Attribute GetAttribute(MemberInfo Member, Type AttributeType)
{
    Object[] Attributes = Member.GetCustomAttributes(AttributeType, true);

    if (Attributes.Length > 0)
        return (Attribute)Attributes[0];
    else
        return null;
}

I am wondering if it would be worthwhile to cache all the attributes on a property into an Attribute = _cache[MemberInfo][Type] dictionary.

This would require calling GetCustomAttributes without a type parameter and then enumerating over the result. Is it worth it?
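
Roughly what I have in mind, as a sketch (class and variable names are placeholders, and it is not thread-safe as written):

using System;
using System.Collections.Generic;
using System.Reflection;

// Sketch only: one GetCustomAttributes(true) call per member, with the results
// indexed by their runtime type afterwards. Note this matches exact attribute
// types only, whereas GetCustomAttributes(AttributeType, true) also matches
// attributes derived from AttributeType.
public static class AttributeCache
{
    private static readonly Dictionary<MemberInfo, Dictionary<Type, Attribute>> _cache =
        new Dictionary<MemberInfo, Dictionary<Type, Attribute>>();

    public static Attribute GetAttribute(MemberInfo member, Type attributeType)
    {
        Dictionary<Type, Attribute> byType;
        if (!_cache.TryGetValue(member, out byType))
        {
            byType = new Dictionary<Type, Attribute>();
            foreach (Attribute attribute in member.GetCustomAttributes(true))
                byType[attribute.GetType()] = attribute; // last one wins on duplicates
            _cache[member] = byType;
        }

        Attribute result;
        return byType.TryGetValue(attributeType, out result) ? result : null;
    }
}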

+4  A: 

The only way you can know for sure is to profile it. I am sorry if this sounds like a cliché, but the reason a saying becomes a cliché is often that it's true.

Caching the attribute actually makes the code more complex and more error prone, so you might want to take this cost (your development time) into account before you decide.

So, like any optimization, don't do it unless you have to.

In my experience (I am talking about an AutoCAD-like Windows application, with a lot of click-and-edit GUI operations and heavy number crunching), reading custom attributes has never, not even once, been the performance bottleneck.

Ngu Soon Hui
Alas, I have no profiling tools available to test properly. It probably is premature, but figuring that it's a reasonably low-level function in my app, I wanted to get the most out of it.
Courtney de Lautour
Courtney: You can do some simple profiling by calling each implementation in a loop that repeats many times and measuring how long each loop takes to run. You don't really need a profiler in this case.
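Something like this would do (just a rough sketch; the member, attribute type, and iteration count are arbitrary placeholders):

using System;
using System.Diagnostics;
using System.Reflection;

class Benchmark
{
    static void Main()
    {
        const int iterations = 1000000;
        MemberInfo member = typeof(string).GetProperty("Length");

        // Time the uncached implementation; repeat the same loop for the cached one.
        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            Attribute.GetCustomAttribute(member, typeof(ObsoleteAttribute), true);
        }
        watch.Stop();
        Console.WriteLine("Uncached: {0} ms", watch.ElapsedMilliseconds);
    }
}

Run it in Release mode and compare the numbers for the two implementations.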
Antoine Aubry
+2  A: 

Are you actually having a performance problem? If not, then don't do it until you need it.

It might help, depending on how often you call the method with the same parameters. If you only call it once per (MemberInfo, Type) combination, then it won't do any good. Even if you do cache it, you are trading speed for memory consumption. That might be fine for your application.

Mike Two
+3  A: 

Your question is a case of premature optimization.

You don't know the inner workings of the reflection classes and therefore are making assumptions about the performance implications of calling GetCustomAttributes multiple times. The method itself could well cache its output already, meaning your code would actually add overhead with no performance improvement.

Save your brain cycles for thinking about things which you already know are problems!

Programming Hero
I was actually going to say in the question (but ended up editing it out) that I don't know whether it does that internally.
Courtney de Lautour
+2  A: 

You will get better bang for your buck if you replace the body of your method with this:

return Attribute.GetCustomAttribute(Member, AttributeType, false); // only look at the current member and don't go up the inheritance tree
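
In other words, the whole method collapses to this:

public static Attribute GetAttribute(MemberInfo Member, Type AttributeType)
{
    // inherit: false, so only the member itself is inspected, not its base declarations.
    return Attribute.GetCustomAttribute(Member, AttributeType, false);
}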

If you really need to cache on a per-type basis:

public static class MyCacheFor<T>
{
    static MyCacheFor()
    {
        // Grab the data once per closed type T, in the type initializer.
        Value = ExtractExpensiveData(typeof(T)); // placeholder for the expensive work
    }

    public static readonly MyExpensiveToExtractData Value;
}

Beats dictionary lookups every time. Plus it's thread-safe :)
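
For illustration, the same idea applied to attributes keyed by type (AttributeCacheFor is a made-up name; this only works when your cache key is a Type, a MemberInfo key still needs a dictionary):

using System;

// The reflection work runs at most once per closed type T; the CLR's type
// initialization takes care of both the caching and the thread safety.
public static class AttributeCacheFor<T>
{
    public static readonly Attribute[] Values =
        Attribute.GetCustomAttributes(typeof(T), false);
}

// Usage, e.g.: Attribute[] attrs = AttributeCacheFor<MyType>.Values;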

Cheers, Florian

PS: It depends how often you call this. I had some cases where doing a lot of serialization using reflection really called for caching. As usual, the modus operandi is this:

  1. Write
  2. Test
  3. Debug
  4. Test again
  5. CPU profile
  6. Mem profile
  7. Optimize
  8. Test once more
  9. CPU profile
  10. Mem profile (this one is important, as your optimization of CPU-bound code will certainly shift the burden to memory)
Florian Doyon
I am doing serialization via reflection, so it may be worth doing at some point. However, as everyone here has said, there's no point in optimizing until it's a problem :) Cheers
Courtney de Lautour
In this case, let me share a trick: if you're going to cache something where your key is a type, use a generic type rather than a hashtable. I updated my code above.
Florian Doyon