
I've been sitting on this idea for quite a long time and would like to hear what you guys think about it.

The standard idiom for writing a singleton is roughly as follows:

public class A {
...
   private static A _instance;

   public static A Instance() {
      if(_instance == null) {
          _instance = new A();
      }

      return _instance;
   }
...
}

Here I'm proposing another solution:

public class A {
...
   private static A _instance;

   public static A Instance() {
       try {
         return _instance.Self();
       } catch(NullReferenceException) {
         _instance = new A();
       }           

       return _instance.Self();
   }

   public A Self() {
       return this;
   }
...
}

The basic idea behind it is that the runtime cost of one dereference plus an unthrown exception is less than that of one null check. I've tried to measure the potential performance gain, and here are my numbers:

Sleep 1sec (try/catch): 188788ms

Sleep 1sec (nullcheck): 207485ms

And the test code:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Diagnostics;

public class A
{
   private static A _instance;
   public static A Instance() {
       try {
         return _instance.Self();
       } catch(NullReferenceException) {
         _instance = new A();
       }           

       return _instance.Self();
   }

   public A Self() {
       return this;
   }

    public void DoSomething()
    {
     Thread.Sleep(1);
    }
}

public class B
{
   private static B _instance;

   public static B Instance() {
      if(_instance == null) {
          _instance = new B();
      }

      return _instance;
   }

    public void DoSomething()
    {
     Thread.Sleep(1);
    }
}

public class MyClass
{
    public static void Main()
    {
     Stopwatch sw = new Stopwatch();
     sw.Reset();

     sw.Start();
     for(int i = 0; i < 100000; ++i) {
      A.Instance().DoSomething();
     }

     Console.WriteLine(sw.ElapsedMilliseconds);
     sw.Reset();

     sw.Start();
     for(int i = 0; i < 100000; ++i) {
      B.Instance().DoSomething();
     }
     Console.WriteLine(sw.ElapsedMilliseconds);

     RL();
    }

    #region Helper methods

    private static void WL(object text, params object[] args)
    {
     Console.WriteLine(text.ToString(), args); 
    }

    private static void RL()
    {
     Console.ReadLine(); 
    }

    private static void Break() 
    {
     System.Diagnostics.Debugger.Break();
    }

    #endregion
}

The resulting performance gain is almost 10%. The question is whether this is just a micro-optimization, or whether it can offer a significant performance boost to singleton-happy applications (or their middleware, such as logging)?

+17  A: 

What you're asking about is the best way to implement a bad singleton pattern. You should have a look at Jon Skeet's article on how to implement the singleton pattern in C#. You'll find that there are much better (safer) ways and they don't suffer from the same performance issues.
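For reference, a sketch of the fully lazy, thread-safe nested-class variant that the article describes (the class names here are only illustrative):

public sealed class Singleton {
    private Singleton() { }

    public static Singleton Instance {
        get { return Nested.instance; }
    }

    private class Nested {
        // The explicit static constructor tells the C# compiler not to
        // mark the type beforefieldinit, keeping instantiation lazy
        // until Instance is first read.
        static Nested() { }

        internal static readonly Singleton instance = new Singleton();
    }
}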

tvanfosson
Aha, enlightening!
arul
I think what arul is asking is whether his solution would avoid having to perform a null check in a double-checked locking scenario. I believe his solution would be viable provided the null-check pattern is incorporated.
Brett Ryan
But it's actually more complicated and (I think) slower than the nested class, fully-lazily instantiated version.
tvanfosson
+6  A: 

I think that if my application is calling a singleton accessor often enough for a 10% performance boost to mean anything, I'd be rather worried.

That said, neither version is thread-safe. Focus on getting things like that right first.

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."

Sam DeFabbia-Kane
That quote isn't an answer to everything, and it's sometimes quite misused. When working in a resource-constrained environment you have to optimize to get the best result.
arul
No, it's not the answer to every situation. And if you're working in a resource-constrained environment, that might be different. But I know that if I were in a position where a 10% performance gain on a singleton constructor were significant, I'd probably start looking for my real problem.
Sam DeFabbia-Kane
It may actually be 1%, which is still enough to matter for e.g. computer games.
arul
You shouldn't use a singleton *in the first place* if you care about performance. It becomes a bottleneck in threaded code, and it doesn't play nice with the cache either (compared to local variables). Finally, of course, that quote isn't an answer to everything, but it **certainly is the answer when you're comparing two incorrect implementations**. So you've implemented a singleton which will fail 10% faster than the other buggy implementation? What good is that? If you're going to use a singleton (which you generally shouldn't), get it right before you make it fast.
jalf
+2  A: 

This optimization seems trivial. I've always been taught to use try/catch blocks only to catch conditions that would be difficult or impossible to check with an if statement.

It's not that your proposed approach wouldn't work. It just isn't significantly better than the original way.

Justin R.
Well, _instance being null is kind of an exceptional state; it happens only once during the lifetime of the app (or if it's explicitly set to null through reflection, which is again exceptional enough). :)
arul
arul, `_instance == null` is still an expected state, and the exception would cost much more time than you are ever going to recover.
Henk Holterman
Not for me, sorry, my thinking process is different here. Expected maybe, but in a wider context damn exceptional.
arul
It's not exceptional if you know with certainty it will always happen.
Shin
@Shin: good point, didn't change my mind though :>
arul
I agree with Shin and Henk on this. An exception can be foreseeable, but not predictable. At run time it *could* happen, but it *shouldn't* happen under normal circumstances. The fact that `_instance` will always be `null` when `Instance()` is first called means the situation is predictable and not exceptional. That's how I approach the situation, at least.
Justin R.
A: 

Hi Arul, I think that your solution is perfectly acceptable, though I do think there are a few things you should consider (and that are rarely considered) before implementing a lazy-loaded singleton anti-pattern.

  1. Could you provide a DI version instead? Would a Unity container with an interface contract suffice? This would allow you to swap the implementation later if need be, and it also makes testing a lot easier.
  2. If you must insist on using a singleton, do you really need a lazy-loaded implementation? The cost of creating the instance in the static constructor, whether implicitly or explicitly, is only incurred once the class is referenced at run time, and in my experience almost ALL singletons are only ever referenced when they want to get access to "Instance" anyway (see the sketch after this list).
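A sketch of the non-lazy approach from point 2, with an explicit static constructor (the Config name is made up for illustration):

public sealed class Config {
    private static readonly Config _instance = new Config();

    // The explicit static constructor prevents the beforefieldinit
    // optimization, so _instance is created no earlier than the first
    // access to a static member of this class.
    static Config() { }

    private Config() { }

    public static Config Instance {
        get { return _instance; }
    }
}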

If you need to implement it the way you've described, you'd do it like the following.

UPDATE: I've updated the example to lock on the class type instead of on a lock variable, as Brian Gideon points out the instance could be in a half-initialized state. Any use of lock(typeof(...)) is strongly advised against, and I recommend that you never use this approach.

public class A {
    private static A _instance;
    private A() { }
    public static A Instance {
        get {
            try {
                return _instance.Self();
            } catch (NullReferenceException) {
                lock (typeof(A)) {
                    if (_instance == null)
                        _instance = new A();
                }
            }
            return _instance.Self();
        }
    }
    private A Self() { return this; }
}
Brett Ryan
Any reason for the down-vote?
Brett Ryan
I'd like to know that too.
arul
That is not thread-safe. There is still a problem because you have not created an explicit memory barrier in the try block.
Brian Gideon
Wouldn't the fact that the barrier has been defined within the catch, where the object is initialized, be enough?
Brett Ryan
Unfortunately not. A second thread may never see the write to _instance. It is also possible that the second thread grabs the new instance in a half-initialized state since there is not any synchronization mechanism in the try block. That is why lock free strategies are *extremely* difficult to get right.
Brian Gideon
Thanks for pointing that out, Brian. I've updated the example to lock the type instead, which I strongly advise against doing in any case, and I now change my recommendation to never use this approach in any circumstance. My original belief still stands: if it's not too costly, just use a static constructor and don't use lazy loading, or use a DI container that can manage the instance for you.
Brett Ryan
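For comparison with the exchange above, a conventional double-checked-locking sketch with a dedicated lock object and a volatile field (this is not the code Brett posted):

public sealed class A {
    // volatile gives the write to _instance release semantics and the
    // read acquire semantics, so other threads never observe a
    // half-constructed instance.
    private static volatile A _instance;
    private static readonly object _sync = new object();

    private A() { }

    public static A Instance {
        get {
            if (_instance == null) {
                lock (_sync) {
                    if (_instance == null)
                        _instance = new A();
                }
            }
            return _instance;
        }
    }
}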
A: 

The "improved" implementation has a big problem and that is that it fires an interrupt. A rule of thumb is that application logic should not be dependent on exceptions firing.

Dejan
A: 

You should never use exception handling for control flow.

Besides, instead of:

if(_instance == null) {
    _instance = new A();
}

return _instance;

you can do:

return _instance ?? (_instance = new A());

Which is semantically the same.

codymanix
Which also emits pretty much the same IL code, I assume?
arul
I didn't look but it should.
codymanix
+7  A: 

This is a horrid way to do this. As others have pointed out, exceptions should only be used for handling exceptional, unexpected situations. And because we make that assumption, many organizations run their code in contexts which aggressively seek out and report exceptions, even handled ones, because a handled null reference exception is almost certainly a bug. If your program is unexpectedly dereferencing invalid memory and continuing merrily along then odds are good that something is deeply, badly broken in your program and it should be brought to someone's attention.

Do not "cry wolf" and deliberately construct a situation that looks horribly broken but is in fact by design. That's just making more work for everyone. There is a standard, straightforward, accepted way to make a singleton in C#; do that if that's what you mean. Don't try to invent some crazy thing that violates good programming principles. People smarter than me have designed a singleton implementation that works; it's foolish to not use it.

I learned this the hard way. Long story short, I once deliberately used a test that would most of the time dereference bad memory in a mainline scenario in the VBScript runtime. I made sure to carefully handle the exception and recover correctly, and the ASP team went crazy that afternoon. Suddenly all their checks for server integrity started reporting that huge numbers of their pages were violating memory integrity and recovering from that. I ended up rearchitecting the implementation of that scenario to only do code paths that did not result in exceptions.

A null ref exception should always be a bug, period. When you handle a null ref exception, you are hiding a bug.

More musing on exception classifications:

http://blogs.msdn.com/ericlippert/archive/2008/09/10/vexing-exceptions.aspx

Eric Lippert
Is there any other reason besides it being "horrid, bad and against everything we've been taught"? The end justifies the means ... the .NET CLR also happily dereferences invalid memory instead of testing it against null.
arul
I don't understand your question. I give you perfectly good reasons why it is horrid: because it makes your program hard to debug, because it makes your program hard to understand, because it makes your program appear to be badly broken to analysis tools, and because there are well-established better ways to do it.
Eric Lippert
A: 

why not just:

public class A {
...
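// Note: with only a field initializer and no explicit static constructor
// the type is marked beforefieldinit, so the instance may be created
// before Instance() is first called; Jon Skeet's article mentioned above
// discusses this trade-off.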
private static A _instance = new A();

public static A Instance() {
  return _instance;
}
...
}
Esben Skov Pedersen