There are a lot of questions out there about whether singletons are "bad," and what patterns to use instead. They're generally focused on the singleton design pattern, which involves retrieving the singleton instance from a static method on the class. This is not one of those questions.
Ever since I really "discovered" dependency injection several months back, I've been driving its adoption in our team, removing static and singleton patterns from our code over time, and using constructor-based injection wherever possible. We've adopted conventions so that we don't have to keep adding explicit bindings to our DI modules. We even use the DI framework to provide logger instances, so that we can automatically tell each logger which class it's in without additional code. Now that I have a single place where I can control how various types are bound, it's really easy to determine what the life cycle of a particular category of classes (utility classes, repositories, etc.) should be.
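To give a rough idea of what I mean, our bindings live in a single Ninject module along these lines (the type names and the logging calls are illustrative placeholders, not our actual code; the convention syntax assumes the Ninject.Extensions.Conventions package and a log4net-style `LogManager`):

```csharp
using log4net;
using Ninject.Extensions.Conventions;
using Ninject.Modules;

public class CoreModule : NinjectModule
{
    public override void Load()
    {
        // Convention-based bindings, so we don't have to add an explicit
        // Bind<IFoo>().To<Foo>() line for every new class we write.
        this.Bind(x => x.FromThisAssembly()
                        .SelectAllClasses()
                        .BindDefaultInterface());

        // Give each class a logger that already knows which type it belongs to,
        // by inspecting the injection target at resolution time.
        Bind<ILog>().ToMethod(ctx =>
            LogManager.GetLogger(ctx.Request.Target.Member.DeclaringType));
    }
}
```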
My initial thinking was that there would probably be some advantage to binding classes as singletons if I expected them to be used fairly often. It just means there's a lot less `new`-ing going on, especially when the object you're creating ends up having a big dependency tree. Pretty much the only non-static fields on any of these classes are the values getting injected into the constructor, so there's relatively little memory overhead in keeping an instance around when it's not being used.
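Switching between the two life cycles is just a different scope call on the binding (again, the repository names here are made up for illustration; these lines would sit inside a module's `Load()`):

```csharp
// One shared instance for the lifetime of the kernel...
Bind<ICustomerRepository>().To<SqlCustomerRepository>().InSingletonScope();

// ...versus a new instance every time the dependency is resolved.
// Bind<ICustomerRepository>().To<SqlCustomerRepository>().InTransientScope();
```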
Then I read "Singleton I love you, but you're bringing me down" on www.codingwithoutcomments.com, and it made me wonder whether I had the right idea. After all, Ninject compiles object-creation functions by default, so there's very little reflection overhead involved in creating additional instances of these objects. Because we're keeping business logic out of the constructor, creating new instances is a really lightweight operation. And the objects don't have a bunch of non-static fields, so there isn't a lot of memory overhead involved in creating new instances, either.
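By "lightweight" I mean the constructors do nothing but capture their dependencies, roughly like this sketch (hypothetical class and interface names):

```csharp
public class OrderService : IOrderService
{
    private readonly ICustomerRepository _customers;
    private readonly ILog _log;

    // No business logic runs here; the constructor just stores what Ninject
    // injects, so activating another instance costs almost nothing.
    public OrderService(ICustomerRepository customers, ILog log)
    {
        _customers = customers;
        _log = log;
    }
}
```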
So at this point, I'm beginning to think that it probably won't matter much either way. Are there additional considerations I haven't taken into account? Has anyone actually experienced a significant improvement in performance by changing the life cycles of certain types of objects? Is following the DI pattern the only really important thing, or are there other reasons that using a single instance of an object can be inherently "bad"?