I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
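Roughly what I mean, as a minimal sketch with made-up names (not real code from my project):

    public interface IMessageSender { void Send(string message); }

    public class SmtpSender : IMessageSender
    {
        public void Send(string message) { /* send via SMTP */ }
    }

    public class Notifier
    {
        private readonly IMessageSender _sender;

        // the dependency comes in through the constructor as an interface
        public Notifier(IMessageSender sender) { _sender = sender; }

        public void Notify(string text) { _sender.Send(text); }
    }

    public static class NotifierFactory
    {
        // the factory decides which concrete class gets injected
        public static Notifier Create() { return new Notifier(new SmtpSender()); }
    }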

I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.

What I got from this is:

A) DI frameworks are awesome, but B) it means we don't have to worry about how our system is designed in terms of dependencies.

For me, I'm used to thinking hard about how to loosely couple my system but at the same time keep some sort of control over dependencies.

I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.

Or is that just the point and this is the future and I should just go with it?

Thoughts?

+1  A: 

I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies, so that whoever creates it knows what to supply for it to operate correctly.

If dependency injection makes you lazy, maybe it's because it forces someone else to deal with dependencies? That's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with resolving dependencies, because it declares them upfront. And they can be managed because they are explicit.

Edit: Added the last sentence above.

MSN
+7  A: 

I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
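For example, with Castle Windsor the central wiring might look roughly like this (reusing the hypothetical IMessageSender/SmtpSender/Notifier from the question, and assuming the fluent registration API):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    var container = new WindsorContainer();

    // all wiring in one place - swapping SmtpSender for another sender only changes this registration
    container.Register(
        Component.For<IMessageSender>().ImplementedBy<SmtpSender>(),
        Component.For<Notifier>());

    // the container builds Notifier and supplies the IMessageSender it asks for
    var notifier = container.Resolve<Notifier>();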

You still have to think about what interfaces you need and getting them right is not always a trivial matter.

I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.

Brian Rasmussen
+3  A: 

I think that ideally, if you already have a loosely coupled system, using a container will only move the place where you take the dependencies out of your code, making them softer, and let your system depend on the container to build your object graph.

In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was, so in this way it may help you to create a better design.

Well, I'm a newbie at this subject, so maybe I'm not quite right.

Cheers.

Fredy Treboux
+1  A: 

Dependency injection can be a bit difficult to get used to - instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.

It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.

This not only makes loose coupling easier, it makes it nearly unavoidable, which makes it much simpler to do things like run your object in a mock environment - the IoC container takes the place of the run environment.
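A rough sketch of that idea, reusing the hypothetical Notifier/IMessageSender from the question - in a test you hand the class a hand-rolled fake instead of the real dependency:

    using System.Collections.Generic;
    using System.Diagnostics;

    var fake = new FakeSender();
    var notifier = new Notifier(fake);      // same class, mock environment
    notifier.Notify("hello");
    Debug.Assert(fake.Sent.Count == 1);     // observe the interaction instead of really sending anything

    // fake standing in for the real dependency
    public class FakeSender : IMessageSender
    {
        public List<string> Sent = new List<string>();
        public void Send(string message) { Sent.Add(message); }
    }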

Steve B.
+9  A: 

One basic OO principle is that you want your code to depend on interfaces and not implementations; DI is how we do that. Historically, here is how it evolved:

  1. People initially created classes they depended upon by "new'ing" them:

    IMyClass myClass = new MyClass();

  2. Then we wanted to remove the direct instantiation, so static methods were used to create them:

    IMyClass myClass = MyClass.Create();

  3. Then we no longer depended on the lifecycle of the class, but still depended on it for instantiation, so we used a factory:

    IMyClass myClass = MyClassFactory.Create();

  4. This moved the direct dependency from the consuming code to the factory, but we still had the dependency on MyClass indirectly, so we used the service locator pattern like this:

    IMyClass myClass = (IMyClass)Context.Find("MyClass");

  5. That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can, with dependency injection. If you use property injection, you simply put a property setter for the interface you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and of your class.
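For example, property injection might look roughly like this (a sketch with a made-up consumer class; the container fills the property when it builds the object):

    public class ReportService
    {
        // the container assigns this property; ReportService never news up a MyClass itself
        public IMyClass MyClass { get; set; }

        public void Run()
        {
            // use MyClass here without caring which implementation was configured
        }
    }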

Logicalmind
A: 

You still have to worry. My team use Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.

Without Castle Windsor, you write code, and if you haven't sorted your dependencies out - bang, the compiler will complain. With Castle Windsor, you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is, your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor looks up a concrete class to service a request for an interface by using reflection. If the dependency can't be found, you get an error at runtime.
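Roughly what that looks like (interface and file name made up; I'm assuming the constructor overload that takes an XML config file):

    using Castle.Windsor;

    // this compiles whether or not the XML actually maps IMyService to anything
    var container = new WindsorContainer("windsor.xml");
    var service = container.Resolve<IMyService>();
    // if the component is missing, or the type name in the XML is wrong,
    // you only find out here, at runtime, when Resolve throws

    public interface IMyService { /* made-up service interface */ }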

I think Castle Windsor does check that the dependencies exist when it's initialized, so that it can throw an error pretty quickly. But it's still annoying, when using a strongly typed language, that this fuss can't be sorted out at compile time.

So... anyway. Dependencies still seriously matter. You'll almost certainly pay more attention to them using DI than before.

Scott Langham
Shouldn't you use extensive unit-testing if you use one of these frameworks though? Making sure that when you ask the DI framework for a class, you get back the one you expect?
Duncan
Yes... but when you're unit-testing you're feeding in different fake or mock objects that make up the environment of the class you're testing. At run-time, you need the real objects. So, you get a different set of dependencies between testing and real execution.
Scott Langham
Testing that the real dependencies are correct isn't (formally speaking) unit-testable... because a unit test does just that: it tests one unit, isolated from its real dependencies.
Scott Langham
This can still be automatically tested though, yes. But... I find it funny that if we don't use DI, the compiler will test this for us (as we always did before DI became popular); now that we use DI, we need to write extra tests to check something we never had to worry about before.
Scott Langham
I suppose nothing's perfect. Pretty good really, because if it was we'd have automated away the programmers.
Scott Langham
I guess it depends on the DI container. Guice module configuration is in Java and it checks the type safety of the bindings at compile time.
parkr
+1  A: 

I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.

If they follow the spirit and write tests, they will further refine their components. Both exercises force you to write better, more testable components, which fits very well with how IoC containers work.

mP
A: 

We wrote a custom DI framework; though it took some time to get it right, it was all worth the effort. We have divided the whole system into layers, and the dependency injection in each layer is bound by rules, e.g. in the Log layer, CUD and BO interfaces cannot be injected.
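A sketch of the kind of rule check such a framework might run at registration time (all names made up; the real rules are more involved):

    using System;

    public interface IBusinessObject { }            // stand-in for our BO interfaces
    public enum Layer { Log, Business, Data }

    public static class InjectionRules
    {
        // called by the container before it wires a dependency into a layer
        public static void Check(Layer target, Type dependency)
        {
            // e.g. the Log layer must not receive BO (or CUD) interfaces
            if (target == Layer.Log && typeof(IBusinessObject).IsAssignableFrom(dependency))
                throw new InvalidOperationException(
                    dependency.Name + " may not be injected into the Log layer.");
        }
    }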

We are still contemplating the rules; some of them change every week while the others remain the same.