views:

330

answers:

9

The advantages of DI, as far as I am aware, are:

  • Reduced Dependencies
  • More Reusable Code
  • More Testable Code
  • More Readable Code

Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a Linq to Sql dbml. I can't make my orders repository generic as it performs mapping between the Linq Order entity and my own Order POCO domain class.

Since the OrderRepository is by necessity dependent on a specific Linq to Sql DataContext, passing the DataContext as a constructor parameter can't really be said to make the code reusable or reduce dependencies in any meaningful way.

It also makes the code harder to read, as to instantiate the repository I now need to write

new OrdersRepository(new MyLinqDataContext())

which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code.

So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.

A: 

It's possible to write generic data access objects in Java:

package persistence;

import java.io.Serializable;
import java.util.List;

public interface GenericDao<T, K extends Serializable>
{
    T find(K id);
    List<T> find();
    List<T> find(T example);
    List<T> find(String queryName, String [] paramNames, Object [] bindValues);

    K save(T instance);
    void update(T instance);
    void delete(T instance);
}

I can't speak for LINQ, but .NET has generics so it should be possible.
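For illustration, a trimmed-down version of this interface with a minimal in-memory implementation might look like the following sketch. The `InMemoryDao` class and the `Long`-keyed store are my own assumptions, not part of the original interface; such an implementation doubles as a test stand-in.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// A trimmed-down GenericDao plus a hypothetical in-memory implementation,
// usable as a test double in place of a database-backed DAO.
interface GenericDao<T, K extends Serializable> {
    T find(K id);
    List<T> findAll();
    K save(T instance);
    void delete(K id);
}

class InMemoryDao<T> implements GenericDao<T, Long> {
    private final Map<Long, T> store = new LinkedHashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    public T find(Long id) { return store.get(id); }

    public List<T> findAll() { return new ArrayList<>(store.values()); }

    public Long save(T instance) {
        Long id = sequence.incrementAndGet(); // generate a surrogate key
        store.put(id, instance);
        return id;
    }

    public void delete(Long id) { store.remove(id); }
}
```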

duffymo
Yes, this pattern is also generally applicable in .NET, but in the specifics of my example the Orders object is only exposed as a property of the Linq DataContext. So the point is that the OrdersRepository already has a dependency on the specific DataContext, whether DI is used or not.
fearofawhackplanet
+1  A: 

The power of dependency injection comes when you use an Inversion of Control container such as StructureMap. When you do, you won't see a "new" anywhere -- the container takes control of object construction. That way, the consuming code remains unaware of how its dependencies are built.
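To make the "no `new` anywhere" point concrete, here is a toy hand-rolled container sketch, in Java for consistency with the earlier code in this thread. This is not StructureMap; all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy IoC container: consuming code asks for an interface and never
// calls "new" on concrete types itself.
class Container {
    private final Map<Class<?>, Supplier<?>> registry = new HashMap<>();

    <T> void register(Class<T> type, Supplier<? extends T> factory) {
        registry.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        Supplier<?> factory = registry.get(type);
        if (factory == null) {
            throw new IllegalStateException("No registration for " + type);
        }
        return (T) factory.get();
    }
}

// Hypothetical repository types for the example.
interface OrdersRepository { int orderCount(); }

class SqlOrdersRepository implements OrdersRepository {
    public int orderCount() { return 0; } // would hit the database in reality
}
```

Production code would register `SqlOrdersRepository` once at startup; consumers only ever call `container.resolve(OrdersRepository.class)`.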

roufamatic
+2  A: 

A small comment first: dependency injection = IoC + dependency inversion. What matters most for testing, and what you actually describe, is dependency inversion.

Generally speaking, I think that testing justifies dependency inversion. But it doesn't justify dependency injection, I wouldn't introduce a DI container just for testing.

However, dependency inversion is a principle that can be bent a bit if necessary (like all principles). In particular, you can use factories in some places to control the creation of objects.

If you have a DI container, this is what happens automatically; the DI container acts as a factory and wires the objects together.
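As a sketch of the factory approach, in Java rather than C# for consistency with the earlier code in this thread (all type names here are hypothetical):

```java
import java.util.function.Supplier;

// A factory hides the concrete DataContext from consumers,
// while tests can still swap in a fake via the supplier.
interface DataContext { String name(); }

class ProductionDataContext implements DataContext {
    public String name() { return "production"; }
}

class OrdersRepository {
    private final DataContext context;
    OrdersRepository(DataContext context) { this.context = context; }
    DataContext context() { return context; }
}

class OrdersRepositoryFactory {
    // Production default; a test can replace this supplier with a fake.
    static Supplier<DataContext> contextSupplier = ProductionDataContext::new;

    static OrdersRepository create() {
        return new OrdersRepository(contextSupplier.get());
    }
}
```

Consumers call `OrdersRepositoryFactory.create()` and never see the DataContext, which preserves the encapsulation the question is concerned about.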

ewernli
+11  A: 

Dependency Injection's primary advantage is testing. And you've hit on something that seemed odd to me when I first started adopting Test-Driven Development and DI: DI does break encapsulation. Unit tests exercise implementation-related decisions; as such, you end up exposing details that you wouldn't in a purely encapsulated scenario. Your example is a good one: if you weren't doing test-driven development, you would probably want to encapsulate the data context.

But where you say, "Since the OrderRepository by necessity is dependent on a specific Linq to Sql DataContext," I would disagree - we have the same setup and are only dependent on an interface. You have to break that dependency.

Taking your example a step further however, how will you test your repository (or clients of it) without exercising the database? This is one of the core tenets of unit testing - you have to be able to test functionality without interacting with external systems. And nowhere does this matter more than with the database. Dependency Injection is the pattern that makes it possible to break dependencies on sub-systems and layers. Without it, unit tests end up requiring extensive fixture setup, become hard to write, fragile and too damn slow. As a result - you just won't write them.

Taking your example a step further, you might have

In Unit Tests:

// From your example...

new OrdersRepository(new InMemoryDataContext());

// or...

IOrdersRepository repo = new InMemoryDataContext().OrdersRepository;

and In Production (using an IOC container):

// usually...

Container.Create<IDataContext>().OrdersRepository

// but can be...

Container.Create<IOrdersRepository>();

(If you haven't used an IoC container, they're the glue that makes DI work. Think of it as "make" (or Ant) for object graphs... the container builds the dependency graph for you and does all of the heavy lifting for construction.) In using an IoC container, you get back the dependency hiding that you mention in your OP. Dependencies are configured and handled by the container as a separate concern, and calling code can just ask for an instance of the interface.
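The in-memory test-double idea from the snippets above could be sketched like this in Java (the thread's examples are C#; `InMemoryDataContext` and the trivial string representation of an order are assumptions made for brevity):

```java
import java.util.ArrayList;
import java.util.List;

// The repository depends only on an interface, so a unit test can hand it
// an in-memory context instead of a real database-backed one.
interface DataContext {
    List<String> orders(); // orders represented as plain strings for brevity
}

class InMemoryDataContext implements DataContext {
    private final List<String> orders = new ArrayList<>();

    InMemoryDataContext add(String order) { // test-side seeding helper
        orders.add(order);
        return this;
    }

    public List<String> orders() { return orders; }
}

class OrdersRepository {
    private final DataContext context;
    OrdersRepository(DataContext context) { this.context = context; }
    int count() { return context.orders().size(); }
}
```

A unit test then runs entirely in memory: no fixture database, no connection strings, and it stays fast.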

There's a really excellent book that explores these issues in detail. Check out xUnit Test Patterns: Refactoring Test Code, by Gerard Meszaros. It's one of those books that takes your software development capabilities to the next level.

Rob
The primary advantage of DI is decoupling. Citation from http://en.wikipedia.org/wiki/Dependency_inversion_principle : "The goal of the dependency inversion principle is to decouple high-level components from low-level components such that reuse with different low-level component implementations becomes possible."
Yauheni Sivukha
Thanks for this informative answer. I am confused where you say that the OrdersRepository doesn't have to be dependent on the specific DataContext, though. Anything that retrieves the Order Linq entity is by definition dependent on the specific DataContext which exposes that entity. The code would fail if I instantiated the OrdersRepository with anything other than the specific DataContext with which it is designed to work. So in fact, I guess another point I could have made in the OP is that using DI seems to compromise the security/robustness of the code.
fearofawhackplanet
I should point out also that I simplified the example in the OP a little. A more accurate picture of what I have would be `OrdersRepository -> IRepository<Order> -> Linq DataContext`, i.e. `new OrdersRepository(new Repository<Order>(new MyDataContext()))` So I have a generic repository exposed through an interface which the OrdersRepository wraps. But the very specific mapping role of the OrdersRepository (from Linq entity to domain entity) is such that I can't see how a generic version exposed through an interface here would be either workable or useful.
fearofawhackplanet
@fearofawhackplanet on DataContext dependence (how): LINQ to SQL works via IQueryable<T> and returns POCO entity objects. POCOs are generated, but can be extended via partial classes (i.e., you don't need to map to a separate domain object). You break the dependence by keeping the POCOs in a collection and returning .AsQueryable(). I ripped some files out of our environment and left them at http://drop.io/TestableDataContext, which shows one approach. They don't compile in isolation, but can give you an idea of how to approach this if you buy into the next comment (the why).
Rob
@fearofawhackplanet on usefulness (why): You break the dependence so you can unit test against a DataContext without requiring a database. If that's not a goal, I would agree that it's not worth using DI in your scenario.
Rob
+2  A: 

The beauty of Dependency Injection is the ability to isolate your components.

One side-effect of isolation is easier unit-testing. Another is the ability to swap configurations for different environments. Yet another is the self-describing nature of each class.

However you take advantage of the isolation provided by DI, you have far more options than with tighter-coupled object models.

Bryan Watts
+1  A: 

I find that the relationship between dependency injection and testability is the other way around. I find DI extremely useful because I write unit-testable code, and I write unit-testable code because it's inherently better code: smaller, more loosely coupled classes.

An IoC container is another tool in the toolbox that helps you manage the complexity of an app - and it works quite well. I find it makes it easier to code to an interface by taking instantiation out of the picture.

Igor Zevaka
+1  A: 

The design of tests here always depends on SUT (System Under Test). Ask yourself a question - what do I want to test?

If your repository is just an accessor to the database, then it needs to be tested like an accessor - with database involvement. (Actually, such tests are not unit tests but integration tests.)

If your repository performs some mapping or business logic in addition to acting as an accessor to the database, then you need to decompose it so that your system complies with the SRP (Single Responsibility Principle). After decomposition you will have 2 entities:

  1. OrdersRepository
  2. OrderDataAccessor

Test them separately from each other, breaking dependencies with DI.
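A minimal Java sketch of that decomposition might look like this (names follow the list above; representing "raw" rows as strings is an assumption made for brevity):

```java
import java.util.ArrayList;
import java.util.List;

// The accessor owns database access; the repository only maps rows to
// domain objects. Each can then be tested in isolation.
interface OrderDataAccessor {
    List<String> fetchRawOrders(); // a real implementation would hit the DB
}

class Order {
    final String id;
    Order(String id) { this.id = id; }
}

class OrdersRepository {
    private final OrderDataAccessor accessor;

    OrdersRepository(OrderDataAccessor accessor) { this.accessor = accessor; }

    List<Order> findAll() {
        List<Order> result = new ArrayList<>();
        for (String raw : accessor.fetchRawOrders()) {
            result.add(new Order(raw)); // the mapping step under test
        }
        return result;
    }
}
```

In a unit test the accessor can be stubbed (here with a lambda), so the repository's mapping logic is exercised without touching a database.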

As for constructor ugliness... use a DI framework to construct your objects. For example, using Unity, your construction code:

var repository = new OrdersRepository(new MyLinqDataContext());

will look like:

var repository = container.Resolve<OrdersRepository>();
Yauheni Sivukha
+2  A: 

Dependency Injection is just a means to an end. It's a way to enable loose coupling.

Mark Seemann
+1  A: 

The jury is still out for me about the use of DI in the context of your question. You've asked if testing alone is justification for implementing DI, and I'm going to sound a little like a fence-sitter in answering this, even though my gut-response is to answer no.

If I answer yes, I am thinking about testing systems when you have nothing you can easily test directly. In the physical world, it's not unusual to include ports, access tunnels, lugs, etc, in order to provide a simple and direct means of testing the status of systems, machines, and so on. This seems reasonable in most cases. For example, an oil pipeline provides inspection hatches to allow equipment to be injected into the system for the purposes of testing and repair. These are purpose built, and provide no other function. The real question though is if this paradigm is suited to software development. Part of me would like to say yes, but the answer it seems would come at a cost, and leaves us in that lovely grey area of balancing benefits vs costs.

The "no" argument really comes down to the reasons and purposes for designing software systems. DI is a beautiful pattern for promoting the loose coupling of code, something we are taught in our OOP classes is a very important and powerful design concept for improving the maintainability of code. The problem is that, like all tools, it can be misused. I'm going to disagree with Rob's answer above in part, because DI's advantages are NOT primarily testing, but in promoting loosely coupled architecture. And I'd argue that resorting to designing systems based solely on the ability to test them suggests in such cases that either the architecture is flawed, or the test cases are inappropriately configured, and possibly even both.

A well-factored system architecture is in most cases inherently simple to test, and the introduction of mocking frameworks over the last decade makes the testing much easier still. Experience has taught me that any system I found hard to test had some aspect of it too tightly coupled in some way. Sometimes (more rarely) this has proven to be necessary, but in most cases it was not, and usually when a seemingly simple system seemed too hard to test, it was because the testing paradigm was flawed. I've seen DI used as a means to circumvent system design in order to allow a system to be tested, and the risks have certainly outweighed the intended rewards, with system architecture effectively corrupted. By that I mean back-doors into code resulting in security problems, code bloated with test-specific behaviour that is never used at runtime, and spaghettification of source code such that you needed a couple of Sherpas and a Ouija board just to figure out which way was up! All of this in shipped production code. The resulting costs in terms of maintenance, learning curve, etc. can be astronomical, and to small companies such losses can prove devastating in the longer term.

IMHO, I don't believe that DI should ever be used simply as a means to improve the testability of code. If DI is your only option for testing, then the design usually needs to be refactored. On the other hand, implementing DI by design, where it can be used as part of the run-time code, can provide clear advantages; but it should not be over-used simply because it seems cool and easy, as in such cases it can over-complicate the design of your code.

:-)

S.Robins