
Dependency injection (DI) is a well-known and fashionable pattern. Most engineers know its advantages, such as:

  • Making isolation in unit testing possible/easy
  • Explicitly defining dependencies of a class
  • Facilitating good design (single responsibility principle (SRP) for example)
  • Enabling switching implementations quickly (DbLogger instead of ConsoleLogger for example)

I reckon there's industry-wide consensus that DI is a good, useful pattern, and there isn't much criticism of it at the moment. The disadvantages mentioned in the community are usually minor. Some of them:

  • Increased number of classes
  • Creation of unnecessary interfaces

My colleague and I are currently discussing architecture design. He's quite conservative, but open-minded. He likes to question things, which I consider a good trait, because many people in IT just copy the newest trend, repeat its advantages, and in general don't think too much or analyse too deeply.

The things I'd like to ask are:

  • Should we use dependency injection when we have just one implementation?
  • Should we ban creating new objects except language/framework ones?
  • Is injecting a single implementation a bad idea (let's say we have just one implementation, so we don't want to create an "empty" interface) if we don't plan to unit test a particular class?

8 Answers

Answer (214 votes)

First, I would like to separate the design approach from the concept of frameworks. Dependency injection at its simplest and most fundamental level is simply:

A parent object provides all the dependencies required to the child object.

That's it. Note that nothing in that requires interfaces, frameworks, any style of injection, etc. To be fair, I first learned about this pattern 20 years ago. It is not new.

Since more than a couple of people have been confused by the terms parent and child, here is what they mean in the context of dependency injection:

  • The parent is the object that instantiates and configures the child object it uses
  • The child is the component that is designed to be passively instantiated. That is, it is designed to use whatever dependencies the parent provides, and does not instantiate its own dependencies.

Dependency injection is a pattern for object composition.
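
In code, that really is all there is to it (a minimal sketch; the class names are made up):

class Database {
    Database(String url) { /* open a connection here */ }
}

// The child: passively receives its dependency and never constructs it itself.
class ReportService {
    private final Database database;

    ReportService(Database database) {   // constructor injection
        this.database = database;
    }
}

// The parent: instantiates and configures the child. No interfaces, no framework.
class Application {
    public static void main(String[] args) {
        Database database = new Database("jdbc:example");
        ReportService service = new ReportService(database);
    }
}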

Why interfaces?

Interfaces are a contract. They exist to limit how tightly coupled two objects can be. Not every dependency needs an interface, but they help with writing modular code.

When you add in the concept of unit testing, you may have two conceptual implementations for any given interface: the real object you want to use in your application, and the mocked or stubbed object you use for testing code that depends on the object. That alone can be justification enough for the interface.
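
For example (names made up), a single interface can back both the real object and the test double:

interface Clock {
    long now();
}

// The real implementation used in the application.
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// The stubbed implementation used when testing code that depends on a Clock.
class FixedClock implements Clock {
    private final long time;
    FixedClock(long time) { this.time = time; }
    public long now() { return time; }
}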

Why frameworks?

Manually initializing and providing dependencies to child objects can be daunting when there are a large number of them. Frameworks provide the following benefits:

  • Autowiring dependencies to components
  • Configuring the components with settings of some sort
  • Automating the boilerplate code so you don't have to write it in multiple locations (the hand wiring sketched below).
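
For contrast, this is roughly the hand-written "composition root" that such a framework automates (a sketch; all class names are made up):

class Config { String dbUrl() { return "jdbc:example"; } }
class Database { Database(String url) { /* connect */ } }
class OrderRepository { OrderRepository(Database db) { /* ... */ } }
class OrderController { OrderController(OrderRepository repo) { /* ... */ } }

// The composition root: the one place where the object graph is wired by hand.
// A DI framework generates the equivalent of this at runtime via reflection,
// or at compile time via code generation.
class CompositionRoot {
    static OrderController create() {
        Config config = new Config();
        Database database = new Database(config.dbUrl());
        OrderRepository repository = new OrderRepository(database);
        return new OrderController(repository);
    }
}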

They also have the following disadvantages:

  • The parent object is a "container", and not anything in your code
  • It makes testing more complicated if you can't provide the dependencies directly in your test code
  • It can slow down initialization as it resolves all the dependencies using reflection and many other tricks
  • Runtime debugging can be more difficult, particularly if the container injects a proxy between the interface and the actual component that implements the interface (aspect oriented programming built in to Spring comes to mind). The container is a black box, and they aren't always built with any concept of facilitating the debugging process.

All that said, there are trade-offs. For small projects without a lot of moving parts, there's little reason to use a DI framework. However, for more complicated projects where certain components are already made for you, the framework can be justified.

What about [random article on the Internet]?

What about it? Many times people can get overzealous and add a bunch of restrictions and berate you if you aren't doing things the "one true way". There isn't one true way. See if you can extract anything useful from the article and ignore the stuff you don't agree with.

In short, think for yourself and try things out.

Working with "old heads"

Learn as much as you can. What you will find with a lot of developers who are still working into their 70s is that they have learned not to be dogmatic about a lot of things. They have methods that they have worked with for decades that produce correct results.

I've had the privilege of working with a few of these, and they can provide some brutally honest feedback that makes a lot of sense. And where they see value, they add those tools to their repertoire.

Answer (122 votes)

Dependency injection is, like most patterns, a solution to problems. So start by asking if you even have the problem in the first place. If not, then using the pattern most likely will make the code worse.

Consider first if you can reduce or eliminate dependencies. All other things being equal, we want each component in a system to have as few dependencies as possible. And if the dependencies are gone, the question of injecting or not becomes moot!

Consider a module which downloads some data from an external service, parses it, performs some complex analysis, and writes the result to a file.

Now, if the dependency on the external service is hardcoded, then it will be really difficult to unit test the internal processing of this module. So you might decide to inject the external service and the file system as interface dependencies, which allows you to inject mocks instead, which in turn makes unit testing the internal logic possible.

But a much better solution is simply to separate the analysis from the input/output. If the analysis is extracted to a module without side effects it will be much easier to test. Note that mocking is a code-smell - it is not always avoidable, but in general, it is better if you can test without relying on mocking. So by eliminating the dependencies, you avoid the problems which DI is supposed to alleviate. Note that such a design also adheres much better to SRP.
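
A sketch of that separation (names made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

class Downloader {
    // Stands in for the external service.
    static List<Double> fetch() { return List.of(1.0, 2.0, 3.0); }
}

// Pure analysis: no network, no file system, trivially unit-testable without mocks.
class Analyzer {
    static double analyze(List<Double> samples) {
        return samples.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }
}

// Thin I/O shell: fetch, delegate to the pure core, write the result.
class Pipeline {
    void run() throws IOException {
        List<Double> samples = Downloader.fetch();                           // side effect
        double result = Analyzer.analyze(samples);                           // pure
        Files.writeString(Path.of("result.txt"), Double.toString(result));   // side effect
    }
}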

I want to emphasize that DI does not necessarily facilitate SRP or other good design principles like separation of concerns, high cohesion/low coupling and so on. It might just as well have the opposite effect. Consider a class A which uses another class B internally. B is only used by A, so it is fully encapsulated and can be considered an implementation detail. If you change this to inject B into the constructor of A, then you have exposed this implementation detail: knowledge about this dependency, how to initialize B, the lifetime of B and so on now has to live somewhere else in the system, separate from A. The result is an overall worse architecture with leaking concerns.
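
In sketch form (hypothetical classes):

class B { /* implementation detail */ }

// Encapsulated: only A knows that B exists or how to create it.
class A {
    private final B b = new B();
}

// Injected: every creator of A must now know about B and its construction.
class AInjected {
    private final B b;
    AInjected(B b) { this.b = b; }
}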

On the other hand, there are some cases where DI really is useful, for example for global services with side effects, like a logger.

The problem is when patterns and architectures become goals in themselves rather than tools. Just asking "Should we use DI?" is kind of putting the cart before the horse. You should ask: "Do we have a problem?" and "What is the best solution for this problem?"

A part of your question boils down to: "Should we create superfluous interfaces to satisfy the demands of the pattern?" You probably already realize the answer to this - absolutely not! Anyone telling you otherwise is trying to sell you something - most likely expensive consulting hours. An interface only has value if it represents an abstraction. An interface which just mimics the surface of a single class is called a "header interface" and this is a known antipattern.
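
For illustration, a header interface looks something like this (made-up example):

class User { /* ... */ }

// Antipattern: the interface adds no abstraction; it merely mirrors one class.
interface IUserService {
    User findUser(String id);
    void saveUser(User user);
}

class UserService implements IUserService {
    public User findUser(String id) { return new User(); }
    public void saveUser(User user) { /* persist */ }
}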

JacquesB
Answer (66 votes)

In my experience, there are a number of downsides to dependency injection.

First, using DI does not simplify automated testing as much as advertised. Unit testing a class with a mock implementation of an interface lets you validate how that class interacts with the interface - that is, how the class under test uses the contract the interface provides. But this gives strong assurance in only one direction: it verifies that the input from the class under test into the interface is as expected, while providing rather poor assurance that the class under test responds as expected to output from the interface, since that output is almost universally mock output, which is itself subject to bugs, oversimplifications and so on. In short, it does NOT let you validate that the class will behave as expected with a real implementation of the interface.

Second, DI makes it much harder to navigate through code. When trying to navigate to the definition of classes used as input to functions, an interface can be anything from a minor annoyance (e.g. where there is a single implementation) to a major time sink (e.g. when an overly generic interface like IDisposable is used) when trying to find the actual implementation being used. This can turn a simple exercise like "I need to fix a null reference exception in the code that happens right after this logging statement is printed" into a day-long effort.

Third, the use of DI frameworks is a double-edged sword. It can greatly reduce the amount of boilerplate code needed for common operations. However, this comes at the expense of needing detailed knowledge of the particular DI framework to understand how those common operations are actually wired together. Understanding how dependencies are loaded into the framework, and adding a new dependency for it to inject, can require reading a fair amount of background material and following some basic tutorials on the framework. This can turn some simple tasks into rather time-consuming ones.

Eric
Answer (26 votes)

My biggest pet peeve about DI was already mentioned in a few answers in a passing way, but I'll expand on it a bit here. DI (as it is mostly done today, with containers etc.) really, REALLY hurts code readability. And code readability is arguably the reason behind most of today's programming innovations. As someone said - writing code is easy. Reading code is hard. But it's also extremely important, unless you're writing some kind of tiny write-once throwaway utility.

The problem with DI in this regard is that it's opaque. The container is a black box. Objects simply appear from somewhere and you have no idea - who constructed them and when? What was passed to the constructor? Who am I sharing this instance with? Who knows...

And when you work primarily with interfaces, all the "go to definition" features of your IDE go up in smoke. It's awfully difficult to trace out the flow of the program without running it and just stepping through to see just WHICH implementation of the interface was used in THIS particular spot. And occasionally there's some technical hurdle which prevents you from stepping through. And even if you can, if it involves going through the twisted bowels of the DI container, the whole affair quickly becomes an exercise in frustration.

To work efficiently with a piece of code that uses DI, you must be familiar with it and already know what goes where.

Added 5 years later: The state-of-the-art IDEs of today like Visual Studio or the JetBrains family have actually gotten a bit better with the "Go to definition" part. If there is just one implementation of an interface, it will jump to that. Still, the other issues persist.

Added 2 more years later: Seems like this answer is a work in progress.

On the topic of Interfaces - this doesn't bother me so much anymore, because, as stated above, the IDEs have gotten pretty good at it and the pattern is common enough that most people should be familiar with it now. And it does have a benefit - it allows you to easily mock out large chunks of your code in your unit tests.

That said:

  1. Unit tests are really the only place that I'm aware of which benefits from mass-interface-ization;
  2. Actually, interfaces and DI are orthogonal things. You can use DI without interfaces, and you can refer to everything through interfaces without using DI (say, by using a ServiceLocator instead).

But there is one more thing that I can add about DI which I've noticed more of late - it often forces the construction of way more services (objects) than is needed, because it needs to satisfy every possible dependency of every possible dependency, no matter if it is relevant for the task at hand or not.

As an extreme example, I've seen Controller classes (in a MVC-type web application) that have over 20 DI-supplied parameters in their constructor. And each Action uses maybe one or two of them, but they ALL need to be instantiated in order to instantiate the Controller class.

And the same problem goes down the tree as well - those services can each depend on multiple other services (say, DB Repository classes), which then also ALL need to be instantiated. Etc.
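
A compressed sketch of the problem (hypothetical services; imagine many more of them):

class OrderService { /* ... */ }
class InvoiceService { /* ... */ }
class ShippingService { /* ... */ }

class OrdersController {
    private final OrderService orders;
    private final InvoiceService invoices;
    private final ShippingService shipping;
    // ...and 17 more fields like these in the worst cases...

    OrdersController(OrderService orders, InvoiceService invoices, ShippingService shipping) {
        this.orders = orders;
        this.invoices = invoices;
        this.shipping = shipping;
    }

    // Each action uses one dependency, yet serving any request means
    // constructing all of them (and everything they depend on in turn).
    void listOrders() { /* uses only orders */ }
    void printInvoice() { /* uses only invoices */ }
}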

How much of this there is in any particular project will, of course, vary. And it is true that if a class grows too large, you should split it up, which would then also help with this issue. But it seems like there is always at least SOME level of unavoidable useless class instantiation.

Is it a dealbreaker? Is the overhead noticeable? I don't know. It can be managed, for sure. But it feels wasteful and unnecessary.

Vilx-
Answer (25 votes)

I followed Mark Seemann's advice from "Dependency Injection in .NET". To summarize:

DI should be used when you have a 'volatile dependency', e.g. there's a reasonable chance it might change.

So if you think you might have more than one implementation in the future or the implementation might change, use DI. Otherwise new is fine.
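
As a sketch of that rule of thumb (names made up):

// Volatile: the payment provider could plausibly be swapped, so inject it.
interface PaymentGateway { void charge(long cents); }

class OrderProcessor {
    private final PaymentGateway gateway;
    private final StringBuilder receipt;

    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
        this.receipt = new StringBuilder();   // stable dependency: new is fine
    }
}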

Rob
Answer (7 votes)

I have to say that in my opinion, the entire notion of Dependency Injection is overrated.

DI is the modern-day equivalent of globals. The things you are injecting are global singletons and pure-code objects; otherwise, you couldn't inject them. Most uses of DI are forced on you in order to use a given library (JPA, Spring Data, etc). For the most part, DI provides the perfect environment for nurturing and cultivating spaghetti.

In all honesty, the easiest way to test a class is to ensure that all dependencies are created in a method that can be overridden. Then create a Test class derived from the actual class and override that method.

Then you instantiate the Test class and test all its methods. This won't be clear to some of you - the methods you are testing are the ones belonging to the class under test. And all of these method tests occur in a single class file - the unit testing class associated with the class under test. There is zero overhead here - this is how unit testing works.

In code, this concept looks like this...

class ClassUnderTest {

    // Not final: the fields are assigned in initializeDependencies(), not the constructor.
    protected ADependency aDependency;
    protected AnotherDependency anotherDependency;

    // Call from a factory or use an initializer
    public void initializeDependencies() {
        aDependency = new ADependency();
        anotherDependency = new AnotherDependency();
    }
}

// In the test sources (assumes: import static org.mockito.Mockito.mock;)
class TestClassUnderTest extends ClassUnderTest {

    @Override
    public void initializeDependencies() {
        aDependency = mock(ADependency.class);
        anotherDependency = mock(AnotherDependency.class);
    }

    // Unit tests go here...
    // Unit tests call base class methods
}

The result is exactly equivalent to using DI - that is, the ClassUnderTest is configured for testing.

The only differences are that this code is utterly concise, completely encapsulated, easier to code, easier to understand, faster, uses less memory, does not require an alternate configuration, does not require any frameworks, will never be the cause of a 4 page (WTF!) stack trace that includes exactly ZERO (0) classes which you wrote, and is completely obvious to anyone with even the slightest OO knowledge, from beginner to Guru (you would think, but would be mistaken).

That being said, of course we can't use it - it's too obvious and not trendy enough.

At the end of the day, though, my biggest concern with DI is that of the projects I have seen fail miserably, all of them have been massive code bases where DI was the glue holding everything together. DI is not an architecture - it is really only relevant in a handful of situations, most of which are forced on you in order to use another library (JPA, Spring Data, etc). For the most part, in a well designed code base, most uses of DI would occur at a level below where your daily development activities take place.

Answer (6 votes)

Enabling switching implementations quickly (DbLogger instead of ConsoleLogger for example)

While DI in general is surely a good thing, I'd suggest not using it blindly for everything. For example, I never inject loggers. One of the advantages of DI is making the dependencies explicit and clear. There's no point in listing ILogger as a dependency of nearly every class - it's just clutter. It's the responsibility of the logger to provide the flexibility you need. All my loggers are static final members; I may consider injecting a logger when I need a non-static one.
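
For illustration, that style looks like this with SLF4J (a common Java logging facade; the answer names no specific library, so this is an assumption):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class OrderService {
    // The logger is a static final member, not a constructor parameter,
    // so it never clutters the class's declared dependencies.
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    void process() {
        log.info("processing order");
    }
}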

Increased number of classes

This is a disadvantage of the given DI framework or mocking framework, not of DI itself. In most places my classes depend on concrete classes, which means that there's zero boilerplate needed. Guice (a Java DI framework) binds a class to itself by default, and I only need to override the binding in tests (or wire them manually instead).
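
A minimal Guice sketch of that default behaviour (class names made up):

import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

class TaxService {
    @Inject TaxService() {}
    double rateFor(String region) { return 0.2; }
}

class PriceCalculator {
    private final TaxService taxes;   // concrete dependency, no interface

    @Inject
    PriceCalculator(TaxService taxes) { this.taxes = taxes; }
}

class Main {
    public static void main(String[] args) {
        // No module needed: Guice binds concrete classes to themselves by default.
        Injector injector = Guice.createInjector();
        PriceCalculator calc = injector.getInstance(PriceCalculator.class);

        // In a test, skip the container and wire by hand instead:
        PriceCalculator testCalc = new PriceCalculator(new TaxService());
    }
}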

Creation of unnecessary interfaces

I only create interfaces when needed (which is rather rare). This means that sometimes I have to replace all occurrences of a class with an interface, but the IDE can do this for me.

Should we use dependency injection when we have just one implementation?

Yes, but avoid any added boilerplate.

Should we ban creating new objects except language/framework ones?

No. There'll be many value (immutable) and data (mutable) classes, where the instances just get created and passed around and where there's no point in injecting them - as they never get stored in another object (or only in other such objects).

For them, you may need to inject a factory instead, but most of the time it makes no sense (imagine e.g., @Value class NamedUrl {private final String name; private final URL url;}; you really don't need a factory here and there's nothing to inject).

Is injecting a single implementation bad idea (let's say we have just one implementation so we don't want to create "empty" interface) if we don't plan to unit test a particular class?

IMHO it's fine as long as it causes no code bloat. Do inject the dependency but do not create the interface (and no crazy config XML!) as you can do it later without any hassle.

Actually, in my current project there are four classes (out of hundreds) which I decided to exclude from DI, as they're simple classes used in too many places, including data objects.


Another disadvantage of most DI frameworks is the runtime overhead. This can be moved to compile time (for Java, there's Dagger, no idea about other languages).

Yet another disadvantage is the magic happening everywhere, which can be tuned down (e.g., I disabled proxy creation when using Guice).

maaartinus
Answer (-5 votes)

Really your question boils down to "Is unit testing bad?"

99% of your alternate classes to inject will be mocks that enable unit testing.

If you do unit testing without DI, you have the problem of how to get the classes under test to use mocked data or a mocked service - or a mocked 'part of the logic', let's say, as you may not be separating it out into services.

There are alternate ways of doing this, but DI is a good and flexible one. Once you have it in place for testing, you are virtually forced to use it everywhere, as you need some other piece of code to instantiate the concrete types - even if it's the so-called 'poor man's DI'.
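
For reference, "poor man's DI" usually looks something like this (a sketch; names made up):

interface Clock { long now(); }

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

class ReportGenerator {
    private final Clock clock;

    // "Poor man's DI": the no-arg constructor news up the production dependency...
    ReportGenerator() {
        this(new SystemClock());
    }

    // ...while tests call this constructor with a stub or mock.
    ReportGenerator(Clock clock) {
        this.clock = clock;
    }
}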

It's hard to imagine a disadvantage so bad that the advantage of unit testing is overwhelmed.

Ewan