7

In my C# solution, I have a Tests project containing unit tests (xUnit) that can run on every build. So far so good.

I also want to add integration tests, which won't run on every build but can run on request. Where do you put those, and how do you separate unit tests from integration tests? If it's in the same solution with the same [Fact] attributes, it will run in the exact same way.

What's the preferred approach? A second separate test project for integration tests?

6 Answers

6

The separation is not unit versus integration tests; it is fast versus slow tests.

How you organize these tests to make them easier to run is really up to you. Separate folders are a good start, but attributes such as traits or custom [Fact]-derived attributes can work just as well.
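For example, in xUnit a slow test can be tagged with a trait, and a runner can then include or exclude it by that tag. A minimal sketch (the class and test names below are invented for illustration):

    using Xunit;

    public class ExampleTests
    {
        // Fast test: runs on every build.
        [Fact]
        public void Adds_Two_Numbers()
        {
            Assert.Equal(4, 2 + 2);
        }

        // Slow test: tagged with a trait so a runner can filter it out,
        // e.g. `dotnet test --filter "Category!=Slow"` skips it.
        [Fact]
        [Trait("Category", "Slow")]
        public void Talks_To_A_Real_Database()
        {
            // ...exercise a live database here...
        }
    }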


I think there is a fundamental misconception here about what constitutes an integration test versus a unit test. The beginning of Flater's answer gives you the differences between the two (and yes, sadly, I'm going to quote an answer already on this question).

Flater said:

The difference between unit tests and integration tests is that they test different things. Very simply put:

  • Unit tests test if one thing does what it's supposed to do. ("Can Tommy throw a ball?" or "Can Timmy catch a ball?")
  • Integration tests test if two (or more) things can work together. ("Can Tommy throw a ball to Timmy?")

And some supporting literature from Martin Fowler:

Integration tests determine if independently developed units of software work correctly when they are connected to each other. The term has become blurred even by the diffuse standards of the software industry, so I've been wary of using it in my writing. In particular, many people assume integration tests are necessarily broad in scope, while they can be more effectively done with a narrower scope.

(emphasis, mine). Later on he elaborates on integration tests:

The point of integration testing, as the name suggests, is to test whether many separately developed modules work together as expected.

(emphasis, his)

With regard to the "narrower scope" of integration testing:

The problem is that we have (at least) two different notions of what constitutes an integration test.

narrow integration tests

  • exercise only that portion of the code in my service that talks to a separate service
  • use test doubles of those services, either in process or remote
  • thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)

broad integration tests

  • require live versions of all services, requiring substantial test environment and network access
  • exercise code paths through all services, not just code responsible for interactions

(emphasis, mine)

Now we start getting to the root of the problem: an integration test can execute quickly or slowly.

If the integration tests execute quickly, then always run them whenever you run unit tests.

If the integration tests execute slowly because they need to interact with outside resources such as the file system, databases, or web services, then they should be run during a continuous integration build and by developers on command. For instance, right before a code review, run all of the tests (unit, integration, or otherwise) that apply to the code you have changed.

This is the best balance between developer time and finding defects early on in the development life cycle.
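As a concrete sketch of that schedule, assuming the slow tests are tagged with an xUnit trait named "Category" (the trait name is an assumption, not a requirement):

    # On every local build: run only the fast tests.
    dotnet test --filter "Category!=Slow"

    # On the CI server, or on demand (e.g. before a code review): run everything.
    dotnet test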

4

The difference between unit tests and integration tests is that they test different things. Very simply put:

  • Unit tests test if one thing does what it's supposed to do. ("Can Tommy throw a ball?" or "Can Timmy catch a ball?")
  • Integration tests test if two (or more) things can work together. ("Can Tommy throw a ball to Timmy?")

The example integration test I gave may seem so simple that it's not worth writing after the example unit tests have been done, but keep in mind that I've oversimplified this for the sake of explanation.
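To make that concrete, here is roughly what the ball example could look like as xUnit tests. The Player and Ball types below are invented purely for illustration; they are not from the original answer:

    using Xunit;

    public class Ball
    {
        public bool InFlight { get; set; }
    }

    public class Player
    {
        public Ball Throw(Ball ball)
        {
            ball.InFlight = true;
            return ball;
        }

        public bool Catch(Ball ball)
        {
            bool caught = ball.InFlight;
            ball.InFlight = false;
            return caught;
        }
    }

    public class BallGameTests
    {
        // Unit test: one thing in isolation ("Can Tommy throw a ball?").
        [Fact]
        public void Tommy_Can_Throw_A_Ball()
        {
            var tommy = new Player();
            Assert.True(tommy.Throw(new Ball()).InFlight);
        }

        // Unit test: the other thing in isolation ("Can Timmy catch a ball?").
        [Fact]
        public void Timmy_Can_Catch_A_Ball()
        {
            var timmy = new Player();
            Assert.True(timmy.Catch(new Ball { InFlight = true }));
        }

        // Integration test: both things working together
        // ("Can Tommy throw a ball to Timmy?").
        [Fact]
        public void Tommy_Can_Throw_A_Ball_To_Timmy()
        {
            var tommy = new Player();
            var timmy = new Player();
            Assert.True(timmy.Catch(tommy.Throw(new Ball())));
        }
    }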

integration tests, which won't run on every build but can run on request

That isn't really how you're supposed to approach integration tests. They are just as essential as unit tests and should be run on the same schedule.

You should think of unit and integration tests as two pieces of "the test package", and it's this package that you need to run when testing. It makes no sense to only test half of your application's purpose and consider that a meaningfully conclusive test.

Without adding integration tests to your general testing schedule, you're simply ignoring any test failures that would be popping up. Tests exist specifically because you want to be alerted of their failures, so why would you intentionally hide those alerts? It's the equivalent of turning on the radio and then sticking your fingers in your ears.

and how do you separate unit tests from integration tests?

While they are commonly separated into different projects (usually of the same solution), that's not a technical requirement but rather a way of structuring the codebase.
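A typical layout when they are split (solution and project names invented for illustration) might be:

    MySolution.sln
    ├── src/MyApp/MyApp.csproj
    ├── tests/MyApp.UnitTests/MyApp.UnitTests.csproj
    └── tests/MyApp.IntegrationTests/MyApp.IntegrationTests.csproj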

If it's in the same solution with the same [Fact] attributes, it will run in the exact same way.

As I mentioned, running them in the exact same way is what you're supposed to be doing.

I assume you mean "in the same project" rather than "in the same solution". As noted above, there is no technical requirement forcing you to separate unit and integration tests; putting them in the same project can make sense for small codebases with only a handful of tests.

Flater
  • 58,824
2

There is no "one size fits all" approach.

Some shops have them in the same project as the unit tests, others prefer to have them in a separate project by themselves.

However you do it, I'd recommend flagging them as such so that the build server (assuming you have one) can be set to run a subset of tests as the build manager and/or developers see fit.

I'm not au fait with xUnit to be honest, but it appears that this can be done by trait.
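It can indeed: tests can be tagged directly with something like [Trait("Category", "Integration")], or, to avoid misspelled tag strings, xUnit 2.x lets you define a custom trait attribute. A sketch, assuming a test assembly named MyTests:

    using System;
    using System.Collections.Generic;
    using Xunit.Abstractions;
    using Xunit.Sdk;

    namespace MyTests
    {
        // Marks a test (or test class) as an integration test. xUnit maps
        // this attribute to the trait "Category=Integration" through the
        // discoverer below, so a build server can select or skip these via
        // e.g. `dotnet test --filter "Category=Integration"`.
        [TraitDiscoverer("MyTests.IntegrationTraitDiscoverer", "MyTests")]
        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
        public sealed class IntegrationTestAttribute : Attribute, ITraitAttribute { }

        public sealed class IntegrationTraitDiscoverer : ITraitDiscoverer
        {
            public IEnumerable<KeyValuePair<string, string>> GetTraits(IAttributeInfo traitAttribute)
            {
                yield return new KeyValuePair<string, string>("Category", "Integration");
            }
        }
    }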

Once these are in place, developers can then pick and choose which categories of test they wish to run locally, which can speed up development.

Robbie Dee
  • 9,823
1

It probably depends on the testing framework (you are using one, right?).

For JUnit 4/5, we have separate test suites.

We run one set locally and another before group-level pull requests (unit tests for all jars). The 126k integration tests, which can take several hours to run (end-to-end and integration tests using Selenium, WebLogic, Oracle RDBMS, Oracle BI Publisher, Docker, and a bunch of other third-party COTS products), are run before release pull requests, and those run every 6 hours.

When needed, of course, we can run those tests locally too, but it is painful due to the duration and the special setup needed for the local Docker images, so we generally just wait and see how the group build servers handle it.

Kristian H
  • 1,281
0

There is probably no "correct" answer, but I have a few simple rules of thumb, which work well for me:

  • Tests should check whether business (use-case) requirements are met. That means there is no "simple" or "complicated" component to test; there is just a requirement, which is the only reason to add a new test.

  • Only APIs are tested, never the implementation.

  • Mocks should be kept to a minimum (used only for very slow or expensive resources).

  • Unit tests can be run at the code level, without the component needing to be deployed.

  • Integration tests validate that a component is correctly integrated into the system, which means the component must be deployed.

This difference, deployed or not, determines the place in your deployment process (pipeline stage).

ttulka
  • 373
0

I'm reading the question a bit differently. I guess the problem is in the definition of "differentiate". The answers above seem to interpret it as "describe the difference", whereas the person asking seems to be asking how to tell the software to run them differently (e.g. "I also want to add integration tests, which won't run on every build but can run on request"). This is really a question about how to tell the CI environment to run unit tests only, integration tests only, or both. As such, there are much better answers available: