
I am writing tests for a project that consists of multiple submodules. Each test case I have written runs independently of the others, and I clear all data between tests.

Even though the tests run independently, I am considering enforcing an execution order, as some cases require more than one submodule. For example, one submodule generates data and another runs queries on that data. If the data-generating submodule contains an error, the test for the query submodule will also fail, even though the query submodule itself works fine.

I cannot work with dummy data, as the main functionality I am testing is the connection to a black-box remote server, which only gets its data from the first submodule.

In this case, is it OK to enforce an execution order for the tests, or is it bad practice? I feel like there is a smell in this setup, but I cannot find a better way around it.

Edit: this question differs from How to structure tests where one test is another test's setup?, as the "previous" test here is not a setup step, but tests the code which performs the setup.

6 Answers


I cannot work with dummy data, as the main functionality I am testing is the connection to a black-box remote server, which only gets its data from the first submodule.

This is the key part for me. You can talk about "unit tests" and about them "running independently of each other", but they all sound reliant on this remote server and on the "first submodule". Everything sounds tightly coupled and dependent on external state, so what you are in fact writing are integration tests. Having those tests run in a specific order is quite normal, as they are highly dependent on external factors. An ordered test run, with the option of quitting the run early if things go wrong, is perfectly acceptable for integration tests.
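
A minimal sketch of such an ordered run with an early quit, using NUnit's Order attribute (the fixture, test names, and bail-out flag are invented for illustration; any framework with similar features would do):

using NUnit.Framework;

[TestFixture]
public class PipelineIntegrationTests {

    // Set as soon as an earlier step fails, so later steps are reported as
    // skipped/inconclusive instead of piling up extra failures.
    private static bool _earlierStepFailed;

    [Test, Order(1)]
    public void DataSubmodule_GeneratesData() {
        try {
            // ... call the data-generating submodule and assert on its output
        }
        catch {
            _earlierStepFailed = true;
            throw;
        }
    }

    [Test, Order(2)]
    public void QuerySubmodule_ReturnsExpectedResults() {
        // Quit out early: this test can't say anything useful if the
        // data-generation step already failed.
        Assume.That(!_earlierStepFailed, "Skipped because the data-generation step failed.");

        // ... run the queries against the remote server and assert on the results
    }
}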

But it would also be worth taking a fresh look at the structure of your app. Being able to mock out the first submodule and the external server would potentially allow you to write true unit tests for all the other submodules.

David Arno

Yes, it's a bad practice.

Generally, a unit test is intended to test a single unit of code (e.g. a single function, starting from a known state).

When you want to test a chain of events that might happen in the wild, you want a different style of test, such as an integration test. This is even more true if you're depending on a third-party service.

To unit test things like this, you need to figure out a way to inject the dummy data, for example by implementing a data service interface that mirrors the web request but returns known data from a local dummy data file.
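
A minimal sketch of that idea (the IDataService and QueryData names, and the JSON file format, are assumptions for illustration, not something from the question):

using System.IO;
using System.Text.Json;

// Shape of the data the real web request returns (invented for this sketch).
public class QueryData {
    public string[] Rows { get; set; }
}

// Interface that mirrors the web request to the remote server.
public interface IDataService {
    QueryData FetchData();
}

// Test double: returns known data from a local dummy data file instead of
// calling the black-box server. The production implementation of IDataService
// would perform the real HTTP request.
public class FileBackedDataService : IDataService {
    private readonly string _path;

    public FileBackedDataService(string path) => _path = path;

    public QueryData FetchData() =>
        JsonSerializer.Deserialize<QueryData>(File.ReadAllText(_path));
}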

Paul

The enforced execution order you propose only makes sense if you also abort the test run after the first failure.

Aborting the test run on the first failure means that each test run can uncover only a single problem and it can't find new problems until all preceding problems have been fixed. If the first test to run finds a problem that takes a month to fix, then during that month effectively no tests will be executed.

If you don't abort the test run on the first failure, then the enforced execution order doesn't buy you anything, because each failed test needs to be investigated anyway, even if only to confirm that the test on the query submodule is failing because of the failure that was also identified in the data-generating submodule.

The best advice I can give is to write the tests in such a way that it is easy to identify when a failure in a dependency is causing the test to fail.
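
One way to make that easy to identify (a sketch only, with invented stand-ins for the two submodules) is a guard assertion whose failure message names the dependency:

using NUnit.Framework;

[TestFixture]
public class QuerySubmoduleTests {

    // Invented stand-in for the real data-generating submodule.
    private static string[] GenerateData() => new[] { "row1", "row2" };

    // Invented stand-in for the real query submodule.
    private static int CountRows(string[] data) => data.Length;

    [Test]
    public void CountRows_ReturnsNumberOfGeneratedRows() {
        var data = GenerateData();

        // Guard assertion: if the dependency produced nothing, fail with a
        // message that points at the data submodule rather than the query code.
        Assert.That(data, Is.Not.Null.And.Not.Empty,
            "Data-generating submodule produced no data; check its own tests first.");

        Assert.That(CountRows(data), Is.EqualTo(2));
    }
}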

Robbie Dee

The smell you're referring to is the application of the wrong set of constraints and rules to your tests.

Unit Tests often get confused with "automated testing", or "automated testing by a programmer".

Unit Tests must be small, independent, and fast.

Some people incorrectly read this as "automated tests written by a programmer must be small, independent, and fast". But it simply means that if your tests are not small, independent, and fast, they are not Unit Tests, and therefore some of the rules for Unit Tests should not, cannot, or must not apply to your tests. A trivial example: you should run your Unit Tests after every build, which you must not do for automated tests that are not fast.

While your tests not being Unit Tests means you can skip one rule and are allowed to have some interdependence between tests, you have also found out that there are other rules which you may have missed and will need to reintroduce, but that is something for another question.

Peter

As noted above, what you are running seems to be an integration test. However, you state that:

For example, one submodule generates data and another runs queries on that data. If the data-generating submodule contains an error, the test for the query submodule will also fail, even though the query submodule itself works fine.

And this may be a good place to start refactoring. The module that runs queries on the data should not depend on a concrete implementation of the first (data-generating) module. Instead, it would be better to inject an interface containing the methods needed to get at that data; this can then be mocked out for testing the queries.

e.g.

If you have:

class Queries {

    public int GetTheNumber() {
        // Hard-wired dependency on the concrete data-generating submodule:
        var dataModule = new Submodule1();
        var data = dataModule.GetData();
        return ... run some query on data
    }
}

Instead prefer:

interface IDataModule {
    Data GetData();
}


class Queries {

    private readonly IDataModule _dataModule;

    // The data module is injected, so tests can pass in a fake implementation.
    public Queries(IDataModule dataModule) {
        _dataModule = dataModule;
    }

    public int GetTheNumber() {
        var data = _dataModule.GetData();
        return ... run some query on data
    }
}

This removes the dependency of Queries on your concrete data source and allows you to set up easily repeatable unit tests for particular scenarios.
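
For instance, a unit test could now supply a hand-rolled fake of IDataModule. This sketch builds on the snippet above; the Data shape, the expected value, and the use of NUnit are assumptions:

using NUnit.Framework;

// Minimal stand-in shape so the sketch is self-contained.
public class Data {
    public int[] Values { get; set; }
}

public class FakeDataModule : IDataModule {
    public Data GetData() => new Data { Values = new[] { 1, 2, 3 } };
}

[TestFixture]
public class QueriesTests {

    [Test]
    public void GetTheNumber_UsesDataFromTheInjectedModule() {
        var queries = new Queries(new FakeDataModule());

        // Assuming GetTheNumber() counts the values it receives.
        Assert.That(queries.GetTheNumber(), Is.EqualTo(3));
    }
}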

Paddy

The other answers mention that ordering tests is bad (which is true most of the time), but there is one good reason for enforcing an order on test execution: making sure your slow tests (i.e., integration tests) run after your faster tests (tests that don't rely on outside resources). This ensures that you get results from more tests sooner, which can speed up the feedback loop.
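
One common way to get that ordering without coupling individual tests to each other is to tag the slow tests and run them in a second pass, for example with NUnit categories (the category name and the dotnet test filter syntax depend on your test adapter, so treat this as a sketch):

using NUnit.Framework;

[TestFixture]
public class FastUnitTests {
    [Test]
    public void RunsOnEveryBuild() {
        Assert.That(1 + 1, Is.EqualTo(2));
    }
}

[TestFixture, Category("Integration")]
public class SlowIntegrationTests {
    [Test]
    public void TalksToTheRemoteServer() {
        // ... exercise the real remote server here
        Assert.Pass();
    }
}

// Run the fast tests first, then the slow ones afterwards, e.g.:
//   dotnet test --filter "TestCategory!=Integration"
//   dotnet test --filter "TestCategory=Integration"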