
I'm looking for some advice on testing strategies for service-to-service communication.

I have one service (service A) that makes a call to another service (service B), which is a REST API. Both services are owned by me.

I have some unit tests around the service calls, and I simply mock the HTTP library so no requests are actually sent to the service. That works well for unit testing, but I was wondering if it is worthwhile to add some integration tests that actually test the service calls and responses.
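To make that concrete, here is roughly what one of those unit tests looks like. This is a trimmed-down sketch: the client class, base URL and endpoint are simplified stand-ins for my real code, and I'm using Python's requests and unittest.mock.

    from unittest import mock
    import unittest

    import requests


    class ServiceBClient:
        """Hypothetical client in service A that calls service B over HTTP."""

        def __init__(self, base_url):
            self.base_url = base_url

        def get_user(self, user_id):
            response = requests.get(f"{self.base_url}/users/{user_id}")
            response.raise_for_status()
            return response.json()


    class ServiceBClientTest(unittest.TestCase):
        @mock.patch("requests.get")
        def test_get_user_parses_response(self, mock_get):
            # No real HTTP request is made; the library call is replaced by a mock.
            mock_get.return_value.status_code = 200
            mock_get.return_value.json.return_value = {"id": 1, "name": "alice"}

            client = ServiceBClient("http://service-b.local")
            user = client.get_user(1)

            self.assertEqual(user["name"], "alice")
            mock_get.assert_called_once_with("http://service-b.local/users/1")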

The problem I see is that service B updates a database, so any integration tests in service A would have to reset any changes they make by calling the DB directly. To me this doesn't seem ideal, as service A now knows more about the implementation of service B than it should.

Are these tests valuable? When I've seen these kinds of tests before, they are often brittle and rely on development environments being in a good state. If this were a third-party API, for example, I wouldn't have tests which call it directly.

I can think of two options:

  1. Write the integration tests in service A and have these tests call service B's database to reset/insert data as needed.

  2. Stick with mocks and don't add integration tests to service A. Instead, add some functional tests to service B which test the various REST endpoints.

Any advice or thoughts?


3 Answers


The problem I see is service B updates a database, so any integration tests in service A will have to reset any changes they make by calling the DB directly. [...] Are these tests valuable?

I don't think so. Nondeterministic tests are risky in many ways. For instance:

  • They can fail at any time due to circumstances outside your control.
  • Random failures are hard to track down and reproduce.
  • They slow down development and the application lifecycle, for example by causing false negatives that make us spend time analyzing causes and effects.

All in all, they make for flaky tests, and flaky tests are the seed of evil. If you wonder why, Sam Newman summarizes it quite well:

A test suite with flaky tests can become a victim of what Diana Vaughan calls normalization of deviance - the idea that over time we can become so accustomed to things being wrong that we start to accept them as being normal and not a problem.

Building Microservices, by Sam Newman

The normalization of deviance undermines confidence in testing. When tests are "expensive" to run, developers become reluctant to run them frequently, or to add more.

Any advice or thoughts?

For the sake of determinism [1], when I write tests, the code under test should be disconnected from dependencies whose state and context are unknown and/or may change unexpectedly during execution.

So far, a method that has worked well for me consists of narrowing down the scope of the tests and isolating the code under test. I choose what I want to test (what's in and what's out) and then remove any source of indeterminism. For example, given a dependency graph, I remove those dependencies whose state and context can change unexpectedly or that I can't recreate at will.

If I can recreate both and it doesn't slow down the tests, I don't isolate the code. One way to do this is with sandboxed executions of containers running the database, or with a test double [2] backed by ephemeral storage.
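As a rough illustration of the container approach, here is a sketch assuming Python with the testcontainers and SQLAlchemy packages and Docker available on the test machine; swap in whatever your stack actually uses. The point is that the database (and everything in it) is created for the test run and thrown away afterwards, so there is nothing to reset by hand.

    import pytest
    import sqlalchemy
    from testcontainers.postgres import PostgresContainer


    @pytest.fixture(scope="session")
    def db_engine():
        # The container and all of its data are discarded when the tests finish,
        # so no manual cleanup of another service's database is ever needed.
        with PostgresContainer("postgres:16") as postgres:
            engine = sqlalchemy.create_engine(postgres.get_connection_url())
            yield engine
            engine.dispose()


    def test_database_starts_empty(db_engine):
        # Each run begins from a known, deterministic state: a brand-new database.
        with db_engine.connect() as conn:
            tables = conn.execute(sqlalchemy.text(
                "SELECT count(*) FROM information_schema.tables "
                "WHERE table_schema = 'public'")).scalar()
            assert tables == 0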

Write the integration tests in service A and have these tests call service B's database to reset/insert data as needed.

Among all the things you could do, this is probably the worst. Assuming you have the privileges to do it, you will become the target of a new blaming culture in the company, and of the PM's anger when they realize that you have coupled A's development lifecycle to B's. Not to mention how it will feel when service C's team manages to get its nose into your database...

Stick with mocks and don't add integration tests to service A. Instead, add some functional tests to service B which test the various REST endpoints.

It will make the tests unrealistic. From a functional standpoint that might seem irrelevant, but from the non-functional requirements perspective, integration tests are mandatory. You don't want to wait until production to learn the system's actual SLA, overall performance or fault tolerance.

[1]: It should be possible to execute all our tests in any order, at any time, and in any environment.
[2]: Mountebank might be of interest.

Laiv

Your integration tests should not involve the DB directly. The real question is which interaction you want to test:

+-----+      +-----+      +------+
|  A  |<---->|  B  |<---->|  DB  |
+-----+      +-----+      +------+

Are you trying to test the A–B interaction or the A–B–DB interaction? If you only want to test the A–B subsystem and the B service is some kind of abstraction over the database where no other service is assumed to write to the DB, then you should not access the DB even for your tests.

The most important problem is that B is no longer free to change the database without also updating your A–B integration tests: renaming tables, adding columns, changing the DB technology, etc.

The simplest way to test A–B in isolation is to launch a separate A and B instance just for the test. The DB is effectively a part of B, so you also want to start with a fresh database. It is sometimes feasible to create a new database per test, which would be ideal; e.g. creating a new SQLite database is super simple. If setting up a DB is more involved, you can keep a DB around for integration tests that is reset before each test.
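Here is a sketch of how that could look with pytest, assuming service B can be launched as a local process and pointed at a SQLite file via an environment variable. Both the entry point and the configuration knob are made up here; adapt them to however B is really configured.

    import os
    import subprocess
    import time

    import pytest
    import requests


    def _wait_until_up(url, timeout=10.0):
        """Poll until the service answers, so tests don't race its startup."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                requests.get(url, timeout=1)
                return
            except requests.ConnectionError:
                time.sleep(0.2)
        raise RuntimeError(f"service at {url} did not start in time")


    @pytest.fixture
    def service_b(tmp_path):
        # Fresh SQLite file per test: the DB is part of B, so it is recreated with B.
        db_file = tmp_path / "service_b.sqlite3"
        env = dict(os.environ, SERVICE_B_DB=str(db_file))  # hypothetical config knob
        proc = subprocess.Popen(["python", "-m", "service_b"], env=env)  # hypothetical entry point
        try:
            _wait_until_up("http://localhost:8081/health")  # hypothetical health endpoint
            yield "http://localhost:8081"
        finally:
            proc.terminate()
            proc.wait()


    def test_a_against_real_b(service_b):
        # Service A's client (or A itself) talks to the freshly started B instance.
        response = requests.post(f"{service_b}/users", json={"name": "alice"})
        assert response.status_code == 201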

Integration tests are different from unit tests. In unit tests, you want to perform every test in isolation. This is not always feasible for integration tests, because initialization of the environment is often very involved and time-consuming. It is often best to organize your individual test cases into test suites, where each test case depends on the results of the previous one. The environment is only initialized at the beginning of each test suite. In exchange for faster tests you pay with less useful test results: if an early test case in a suite fails, the remaining tests can't be run.
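With pytest, one possible shape (a sketch, not the only way to do it) is a module-scoped fixture plus test cases that deliberately build on each other, so the expensive setup happens once per suite. The service address and endpoints below are assumptions.

    import pytest
    import requests

    BASE_URL = "http://localhost:8081"  # assumed address of the B instance under test


    @pytest.fixture(scope="module")
    def suite_state():
        # Expensive environment setup would happen here, once per suite (module).
        # The dict carries results forward from one test case to the next.
        return {}


    def test_create_user(suite_state):
        response = requests.post(f"{BASE_URL}/users", json={"name": "alice"})
        assert response.status_code == 201
        suite_state["user_id"] = response.json()["id"]


    def test_fetch_created_user(suite_state):
        # Depends on the previous test; if creation failed, this result means little.
        user_id = suite_state["user_id"]
        response = requests.get(f"{BASE_URL}/users/{user_id}")
        assert response.json()["name"] == "alice"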

One simple way to add integration tests to an existing system is to record real-world interactions and then replay them for the test. E.g. I once wrote a tool that could parse the log output of a system: to create a new test case I would copy the input and output from the logfile, anonymize the data, and save it to files testname.in.txt and testname.out.txt. The test runner would then go through a directory full of these files, replay the input, and diff the result against the expected output. However, you do have to take care to select representative test cases; repeating similar tests is a waste of time.
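A stripped-down sketch of such a replay runner, assuming each .in.txt file holds a request path and a plain text diff is the pass/fail criterion; a real tool can of course parse richer inputs.

    import difflib
    import pathlib

    import requests

    BASE_URL = "http://localhost:8081"  # assumed address of the service under test
    CASE_DIR = pathlib.Path("testcases")


    def run_replay_cases():
        failures = []
        for in_file in sorted(CASE_DIR.glob("*.in.txt")):
            expected_file = in_file.with_name(in_file.name.replace(".in.txt", ".out.txt"))
            # Each .in.txt is assumed to hold a request path; real inputs could
            # just as well be full request bodies or anonymized log lines.
            path = in_file.read_text().strip()
            actual = requests.get(f"{BASE_URL}{path}").text
            expected = expected_file.read_text()
            if actual != expected:
                diff = "\n".join(difflib.unified_diff(
                    expected.splitlines(), actual.splitlines(),
                    fromfile=str(expected_file), tofile="actual", lineterm=""))
                failures.append((in_file.name, diff))
        return failures


    if __name__ == "__main__":
        for name, diff in run_replay_cases():
            print(f"FAIL {name}\n{diff}\n")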

amon

You are right that a test which accesses the database layer directly will be more brittle; however, depending on the value the test provides, it might still be worth it. For example, if you are testing an important, fragile piece of functionality in a legacy app, then the value provided by the test may well be worth the cost of maintaining a brittle test.

That said, there are a couple of alternative approaches that you might find useful.

  1. Set up your services and tests so that either the changes can be reset through the API, or the changes do not need resetting at all. For example, if your test creates a user, expose an endpoint that allows you to delete or deactivate users, or create test users with a prefix that is unique to each test run and ignore those with a different prefix. Remember that testability is a (very valuable) feature of software; you shouldn't feel that it is somehow controversial to introduce features exclusively to improve the testability of your software.

  2. Test against a fake HTTP server, e.g. a mock implementation that records the requests it receives and sends appropriate responses based on the test being run (see the sketch after this list). This has the disadvantage of not testing interaction with the "real" service, but it provides coverage that unit tests cannot. In fact, this sort of testing can cover scenarios that are more difficult to exercise against the "real" service, e.g. error responses or high latency.
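As an illustration of the second option, a minimal fake server can be built with nothing but the Python standard library; the endpoint and the canned payload below are invented for the example.

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import requests


    class FakeServiceB(BaseHTTPRequestHandler):
        # Every request the fake "service B" receives is recorded here so the
        # test can assert on what service A actually sent.
        recorded_requests = []
        canned_response = {"id": 1, "name": "alice"}

        def do_GET(self):
            FakeServiceB.recorded_requests.append(("GET", self.path))
            body = json.dumps(FakeServiceB.canned_response).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep test output quiet
            pass


    def test_client_handles_user_response():
        server = HTTPServer(("localhost", 0), FakeServiceB)  # port 0 = pick a free port
        threading.Thread(target=server.serve_forever, daemon=True).start()
        try:
            base_url = f"http://localhost:{server.server_port}"
            # In the real test this call would go through service A's client code.
            user = requests.get(f"{base_url}/users/1").json()
            assert user["name"] == "alice"
            assert FakeServiceB.recorded_requests == [("GET", "/users/1")]
        finally:
            server.shutdown()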

Justin