3

What are your opinions on writing "production" code to facilitate testing?

Specifically, the use case is this: we have a system with a multi-step workflow, where a few stages are done by humans.

We are looking to build automated tests for this workflow (not unit tests; those already exist), but we need something to stand in for the human steps in the workflow.

The easiest option, by far, is to write a bit of code in the application itself to do those steps when configured to do so, with plenty of safeguards to ensure it never runs in production. Not only is this the easiest option to implement, it will also have the lowest profile and the lowest latency when moving entities through the workflow stages. Certainly, this is irregular, but is it terrible?
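To make the idea concrete, here is a minimal sketch of what such a hook could look like, assuming a hypothetical workflow step class and environment variables as the production safeguard (all names are invented for illustration, not taken from our system):

    import os

    class ManualApprovalStep:
        """A workflow stage that a human normally completes."""

        def execute(self, item):
            if self._auto_complete_enabled():
                # Test-only path: stand in for the human and approve the item.
                item.status = "approved"
                item.approved_by = "auto-test"
                return
            # Normal path: leave the item waiting for a real person.
            item.status = "awaiting_approval"

        @staticmethod
        def _auto_complete_enabled():
            # Safeguard: requires an explicit opt-in AND a non-production environment.
            return (
                os.environ.get("WORKFLOW_AUTOCOMPLETE_HUMAN_STEPS") == "1"
                and os.environ.get("DEPLOY_ENV", "production") != "production"
            )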

Another option is to write and deploy a separate "agent" to do those steps, but of course, we then have to manage that agent and it is more complex to write. (We are not looking to test that part of the application at the moment.)

Yet another option is to use some sort of workflow automation framework. This is probably the most complicated option, but it may have other use cases in automated testing. If so, do you all have recommendations?

Kramer
  • 147

3 Answers

4

I try to avoid code that is not designed for use in an operational configuration or that is not traceable to a requirement. This comes from my background in safety-critical systems, where dead code must be removed and deactivated code needs additional verification to ensure it cannot be activated in production systems. That extra verification adds to the cost and complexity of long-term maintenance.

Without understanding the system, I'm unsure why you need any of the proposed options, and why standard system-level automated tests won't work. Suppose you have a pipeline that consists of processing steps A, B, and C, where A and C are automated and B is a manual step that occurs outside the software. In that case, you should be able to define the input partitions for A and C such that all valid (and perhaps even some invalid) inputs are covered. Given those inputs, you can make assertions about the outputs of A and C.

It seems like you may not need to test the workflow end-to-end thoroughly. Instead, you would want to put the system into various states and assert that each processing step behaves as expected, resulting in the expected final state.

Of course, a better understanding of the workflow would help. But if the manual work happens outside the software system, I would only expect you to need to assert the expected state and output, given defined inputs, for the workflow steps performed inside the system.
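For example, a system-level test for step C can seed the state that manual step B would normally leave behind and then assert on C's output. Here is a minimal pytest sketch; the module, class, and field names are hypothetical stand-ins for whatever your application actually uses:

    import pytest

    # Hypothetical application modules; the real names will differ.
    from app.workflow import run_step_c
    from app.models import WorkItem


    @pytest.fixture
    def item_after_manual_review():
        # Seed the state that manual step B would have produced,
        # rather than simulating the human inside the application.
        return WorkItem(id=42, status="reviewed", reviewer_notes="looks fine")


    def test_step_c_processes_reviewed_item(item_after_manual_review):
        result = run_step_c(item_after_manual_review)

        # Assert on C's output for this input partition.
        assert result.status == "completed"
        assert result.errors == []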

Thomas Owens
  • 85,641
2

Disclosure: I'm a member of a UI testing platform project.

When human actions are required for a test, they can be automated by dedicated tools, avoiding the need to leak test code into production.

This is usually done in

  • system tests, where all test steps are modeled as human input (Selenium, TestComplete, RCPTT); see the sketch after this list
  • but there are also libraries for integration tests, where human actions are only mocked occasionally (SWTBot, Robot)
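For the first case, here is a short Selenium sketch that performs the "human" approval through the real UI, so no test-only code ends up in the application. The URL and element IDs are invented for the example:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # Hypothetical staging URL and element IDs.
        driver.get("https://staging.example.com/workflow/42")
        driver.find_element(By.ID, "review-notes").send_keys("Looks fine")
        driver.find_element(By.ID, "approve-button").click()  # the "human" action
        assert "Approved" in driver.find_element(By.ID, "status").text
    finally:
        driver.quit()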
Basilevs
  • 3,896
0

Your approach is reasonable.

  • When debugging, we sometimes have to add or design code purely for debugging purposes, with switches to disable it in the production environment.

  • When unit testing, we often write code that is only executed in the unit test environment, not in production. We also design our code differently to make unit testing possible or smoother.

So when it comes to broader-scale testing scenarios, it can be perfectly justified to add some code whose exclusive purpose is to support these scenarios. Choosing whichever approach you think is the most effective is pragmatic and gets things done.
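One way to keep such test-support code contained is to put the human step behind an interface, so a test can substitute a stub for it instead of relying on a runtime switch. A minimal sketch with invented names:

    class HumanReviewStep:
        """Production implementation: waits for a real person to act."""

        def complete(self, item):
            item.status = "awaiting_review"


    class StubReviewStep:
        """Test-only stand-in for the human reviewer."""

        def complete(self, item):
            item.status = "reviewed"


    class Workflow:
        # The review step is injected, so a test can swap in the stub
        # without the production code path knowing about any test mode.
        def __init__(self, review_step):
            self._review_step = review_step

        def advance(self, item):
            self._review_step.complete(item)
            return item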

Doc Brown
  • 218,378