5

Automated testing has been pretty hyped-up in recent years, with particular emphasis on TDD at the "unit" level. The touted advantages include things like:

  • Stabilizing existing code: breaking changes are identified before deployment
  • Coercing code, to some extent, into smaller, more testable structures
  • A codebase that self-documents requirements
  • A codebase that self-documents the usage of its units and modules

The perceived disadvantages include things like:

  • Increased initial development time
  • Increased lines of code per requirement (test code + implementation)
  • The patterns required to make certain features testable are, in some cases, said to drastically increase the complexity / lines of actual implementation code required per feature
  • Increased developer skills requirement
  • Increased tooling

There are also some simple limitations and impediments:

  • Tests cannot ensure code is bug-free; they only verify the specific, test-encoded requirements
  • Tests can only validate testable code
  • Some interfaces and/or hardware may be very/prohibitively time-consuming to mock or fake
  • Some requirements cannot [yet] be tested automatically (e.g., "it looks good")

And so, for a given project, either new or ongoing, how should an unbiased, professional programmer weigh these things against each other to determine whether an automated testing suite is a net gain?

Or, if knowing ahead of time is too fuzzy a science: having implemented an automated testing suite, can we measure and/or demonstrate its net impact?


For example:

We might conclude "intuitively" that a large, widely used financial system benefits from a thorough automated test suite. Without crunching any numbers, the cost of adding many meaningful tests "feels" lower than the liability of financial transactions that "forget" which accounts the money is really in.

On the other end, a test suite of any size around a PHP implementation of the echo command is probably going to inflate our development time by a factor of 10 or 100 with no observable gain.

But, what about the things between the extremes?

svidgen
  • 15,252

2 Answers

5

For me there is a simple criterion for deciding whether automatic tests are needed:

Are you planning to evolve and maintain the project? "Yes" - you should write tests.

There are some cases where you can go without tests; mostly it is "do once and forget" work, like a small website, a CMS setup, etc. Anything you do quickly, test once to see that it works, and then forget about.

For anything more complex, tests are necessary. And there are no real disadvantages:

Increased initial development time

This is a very common mistake. Tests decrease development time at every stage of development.

If you don't write tests the development process is like this:

  • Write some new code, often touching some old code too
  • Manually test if it works
  • Guess what else might have been broken by your changes, and re-test that too
  • If you are lucky, you catch the things that worked before but are now broken (regressions)
  • Go fix it
  • Repeat the test / fix process

Besides your main goal (writing the new code), you have to do a lot of boring and repetitive work. The amount of this work grows really quickly as the project evolves.

What is even worse: you do all this boring, repetitive work and you still cannot guarantee that you didn't break something that worked before. Nor can you guarantee that you have thoroughly tested all the cases for the new code.

Bugs and regressions will make it to production. It is really annoying for users when an update's new features don't work as expected and old features are broken. And these bugs will eat even more time: users, support, managers, and developers will all spend time finding and fixing them.

Compare this with the scenario where you have tests and add tests for any bug you fix:

  • Write some new code, often touching some old code too
  • Add tests for the new code
  • Run the test suite
  • Fix errors
  • Repeat until it passes

The test / fix cycle with automatic tests is way faster than manual testing, and it actually re-tests everything that was working before. You can't even dream of that with manual testing.

The only additional step is to add tests for the new code. It takes much less time than all the extra effort and problems that come with manual testing. Plus, it is neither boring nor repetitive; it is exactly what programmers can and should do: write code.

Yes, you still cannot guarantee that there are no bugs. But you can be sure that all the existing features work and that the new feature works too (at least for the cases you added tests for).
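To make that concrete, here is a minimal sketch of "add a test for every bug you fix" (Python for illustration; the slugify function and the bug are hypothetical):

    # test_slugify.py: a regression test added alongside a bug fix.
    import unittest

    def slugify(title):
        # The (hypothetical) fix: split() collapses runs of whitespace,
        # so "Hello  World" becomes "hello-world", not "hello--world".
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_regression_double_space(self):
            # Encodes the bug report; if the bug ever returns, this fails.
            self.assertEqual(slugify("Hello  World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()

Once a regression test like this is in the suite, the fixed bug can never silently return, and re-running the whole suite costs seconds instead of a manual re-test pass.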

Increased lines of code per requirement (test code + implementation)

Maybe, but lines of code don't mean anything once we know that tests save development time.

For example, you need to implement a popup calendar on your website:

  • Programmer A cares about lines of code and writes the calendar in pure JS: 3,000 lines of code and two weeks of work
  • Programmer B doesn't care; he grabs jQuery + jQuery UI + some plugin, adds 50,000 lines of code, and is done in 3 hours

Which do you prefer?

The patterns required to make certain features testable are, in some cases, said to drastically increase the complexity / lines of actual implementation code required per feature

Again, this is a mistake.

The same patterns required for testability also improve your code structure. Testing makes you split the code into small, reusable, independently testable units, which is a clear win for the overall code structure.
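For instance, here is a minimal (hypothetical) sketch of the kind of restructuring testability pushes you toward: the business rule becomes a small, pure unit, and I/O stays at the edges:

    # Before (hard to test): the discount rule is tangled with database I/O.
    #
    #     def apply_discount(order_id):
    #         order = db.load(order_id)      # needs a live database
    #         if order.total > 100:
    #             order.total *= 0.9
    #         db.save(order)
    #
    # After: the rule is a pure, independently testable function.
    def discounted_total(total, threshold=100, rate=0.9):
        """No database, no mocks, trivial to test."""
        return total * rate if total > threshold else total

    # The tests need no setup at all.
    assert discounted_total(200) == 180
    assert discounted_total(50) == 50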

Increased developer skills requirement

Not exactly. Even beginner developers who have to write unit tests will also learn to organize their code in better ways, and will improve as programmers.

If they don't have to write tests, you'll get a whole project implemented as one solid mess, which will fall apart really quickly. Adding a new feature will generate 10 new bugs, and at some point it will become impossible to add new features at all.

Increased tooling

I am not sure I understand this point.

Do you mean that you will have to use some Continuous Integration server? That is a good thing. It will help you shorten the develop-release cycle, and you'll have no fear of giving your users the new version.

Do you mean some infrastructure to write and run tests? There are plenty of options available for every programming language.

Regarding the general limitations of unit tests you mentioned: these are real, and you'll have to deal with them once you start writing tests.

But that is no reason to go without tests entirely. Write tests where it is possible, manually test the rest, and you'll quickly find ways to add tests for the rest too, just to avoid the boring chore.
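One common trick for those hard-to-test edges is a thin seam: hide the awkward interface behind a small abstraction so the logic that depends on it stays automatically testable. A minimal sketch, assuming a hypothetical temperature sensor:

    # Only read_celsius() on the real device needs manual testing;
    # all decision logic remains automatically testable via a fake.
    class FakeSensor:
        def __init__(self, value):
            self.value = value

        def read_celsius(self):
            return self.value

    def needs_cooling(sensor, limit=75.0):
        # Depends only on the sensor's interface, so a fake works fine.
        return sensor.read_celsius() > limit

    assert needs_cooling(FakeSensor(90.0))
    assert not needs_cooling(FakeSensor(20.0))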

0

"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident." - Schopenhauer

At a macro level, this argument was had a long time ago over formal testing. The argument was that programmers are highly trained professionals who should write code without defects. This is of course ridiculous, since (even if it were true) many external factors come into play:

  • Woolly/changing requirements
  • Changing environment
  • Limited time to test
  • Changes in external systems
  • Different expectations for various users
  • Feature fatigue

The benefit that never seems to get talked about with automated unit testing during development is that a failed test fails early. It is all too easy to just fix the test and carry on blithely on your way. But consider how expensive the change would have been to attend to had the software already shipped.
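As a minimal sketch of failing early (the pricing rule here is hypothetical): once a test pins down shipped behavior, any later change that silently alters it breaks the build immediately, where it is cheap to investigate:

    # The test documents the shipped rule: fractions of a cent are truncated.
    def truncate_price(price):
        return int(price * 100) / 100

    def test_truncates_fractions_of_a_cent():
        assert truncate_price(9.999) == 9.99

    # If someone later "simplifies" this to round(price, 2), the suite
    # fails at build time, long before a customer spots a billing error.
    test_truncates_fractions_of_a_cent()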

So... your question. Simple answer: you don't. The very nature of software is that no two pieces are the same, so you can't always compare like with like. That is something the average layperson doesn't get. They cry, "Why are there bugs?" But the truth is that every new piece of software is essentially a prototype, warts and all.

Something you can measure, however, is failures in CI server builds. If tests fail there, then either the developer hasn't run the tests locally or there was an issue with the merge/build setup. Failures flagged here indicate failures that would otherwise have surfaced further down the road, when they would have been potentially more expensive to fix.

Robbie Dee
  • 9,823