
I've worked on a few codebases without a great automated test suite, where platform-wide changes had to be thoroughly tested by developers by hand, and each release carried a high risk of introducing a regression. How do you prevent this from happening?

The less-than-perfect answers from my experience are:

  • Test thoroughly yourself - completely unsustainable on large projects
  • Organise group testing sessions - can be effective, but a bit chaotic and a pain in the ass for everyone involved
  • Manual release QA tests - a soul-sucking affair where some poor sod has to trawl through a spreadsheet of manual tests, limited by their knowledge of the platform and of existing bugs

What would your approach be, aside from improving the test suite or hiring dedicated QA staff?

styke

2 Answers


Tests prevent regressions. There are other tools that can prevent regressions, but unfortunately they are either less effective (example: code reviews) or extremely expensive (example: formal proofs), which leaves tests as the best option.

So you find yourself in front of a codebase that should be tested but isn't yet, or not enough, and you have to make a change. The way to prevent regressions is to progressively increase the test coverage.

When you touch a small part of the codebase, check whether it's tested. If you are confident that most if not all of the logic is covered, that's great. If not, start by adding tests (see the sketch after this list). The great thing is that while you do it, you will:

  • Better understand the part you want to modify.
  • Have a better view of how well this part of the code matches the business requirements.
  • Find a bunch of errors in the logic of the code, such as branches that can never be reached.
  • Get a clear picture of the change you are expected to make.
  • Spot some possible refactorings.
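
For instance, a minimal characterization-test sketch in Python with pytest; `pricing` and `calculate_discount` are invented names standing in for whatever untested code you are about to touch:

    # Characterization tests: pin down what the untested code does today,
    # right or wrong, before modifying it.
    import pytest

    from pricing import calculate_discount  # hypothetical module under test

    def test_regular_customer_gets_no_discount():
        assert calculate_discount(order_total=100.0, loyalty_years=0) == 0.0

    def test_loyal_customer_gets_ten_percent():
        assert calculate_discount(order_total=100.0, loyalty_years=5) == 10.0

    def test_negative_total_raises():
        # Writing cases like this is often how you discover unreachable
        # branches or behaviour that contradicts the business requirements.
        with pytest.raises(ValueError):
            calculate_discount(order_total=-1.0, loyalty_years=0)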

Each of these reduces the risk that your change causes a regression. Most regressions come from changes where developers misunderstood the code they were changing, or misunderstood the change itself.

Once you've tested it, the change itself follows a short loop (sketched below):

  1. Write the test that will check for the change you're about to make.
  2. Make your change.
  3. Re-run the tests.
  4. Refactor.
  5. Re-run the tests.
  6. Release.
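
A minimal sketch of steps 1 and 2, reusing the hypothetical `calculate_discount` from above; the requested change (a 15% tier at ten loyalty years) is invented for illustration:

    # Step 1: write the test before touching the code; run it and watch it
    # fail against the current implementation.
    def test_decade_customer_gets_fifteen_percent():
        assert calculate_discount(order_total=100.0, loyalty_years=10) == 15.0

    # Step 2: make the change in the hypothetical pricing module so the
    # test passes. Steps 3-6: re-run the whole suite, refactor, re-run
    # again, release:
    #     $ pytest    # must be green after the change and after the refactor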

If you're not under time pressure:

  • Make the additional effort to refactor more, if needed. Sometimes there are opportunities to simplify the code tremendously, possibly replacing dozens of lines of code with a call to a method, removing duplication, or making a challenging algorithm easy enough for a junior programmer to understand. Since the code is tested now, you can refactor aggressively without the risk of creating regressions while doing so (see the sketch after this list).
  • If, during testing, you noticed discrepancies between the code and the business requirements, go see the product owner.
  • If you still have free time, test the code that surrounds the piece you already tested, especially if that lets you extend the refactoring further.
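
As an illustration of the first point, an invented before/after of the kind of simplification that becomes safe once the code is under test:

    from dataclasses import dataclass

    @dataclass
    class Invoice:            # minimal stand-in type for the example
        amount: float
        paid: bool

    # Before: a hand-rolled loop of the kind that tends to get duplicated.
    def total_owed_before(invoices):
        total = 0.0
        for invoice in invoices:
            if not invoice.paid:
                total += invoice.amount
        return total

    # After: the same behaviour as a one-liner. Because the code is now
    # tested, the refactor is verified by simply re-running the suite.
    def total_owed(invoices):
        return sum(inv.amount for inv in invoices if not inv.paid)

    sample = [Invoice(10.0, False), Invoice(5.0, True)]
    assert total_owed(sample) == total_owed_before(sample) == 10.0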

Make sure you also read I've inherited 200K lines of spaghetti code — what now?


Writing tests really is the way to go, but assuming you can't, I would go with "more logging".

If you are logging all your errors, then you can tell whether a release increases or decreases the number of errors of various types.

If a feature release decreases errors, you are good. If it increases them, you have introduced a new problem.
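
A rough sketch of that comparison, assuming an invented one-line log format; in practice the lines would come from the log files of the two releases:

    from collections import Counter

    def error_counts(log_lines):
        # Count ERROR entries by type. Assumes lines shaped like
        # '2024-05-01T12:00:00 ERROR PaymentTimeout ...' (invented format).
        counts = Counter()
        for line in log_lines:
            parts = line.split()
            if len(parts) >= 3 and parts[1] == "ERROR":
                counts[parts[2]] += 1
        return counts

    previous = error_counts(["t0 ERROR PaymentTimeout card declined",
                             "t1 INFO checkout ok"])
    current = error_counts(["t2 ERROR PaymentTimeout card declined",
                            "t3 ERROR NullAddress missing street"])

    for error_type in previous.keys() | current.keys():
        delta = current[error_type] - previous[error_type]
        if delta > 0:
            print(f"{error_type}: +{delta}  <- possible new regression")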

Coupled with a canary release, where you release the new version to a subset of users, you should at least be able to gauge whether the new release is better than the last, even if you can't tell whether it's working correctly or not.
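
As a sketch, the canary check can be as simple as comparing error rates between the two cohorts; the function name and the tolerance threshold are invented:

    def canary_looks_healthy(canary_errors, canary_requests,
                             stable_errors, stable_requests,
                             tolerance=1.2):
        # True if the canary's error rate is at most `tolerance` times the
        # stable release's rate. The 1.2 factor is purely illustrative.
        canary_rate = canary_errors / canary_requests
        stable_rate = stable_errors / stable_requests
        return canary_rate <= stable_rate * tolerance

    # 15 errors in 1,000 canary requests vs 100 in 10,000 stable requests:
    # 1.5% exceeds 1.0% * 1.2, so the canary looks worse than stable.
    print(canary_looks_healthy(15, 1_000, 100, 10_000))   # False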

Ewan