
In the company I work for there is a requirement that all code be covered by some kind of test, because they want as few user-reported defects as possible.

With this in mind I decided to set a policy that the code should be unit tested to C0 (100% coverage of each line, not of each condition).
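
To make the distinction concrete, here is a minimal hypothetical sketch (Python only for brevity; it is not code from our product). A single test can reach 100% line coverage (C0) while never exercising one of the conditions:

```python
# Hypothetical example showing what C0 does and does not catch.

def apply_discount(total, discount):
    # Cap the discount so the customer never pays a negative amount.
    if discount > total:
        discount = total
    return total - discount

def test_apply_discount():
    # This single test executes every line of apply_discount, so it reaches
    # 100% line coverage (C0)...
    assert apply_discount(100, 150) == 0
    # ...yet the "discount <= total" branch is never taken, so a defect on
    # that path would still slip through despite the 100% figure.
```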

Now others say that this policy is more expensive than doing manual testing.

Intuitively I would say this is wrong: every time you make a change you have to retest everything manually, and that is probably more effort than just running the tests and updating the ones that need updating. But I can't find a way to back this up with numbers, papers, or other information.

Can you give me some good points to justify my approach to the people I report to?

EDIT: Thanks to all who helped with the question. I see that some key points were missing from it:

  • This is a new product that we started developing a year ago with testing in mind, so we use DI everywhere and everything is prepared for testing.
  • We already have a commercial product that allows us to reach 100% coverage, as we can mock almost everything, including .NET classes, so we can truly isolate classes (JustMock).
  • We have tools to calculate test coverage.
  • We are not removing testers or manual testing; we are removing manual testing done by developers. We have a separate SQ team, but management wants the number of bugs that reach the SQ team to be as small as possible, so developers must reach 100% coverage by any means before delivering code to the SQ team. What I did was replace developer manual testing with automated testing (unit and integration tests), and that is what management wants to revert.

The question is not the same as the "duplicate" one, as I already have a 100% coverage requirement.

3 Answers


Most industry specialists agree that unit testing helps and reduces the defect rate; there have been studies based on multiple projects that show this. You can find concrete results of these studies in software engineering books; see, for example, Steve McConnell's Code Complete, chapter 20.3, Relative Effectiveness of Quality Techniques.

On the critical side, you're making assumptions that might not hold true in all cases:

  • it really depends on your application and its domain (I can't see how you could release a complex game, for example, without any manual testing)
  • it depends on whether you're at the beginning of the application's life and changing lots of things, or you already have a fairly stable application
  • maybe your application has a short life span and investing much in automated testing is not worth it
  • it depends on the workforce; maybe manual testers in your area are much cheaper than developers or testers who know how to code

Having 100% unit test code coverage doesn't make sense (in most cases) because:

  1. It does not guarantee your application is bug-free; there are still integration issues, performance issues, security issues, etc. The most important benefit of unit tests, I believe, is the assurance that you can refactor code without breaking it; they are not a replacement for finding these other types of issues.

  2. It's extremely difficult to reach. Basically, unit tests show you that your separate units work, but your application consists of many interconnected units, and its correctness also depends on how those units are connected (see the sketch after this list).

  3. See a much better argument by Joel Spolsky.
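
To illustrate point 2 with a minimal, hypothetical sketch (the code is illustrative, not taken from the question): both units below pass their own unit tests, yet their composition is broken, and only an integration-level test reveals it.

```python
# Hypothetical illustration: each function passes its own unit test, but the
# two units disagree about units of measure (cents vs. dollars), so the
# composed behaviour is wrong.

def price_in_cents(quantity, unit_price_cents):
    """Total price of an order line, in cents."""
    return quantity * unit_price_cents

def format_invoice_line(total_dollars):
    """Render a dollar amount for the invoice."""
    return f"Total: ${total_dollars:.2f}"

def test_price_in_cents():
    assert price_in_cents(3, 250) == 750                 # passes in isolation

def test_format_invoice_line():
    assert format_invoice_line(7.50) == "Total: $7.50"   # passes in isolation

def test_invoice_integration():
    # The caller feeds cents into a function that expects dollars.
    line = format_invoice_line(price_in_cents(3, 250))
    assert line == "Total: $7.50"                        # fails: "Total: $750.00"
```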

To sum up: it's a very good idea to add unit tests if you don't have any, but it's a bad idea to fully replace manual testing with 100% line coverage in unit tests.

Random42

Do some back-of-the-envelope math to figure out where the break-even point is.

If it takes x hours for a manual tester who is paid $y/hr to run through a manual test, and the test has to be run z times per year, the cost of the manual test is x * y * z per year. If it takes n hours for someone making $m/hr to automate that test, that's a one-time cost of n * m. Then figure out how long it takes for the one-time investment in the automated test (n * m) to outweigh the annual cost (x * y * z) of the manual test.
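
A rough sketch of that arithmetic with placeholder numbers (the figures are illustrative assumptions, not values from the question):

```python
# Back-of-the-envelope break-even estimate for automating one manual test.
# All figures are placeholders; plug in your own rates and frequencies.

manual_hours_per_run = 2      # x: hours per manual run
manual_hourly_rate = 30       # y: $/hr for the manual tester
runs_per_year = 24            # z: runs per year

automation_hours = 16         # n: hours to automate the test (one-time)
automation_hourly_rate = 60   # m: $/hr for the person automating it

annual_manual_cost = manual_hours_per_run * manual_hourly_rate * runs_per_year  # x * y * z
one_time_automation_cost = automation_hours * automation_hourly_rate            # n * m

break_even_years = one_time_automation_cost / annual_manual_cost
print(f"Manual: ${annual_manual_cost}/year; automation: ${one_time_automation_cost} once; "
      f"break-even after {break_even_years:.1f} years.")
# With these placeholder numbers the automated test pays for itself in about 8 months.
```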

Most of the time, building the automated test is a no-brainer: it's a bit more expensive as a one-time cost, but it saves the recurring cost of running the manual test. And that's before you account for the fact that manual tests are generally less reliable, since humans are fallible and will occasionally skip steps. Occasionally, though, you'll find tests that you only want to run now and then, that are hard to automate, and that are easier to just have a human run manually before major releases.

Justin Cave

Whilst there is merit in having unit tests, most people take it too far: trying for 100% unit test coverage is trying to turn your company into this. You need a more pragmatic approach, especially as unit testing does not guarantee your application is bug-free; you still need plenty of integration testing to prove that all the isolated bits work together.

Manual testing is better replaced with automated integration-test tools. There are plenty around (e.g. Cucumber or SpecFlow) that should be much more readily recognised as a replacement for manual testing, without getting into the maintenance nightmare of 100% unit test coverage.

Unit testing is often abused; 100% unit test coverage is definitely a bad smell.

gbjbaanb