31

Is it feasible to expect 100% code coverage in heavy jQuery/Backbone.js web applications? Is it reasonable to fail a sprint due to 100% coverage not being met when actual code coverage hovers around 92%-95% in JavaScript/jQuery?

willz
  • 429

13 Answers

33

It is as realistic as it is unrealistic.

Realistic

  • If you have automated testing that has been shown to cover the entire code base, then insisting upon 100% coverage is reasonable.
  • It also depends upon how critical the project is. The more critical, the more reasonable it is to expect / demand complete code coverage.
  • It's easier to do this for small to medium sized projects.

Unrealistic

  • You're starting at 0% coverage ...
  • The project is monstrous, with many, many error paths that are difficult to recreate or trigger.
  • Management is unwilling to commit / invest to make sure the coverage is there.

I've worked on projects running the gamut from no coverage to decent coverage. Never a project with 100%, but there were certainly times I wished we had closer to 100% coverage.
Ultimately, the question is whether the existing coverage meets enough of the required cases for the team to be comfortable shipping the product.

We don't know the impact of a failure on your project, so we can't say whether 92% or 95% is enough, whether 100% is really required, or, for that matter, whether that 100% fully tests everything you expect it to.

33

Who tests the tests?

At best it is very naive: unrealistic even in the theoretical sense, and impractical in a business sense.

  • It is unrealistic for code with high cyclomatic complexity: there are too many variables to cover every combination (see the sketch after this list).
  • It is unrealistic for code that is heavily concurrent: such code is not deterministic, so you can't cover every condition that might happen, because behavior will change on every test run.
  • It is unrealistic in a business sense: writing tests only really pays dividends for critical-path code, that is, code that is important and code that may change frequently.
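
To make the first point concrete, here is a hypothetical sketch (the function and flags are invented) of how quickly the path count outruns the line count:

    // Four independent boolean flags already give 2^4 = 16 possible paths
    // through this function; real code with loops, early returns, and
    // external state grows much faster.
    function widgetClasses(isAdmin, isMobile, hasData, isLegacyBrowser) {
        var classes = [];
        if (isAdmin) { classes.push('admin'); }
        if (isMobile) { classes.push('mobile'); }
        classes.push(hasData ? 'loaded' : 'empty');
        if (isLegacyBrowser) { classes.push('fallback'); }
        return classes.join(' ');
    }

    // A couple of tests will touch every line, but exercising every
    // combination of conditions takes all 16 cases, and this is a tiny
    // function.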

Testing every line of code isn't a good goal

Tests are expensive to write: they are code that has to be written and tested itself, code that has to be documented as to what it is actually trying to test, and code that has to be maintained as the business logic changes, failing when they fall out of date. Maintaining automated tests and the documentation about them can sometimes be more expensive than maintaining the code itself.

This is not to say that unit tests and integration tests aren't useful, but only where they make sense, and outside of industries where failures can kill people, it doesn't make sense to try to test every line of code in a code base. Outside of those safety-critical code bases, it is impossible to calculate a positive return on the investment that 100% code coverage would entail.

Halting problem:

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running or continue to run forever.

Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proven to be undecidable.

Since you cannot even prove that something works 100%, why make that your goal?

Plain and simple, in most cases it doesn't make any business sense.

17

In most cases, 100% code coverage means that you've "cheated" a little bit:

  • Complex, frequently changing parts of the system (like the GUI) have been moved to declarative templates or other DSLs.
  • All code touching external systems has been isolated or handled by libraries.
  • The same goes for any other dependency, particularly the ones requiring side effects.

Basically, the difficult-to-test parts have been shunted to areas where they don't necessarily count as "code". It's not always realistic, but note that, independent of helping you test, all of these practices make your codebase easier to work on.
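
For instance (a minimal, invented sketch), the jQuery plumbing can be reduced to a one-line wrapper while the logic stays pure and trivially coverable:

    // Pure, dependency-free logic: easy to drive to 100% coverage.
    function formatUserLabel(user) {
        return user.name + (user.isAdmin ? ' (admin)' : '');
    }

    // Thin wrapper around the external system. There is almost nothing
    // here to test, which is the point: the hard-to-cover part is one
    // line of plumbing rather than business logic.
    function fetchUserLabel(userId, callback) {
        $.getJSON('/users/' + userId, function (user) {
            callback(formatUserLabel(user));
        });
    }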

12

For an impressive, real world example of 100% branch coverage, see How SQLite is Tested.

I realize your question specifically asks about javascript which is an entirely different type of software product, but I want to bring awareness to what can be done with sufficient motivation.

Bryan Oakley
  • 25,479
10

100% unit test coverage for all pieces of a particular application is a pipe dream, even with new projects. I wish it were the case, but sometimes you just cannot cover a piece of code, no matter how hard you try to abstract away external dependencies.

For example, say your code has to invoke a web service. You can hide the web service calls behind an interface so you can mock that piece and test the business logic before and after the call, but the piece that actually invokes the web service cannot be unit tested (very well, anyway). Another example: if you need to connect to a TCP server, you can hide the connecting code behind an interface, but the code that physically connects cannot be unit tested, because if the server is down for any reason the unit test would fail, and unit tests should always pass, no matter when they are invoked.
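
A minimal sketch of that idea in JavaScript (the service, URL, and names are all invented): hide the call behind an object you can swap out, and unit test everything around it without touching the network.

    // Production implementation: the one piece that can't be unit tested
    // well, because it really performs network I/O.
    var httpWeatherService = {
        getTemperature: function (city, callback) {
            $.getJSON('/weather/' + city, function (data) {
                callback(data.temperature);
            });
        }
    };

    // Business logic depends only on the interface, not on the network.
    function describeWeather(service, city, callback) {
        service.getTemperature(city, function (temp) {
            callback(temp > 30 ? city + ' is hot' : city + ' is mild');
        });
    }

    // In a unit test, substitute a fake that always "succeeds":
    var fakeService = {
        getTemperature: function (city, callback) { callback(35); }
    };
    describeWeather(fakeService, 'Cairo', function (text) {
        console.log(text); // "Cairo is hot"
    });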

A good rule of thumb is that all of your business logic should have 100% code coverage, while the pieces that have to invoke external components should have as close to 100% coverage as possible. If you cannot reach it, I wouldn't sweat it too much.

Much more important: are the tests correct? Do they accurately reflect your business and its requirements? Having code coverage just to have code coverage doesn't mean anything if all you are doing is testing incorrectly, or testing incorrect code. That being said, if your tests are good, then having 92-95% coverage is outstanding.

bwalk2895
  • 1,988
5

I'd say that unless the code is designed with the specific goal of allowing 100% test coverage, 100% may not be achievable. One reason is that if you code defensively (which you should), you will sometimes have code that handles situations you are sure shouldn't, or can't, be happening given your knowledge of the system. Covering such code with tests is very hard by definition, while not having such code may be dangerous: what if you're wrong and this situation does happen one time out of 256? What if a change in an unrelated place makes the impossible thing possible?

So 100% may be rather hard to reach by "natural" means. For example, if you have code that allocates memory and code that checks whether the allocation failed, then unless you mock out the memory manager (which may not be easy) and write a test that returns "out of memory", covering that code might be difficult. For a JS application, it may be defensive coding around possible DOM quirks in different browsers, possible failures of external services, and so on.
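
A small JavaScript illustration of that kind of defensive branch (an invented example; showWarning is a hypothetical helper):

    function saveDraft(draft) {
        try {
            localStorage.setItem('draft', JSON.stringify(draft));
        } catch (e) {
            // Defensive: storage can be full or disabled (quota limits,
            // private browsing, browser quirks). A normal test run never
            // lands here, so this branch stays uncovered unless you can
            // mock localStorage out, which isn't easy in every environment.
            showWarning('Could not save draft: ' + e.message);
        }
    }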

So I would say one should strive to get as close to 100% as possible and have a good reason for the delta, but I would not see failing to hit exactly 100% as necessarily a failure. 95% can be fine on a big project, depending on what the missing 5% is.

StasM
  • 3,367
2

If you are starting out with a new project and you are strictly using a test-first methodology, then it is entirely reasonable to have 100% code coverage, in the sense that all of your code will be invoked at some point when your tests have been executed. You may not, however, have explicitly tested every individual method or algorithm directly due to method visibility, and in some cases you may not have tested some methods even indirectly.

Getting 100% of your code tested is potentially a costly exercise, particularly if you haven't designed your system to allow you to achieve this goal, and if you are focusing your design efforts on testability, you are probably not giving enough attention to designing your application to meet its specific requirements, particularly where the project is a large one. I'm sorry, but you simply can't have it both ways without something being compromised.

If you are introducing tests to an existing project where testing has not been maintained or included before, then it is effectively impossible to get 100% code coverage without the costs of the exercise outweighing the benefits. The best you can hope for is to provide test coverage for the critical sections of code that are called the most.

Is it reasonable to fail a sprint due to 100% coverage not being met when actual code coverage hovers around 92%-95% in JavaScript/jQuery?

In most cases I would say that you should only consider your sprint to have 'failed' if you haven't met your goals. Actually, I prefer not to think of such sprints as failing, because you need to be able to learn from sprints that don't meet expectations in order to get your planning right the next time you define one. Regardless, I don't think it's reasonable to consider code coverage a factor in the relative success of a sprint. Your aim should be to do just enough to get everything to work as specified, and if you are coding test-first, you should be able to feel confident that your tests support this aim. Any additional testing you feel you need to add is effectively sugar-coating, and thus an added expense that can hold you up in completing your sprints satisfactorily.

S.Robins
  • 11,505
1

I don't do this as a matter of course, but I have done it on two large projects. If you've got a framework for unit tests set up anyway, it's not hard exactly, but it does add up to a lot of tests.

Is there some particular obstacle you are encountering that is preventing you from hitting those last few lines? If not, and getting from 95% to 100% coverage is straightforward, you might as well go do it. Since you're here asking, I'm going to assume there is something. What is that something?

mjfgates
  • 2,064
1

Martin Fowler writes in his blog: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing."

However, there are even standards that mandate 100% coverage at the unit level. For example, it is one of the requirements in the standards of the European spaceflight community (ECSS, European Cooperation for Space Standardisation). The paper linked here tells an interesting story of a project that had the goal of reaching 100% test coverage in already completed software. It is based on interviews with the engineers who developed the unit tests.

Some of the lessons are:

  • 100% coverage is unusual but achievable
  • 100% coverage is sometimes necessary
  • 100% coverage brings in new risks
  • Don’t optimize for the 100%-metric
  • Develop a proper strategy to maximize coverage
  • 100% coverage is not a sufficient condition for good quality
0

92% is fine. I feel that the real questions are:

  • Is 92% the 'new' norm now? If the next sprint has 88% coverage, will that be OK? This is frequently the start of test suites being abandoned; one way to guard against the slide is a coverage gate, as shown in the sketch after this list.

  • How important is it that the software works and doesn't have bugs? You have tests for these reasons, not "for the sake of testing".

  • Is there a plan to go back and fill in the missing tests?

  • Why are you testing? It seems like the focus is the percentage of lines covered, not functionality.
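
On the first point, one way to keep 92% from quietly becoming 88% is to make the agreed floor fail the build. Here is a sketch of a karma.conf.js fragment, assuming the karma-coverage plugin and its threshold checking; adjust the numbers and setup to your project:

    // Fragment of karma.conf.js (a sketch, not a complete config).
    // If global coverage drops below the agreed floor, the run fails,
    // so the norm can't silently drift downward sprint after sprint.
    coverageReporter: {
        type: 'lcov',
        dir: 'coverage/',
        check: {
            global: { statements: 92, branches: 92, functions: 92, lines: 92 }
        }
    }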

0

Perhaps asking whether it is feasible and reasonable is not the most helpful question. Probably the most practical answer is the accepted one. I will analyze this on a more philosophical level.

100% coverage would be ideal, but ideally it would not be needed, or would be much easier to achieve. I prefer to think about whether it is natural and human rather than whether it is feasible or reasonable.

The act of programming correctly is next to impossible with today's tools. It is very difficult to write code that is totally correct and doesn't have bugs; it is just not natural. So, with no other obvious option, we turn to techniques like TDD and tracking code coverage. But as long as the end result is still an unnatural process, you will have a hard time getting people to do it consistently and happily.

Achieving 100% code coverage is an unnatural act. For most people, forcing them to achieve it would be a form of torture.

We need processes, tools, languages, and code that map to our natural mental models. If we fail to do this, there is no way to test quality into a product.

Just look at all the software out there today. Most of it messes up pretty regularly. We don't want to believe this. We want to believe our technology is magical and makes us happy. And so we choose to ignore, excuse, and forget most of the times our technology messes up. But if we take an honest appraisal of things, most of the software out there today is pretty crappy.

Here are a couple of efforts to make coding more natural:

https://github.com/jcoplien/trygve

https://github.com/still-dreaming-1/PurposefulPhp

The latter is extremely incomplete and experimental. Actually, it is a project I started, but I believe it would be a huge step forward for the craft of programming if I could ever put in the time to complete it. Basically, the idea is this: if contracts express the only aspects of a class's behavior that we care about, and we are already expressing contracts as code, why not have only the class and method definitions along with the contracts? That way the contracts would be the code, and we would not need to implement all the methods; the library would figure out how to honor the contracts for us.
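
To make the idea concrete, here is a hypothetical sketch (not that library's actual API, which does not exist yet) of contracts written as plain runnable assertions around a hand-written body; the proposal is essentially to keep the assertions and derive the body:

    // Design by contract, sketched with plain assertions.
    function withdraw(account, amount) {
        // Precondition: the part of the behavior we actually care about.
        console.assert(amount > 0 && amount <= account.balance,
            'amount must be positive and covered by the balance');
        var before = account.balance;
        account.balance -= amount;  // the hand-written implementation
        // Postcondition: what the method guarantees when it returns.
        console.assert(account.balance === before - amount,
            'balance must decrease by exactly the amount');
        return account.balance;
    }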

-2

Reaching 100% on new code should be very achievable, and if you're practicing TDD you'll likely hit it by default, since you're deliberately writing tests for every line of production code.

On existing legacy code that was written with no unit tests, it can be difficult: legacy code often wasn't written with unit testing in mind and can require a lot of refactoring. That level of refactoring often isn't practical given the realities of risk and schedule, so you make trade-offs.

On my team I specify 100% code coverage, and if we see less than that in the code review, the technical owner of the component discusses why 100% wasn't reached with the developer and must agree with the developer's reasoning. Often, if there's a problem hitting 100%, the developer will talk to the technical owner before the code review. We've found that once you get into the habit and learn techniques for working around several common issues with adding tests to legacy code, hitting 100% regularly is not as difficult as you'd initially think.
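
When the technical owner and developer agree a line genuinely can't be reached, one option (assuming an Istanbul-based coverage tool) is to record the exemption next to the code instead of lowering the team-wide bar:

    // Agreed in code review: window.close() can't be exercised from the
    // test runner, so the gap is made explicit and documented rather than
    // left as an anonymous hole in the coverage report.
    /* istanbul ignore next */
    function closePopup() {
        window.close();
    }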

Michael Feathers' book "Working Effectively with Legacy Code" has been invaluable to us in coming up with strategies for adding tests to our legacy code.

-3

No, it's not possible, and it never will be. If it were possible, all of mathematics would fall into finitism. For example, how would you test a function that takes two 64-bit integers and multiplies them? This has always been my problem with testing versus proving a program correct. For anything but the most trivial programs, testing is basically useless, as it only covers a small number of cases. It's like checking 1,000 numbers and saying you've proved the Goldbach conjecture.
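
The arithmetic behind that example, as a quick back-of-the-envelope sketch:

    // Two 64-bit inputs give 2^64 * 2^64 = 2^128 possible pairs.
    // Even at an optimistic billion tests per second:
    var pairs = Math.pow(2, 128);               // ~3.4e38 test cases
    var seconds = pairs / 1e9;                  // ~3.4e29 seconds
    var years = seconds / (60 * 60 * 24 * 365);
    console.log(years.toExponential(1));        // ~1.1e22 years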