143

I'm a solo developer in a pretty time-constrained work environment, where development time usually ranges from 1-4 weeks per project, depending on requirements, urgency, or both. At any given time I handle around 3-4 projects, some with timelines that overlap.

Expectedly, code quality suffers. I also do no formal testing; it usually comes down to walking through the system until it somewhat breaks. As a result, a considerable number of bugs escape to production, which I have to fix, and that in turn sets back my other projects.

This is where unit testing comes in. When done right, it should keep bugs, let alone bugs that escape to production, to a minimum. On the other hand, writing tests takes a considerable amount of time, which doesn't sound like a good fit for time-constrained projects such as mine.

The question is: how much of a time difference does writing unit-tested code make over writing untested code, and how does that time difference scale as project scope widens?

manlio
  • 4,256
Revenant
  • 1,425

14 Answers

161

The later you test, the more it costs to write tests.

The longer a bug lives, the more expensive it is to fix.

The law of diminishing returns ensures you can test yourself into oblivion trying to ensure there are no bugs.

Buddha taught the wisdom of the middle path. Tests are good. There is such a thing as too much of a good thing. The key is being able to tell when you are out of balance.

Every line of code you write without tests will cost significantly more to add tests to later than if you had written the tests before writing the code.

Every line of code without tests will be significantly more difficult to debug or rewrite.

Every test you write will take time.

Every bug will take time to fix.

The faithful will tell you not to write a single line of code without first writing a failing test. The test ensures you're getting the behavior you expect. It allows you to change the code quickly without worrying about affecting the rest of the system since the test proves the behavior is the same.

You must weigh all that against the fact that tests don't add features. Production code adds features. And features are what pay the bills.

Pragmatically speaking, I add all the tests I can get away with. I ignore comments in favor of watching tests. I don't even trust code to do what I think it does. I trust tests. But I've been known to throw the occasional hail mary and get lucky.

However, many successful coders don't do TDD. That doesn't mean they don't test. They just don't obsessively insist that every line of code have an automated test against it. Even Uncle Bob admits he doesn't test his UI. He also insists you move all logic out of the UI.

As a football metaphor (that's American football): TDD is a good ground game. Manual-only testing, where you write a pile of code and hope it works, is a passing game. You can be good at either. Your career isn't going to make the playoffs unless you can do both. It won't make the Super Bowl until you learn when to pick each one. But if you need a nudge in a particular direction: the officials' calls go against me more often when I'm passing.

If you want to give TDD a try, I highly recommend you practice before trying to do it at work. TDD done halfway, half-hearted, and half-assed is a big reason some don't respect it. It's like pouring one glass of water into another. If you don't commit and do it quickly and completely, you end up dribbling water all over the table.

candied_orange
  • 119,268
118

I agree with the rest of the answers, but to answer the "what is the time difference" question directly:

Roy Osherove, in his book The Art of Unit Testing, Second Edition (page 200), describes a case study of implementing similarly sized projects with similarly skilled teams for two different clients, where one team did testing while the other did not.

His results were as follows:

[Table: team progress and output measured with and without tests]

So by the end of a project you get both less time spent and fewer bugs. This of course depends on how big the project is.

Aki K
  • 1,143
32

There is only one study I know of which studied this in a "real-world setting": Realizing quality improvement through test driven development: results and experiences of four industrial teams. It is expensive to do this in a sensible way, since it basically means you need to develop the same software twice (or ideally even more often) with similar teams, and then throw all but one version away.

The results of the study were an increase in development time of between 15% and 35% (nowhere near the 2x figure often quoted by TDD critics) and a decrease in pre-release defect density of between 40% and 90%(!). Note that none of the teams had prior experience with TDD, so one could assume that the increase in time can at least partially be attributed to learning, and would thus go down even further over time, but this was not assessed by the study.

Note that this study is about TDD, and your question is about unit testing, which are very different things, but it is the closest I could find.

Jörg W Mittag
  • 104,619
25

Done well, developing with unit tests can be faster even without considering the benefit of the extra bugs being caught.

The fact is, I'm not a good enough coder to simply have my code work as soon as it compiles. When I write or modify code, I have to run it to make sure it does what I think it does. On one project, this tended to end up looking like:

  1. Modify code
  2. Compile application
  3. Run application
  4. Log into application
  5. Open a window
  6. Select an item from that window to open another window
  7. Set some controls in that window and click a button

And of course, after all that, it usually took a few round trips to actually get it right.

Now, what if I'm using unit tests? Then the process looks more like:

  1. Write a test
  2. Run tests, make sure it fails in the expected way
  3. Write code
  4. Run tests again, see that it passes

This is easier and faster than manually testing the application. I still have to manually run the application (so I don't look silly when I turn in work that doesn't actually work at all), but for the most part I've already worked out the kinks, and I'm just verifying at that point. I typically make this loop even tighter by using a program that automatically reruns my tests when I save.
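
To make the loop concrete, here is a minimal sketch of one such iteration using Google Test; the function under test, ApplyDiscount, is a hypothetical example, not from any real project:

    // One TDD iteration, sketched with Google Test (link against gtest_main).
    // ApplyDiscount is a hypothetical example function.
    #include <gtest/gtest.h>

    // Step 3: the production code, written only after the test below
    // had been seen to fail.
    double ApplyDiscount(double price, double percent) {
        return price * (1.0 - percent / 100.0);
    }

    // Steps 1-2: written first, and run to confirm it fails in the
    // expected way before the implementation exists.
    TEST(DiscountTest, TakesPercentageOffPrice) {
        EXPECT_DOUBLE_EQ(90.0, ApplyDiscount(100.0, 10.0));
    }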

However, this depends on working in a test-friendly code base. Many projects, even those with many tests, make writing tests difficult. But if you work at it, you can have a code base that's easier to test via automated tests than with manual testing. As a bonus, you can keep the automated tests around, and keep running them to prevent regressions.

Winston Ewert
  • 25,052
23

Despite there being a lot of answers already, they are somewhat repetitive, and I would like to take a different tack. Unit tests are valuable if, and only if, they increase business value. Testing for testing's sake (trivial or tautological tests), or to hit some arbitrary metric (like code coverage), is cargo-cult programming.

Tests are costly, not only in the time it takes to write them, but also in maintenance. They have to be kept in sync with the code they test or they're worthless. Not to mention the time cost of running them on every change. That's not a deal-breaker (or an excuse for skipping the truly necessary ones), but it needs to be factored into the cost-benefit analysis.

So when deciding whether or not (or with what kinds of tests) to test a function/method, ask yourself: "what end-user value am I creating or safeguarding with this test?" If you can't answer that question off the top of your head, then that test is likely not worth the cost of writing and maintaining it (or you don't understand the problem domain, which is a waaaay bigger problem than a lack of tests).

http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf

Jared Smith
  • 1,935
10

It depends on the person, as well as the complexity and shape of the code you're working with.

For me, on most projects, writing unit tests means I get the work done about 25% faster. Yes, even including the time to write the tests.

Because the fact of the matter is that software isn't done when you write the code. It is done when you ship it to the customer and they're happy with it. Unit tests are by far the most efficient way I know of to catch most bugs, isolate most bugs for debugging, and gain confidence that the code is good. You have to do those things anyway, so do them well.

Telastyn
  • 110,259
4

Some aspects to consider that are not mentioned in the other answers:

  • Extra benefit/extra cost depends on experience with writing unit tests:
    • On my first unit-tested project, the extra cost tripled because I had to learn a lot and made a lot of mistakes.
    • After 10 years of experience with TDD, I need about 25% more coding time to write the tests in advance.
  • Even with more TDD modules, there is still a need for manual GUI testing and integration testing.
  • TDD only works when done from the beginning.
    • Applying TDD to an existing, grown project is expensive/difficult. But you can implement regression tests instead.
  • Automated tests (unit tests and other kinds of tests) require maintenance cost to keep them working.
    • Tests created through copy & paste can make test-code maintenance expensive.
    • With growing experience, test code becomes more modular and easier to maintain.
  • With growing experience you will get a feeling for when it is worth creating automated tests and when it is not:
    • For example, there is no big benefit in unit testing simple getters/setters/wrappers.
    • I do not write automated tests via the GUI.
    • I take care that the business layer can be tested.

Summary

When starting with TDD, it is difficult to reach the "more benefit than cost" state as long as you are in a "time-constrained work environment", especially if there are "clever managers" who tell you to "get rid of the expensive, useless testing stuff".

Note: with "unit testing" i mean "testing moduls in isolation".

Note: with "regression testing" i mean

  • write some code that produces some output-text.
  • write some "regression testing" code that verifies that the result of the generation ist still the same.
  • the regression test let you know whenever the result changes (which might be ok or an indicator for a new bug)
  • the idea of "regression testing" is similar to approvaltests
    • ... taking a snapshot of the results, and confirming that they have not changed.
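
A minimal sketch of such a regression test, assuming Google Test; GenerateReport and its output are hypothetical stand-ins:

    // Regression/approval-style test: compare the generated output against
    // a previously approved snapshot. GenerateReport is a hypothetical
    // stand-in for the real output-producing code.
    #include <gtest/gtest.h>
    #include <string>

    std::string GenerateReport() {
        return "total: 42\nstatus: ok\n";  // stand-in for the real generator
    }

    TEST(ReportRegressionTest, OutputStillMatchesApprovedSnapshot) {
        // Update this snapshot deliberately when a change in output is intended.
        const std::string approved = "total: 42\nstatus: ok\n";
        EXPECT_EQ(approved, GenerateReport());
    }
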
k3b
  • 7,621
4

Programmers, like people dealing with most tasks, underestimate how long it actually takes to complete them. With that in mind, spending 10 minutes writing a test can be seen as time one could have spent writing tons of code, when in reality you would have spent that time coming up with the same function name and parameters you did during the test. This is a TDD scenario.

Not writing tests is a lot like having a credit card: we tend to spend more, or write more code. More code has more bugs.

Instead of deciding to have total code coverage or none at all, I suggest focusing on the critical and complicated parts of your application and having tests there. In a banking app, that might be the interest calculation. An engine diagnostic tool may have complex calibration protocols. If you've been working on a project, you probably know what those parts are and where the bugs are.
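
As a rough illustration of targeting one critical routine (ComputeInterest here is a hypothetical example, sketched with Google Test):

    // Focusing tests on a critical calculation, e.g. simple interest.
    // ComputeInterest is a hypothetical example of such a routine.
    #include <gtest/gtest.h>

    double ComputeInterest(double principal, double annualRate, int years) {
        return principal * annualRate * years;
    }

    TEST(InterestTest, AccruesLinearlyOverYears) {
        EXPECT_NEAR(150.0, ComputeInterest(1000.0, 0.05, 3), 1e-9);
    }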

Start slowly. Build some fluency before you judge. You can always stop.

JeffO
  • 36,956
4

The question is: how much of a time difference does writing unit-tested code make over writing untested code, and how does that time difference scale as project scope widens?

The problem gets worse as the age of the project increases: whenever you add new functionality and/or refactor the existing implementation, you ought to retest what was previously tested to ensure that it still works. So, for a long-lived (multi-year) project, you might need not only to test functionality but to re-test it 100 times or more. For this reason you might benefit from having automated tests. However, IMO it's good enough (or even better) if these are automated system tests, rather than automated unit tests.

A second problem is that bugs can be harder to find and fix if they're not caught early. For example, if there's a bug in the system and I know it was working perfectly before you made your latest change, then I'll concentrate my attention on your latest change to see how it might have introduced the bug. But if I don't know that the system was working before your latest change (because the system wasn't properly tested before it), then the bug could be anywhere.

The above applies especially to deep code, and less to shallow code e.g. adding new web pages where new pages are unlikely to affect existing pages.

As a result, a considerable number of bugs escape to production, which I have to fix, and that in turn sets back my other projects.

In my experience that would be unacceptable, and so you're asking the wrong question. Instead of asking whether tests would make development faster you ought to ask what would make development more bug-free.

A better question might be:

  • Is unit testing the right kind of testing to avoid the "considerable number of bugs" you've been producing?
  • Are there other quality control/improvement mechanisms (apart from unit-testing) to recommend as well or instead?

Learning is a two-stage process: learn to do it well enough, then learn to do that more quickly.

ChrisW
  • 3,427
3

There is a long history on this Programmers board of promoting TDD and other test methodologies. I won't rehash their arguments, and I agree with them, but here are some additional things to consider that should add a bit of nuance:

  • Testing isn't equally convenient and efficient in every context. I develop web software; tell me if you have a program to test the whole UI... Right now I'm programming Excel macros: should I really develop a test module in VBA?
  • Writing and maintaining the test software is real work that counts in the short run (it pays off in the longer run). Writing relevant tests is also an expertise to acquire.
  • Working in a team and working alone don't have the same testing requirements, because in a team you need to validate, understand, and communicate code you did not write.

I'd say testing is good, but make sure you test early and test where the gain is.

Diane M
  • 2,116
2

An oft-overlooked benefit of TDD is that the tests act as a safeguard to make sure you aren't introducing new bugs when you make a change.

The TDD approach is undoubtedly more time-consuming initially, but the takeaway point is that you'll write less code, which means fewer things to go wrong. All those bells and whistles you often include as a matter of course won't make it into the code base.

There's a scene in the film Swordfish where, if memory serves, a hacker has to work with a gun to his head while being, erm... otherwise distracted. The point is that it is a lot easier to work when your headspace is in the code and you have time on your side, rather than months down the line with a customer screaming at you and other priorities getting squeezed.

Developers understand that fixing bugs later is more costly, but flip that on its head. If you could be paid $500 a day to code how you code now, or $1000 if you wrote in a TDD way, you'd bite the hand off the person making the second offer. The sooner you stop seeing testing as a chore and see it as a money saver, the better off you'll be.

Robbie Dee
  • 9,823
2

I can relate to your experience - our code base had almost no tests and was mostly untestable. It took literally ages to develop anything, and fixing production bugs took precious time away from new features.

For a partial rewrite, I vowed to write tests for all core functionality. At the beginning, it took considerably longer and my productivity suffered noticeably, but afterwards my productivity was better than ever before.

Part of that improvement was that I had fewer production bugs, which in turn led to fewer interruptions, so I had better focus at any given time.

Furthermore, the ability to test AND debug code in isolation really pays off - a suite of tests is vastly superior to a system which cannot be debugged except with manual setup, e.g. launching your app, navigating to the right screen, and doing something... perhaps a few dozen times.

But notice that there is a drop in productivity at the beginning, so start learning testing on some project where the time pressure is not already insane. Also, try to start on a greenfield project; unit testing legacy code is very hard, and it helps when you know what a good test suite looks like.

1

Just to complement the previous answers: remember that testing is not a purpose in itself. The purpose of tests is to ensure your application behaves as expected as it evolves, in unexpected contexts, etc.

Therefore, writing tests does not mean proving all behaviors of all endpoints of an entity. This is a common error. A lot of developers think they need to test every function/object/method/property/etc. This leads to a high workload and a bunch of irrelevant code and tests. This approach is common in big projects, where most developers are not aware of the holistic behavior and can only see their own domain of interaction.

The right approach when dealing with scarce resources and testing is pretty obvious and common sense, but not commonly formalized: invest testing development resources first in high-level functionality, and gradually descend into specifics. This means that at some point, as a lone developer, you would focus not only on unit testing, but on functional/integration/etc. testing, and, depending on your time resources, gradually descend to the main unit-level functions as you plan and see fit. High-level testing will provide the necessary information to address low-level/unit testing and to plan your testing development strategy according to the resources you have.

For example, you might first test a processing chain as a black box. If you find that some member of the chain fails because its behavior didn't account for some extreme condition, you write the tests that guarantee the functionality, not only for that member but also for the others. Then you deliver. In the next cycle, you detect that sometimes the network fails, so you write tests that address that issue on the modules that could be vulnerable. And so on.
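
As a rough illustration of that first black-box cycle (sketched with Google Test; all names here are hypothetical placeholders, not a real pipeline):

    // Black-box test of a whole processing chain, before any unit tests
    // of its members. Parse/Transform/Render are hypothetical stages.
    #include <gtest/gtest.h>
    #include <string>

    std::string Parse(const std::string& raw) { return raw; }
    std::string Transform(const std::string& parsed) { return parsed; }
    std::string Render(const std::string& transformed) { return transformed; }

    std::string RunChain(const std::string& input) {
        return Render(Transform(Parse(input)));
    }

    // High-level behavior first: typical input through the whole chain.
    TEST(ChainBlackBoxTest, HandlesTypicalInput) {
        EXPECT_EQ("hello", RunChain("hello"));
    }

    // Added after discovering an extreme condition broke one member.
    TEST(ChainBlackBoxTest, HandlesEmptyInput) {
        EXPECT_EQ("", RunChain(""));
    }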

RodolfoAP
  • 256
0

The answer to the question is: Unit testing means less development time.

That's the wrong question of course.

Also, an answer to this question really requires an answer to "how do you get to the point where unit testing gives you benefits?" - I cover this in the "my story" portion of my answer.

Here's the real answer:

Unit testing creates a shorter development cycle; it eases refactoring, allowing better reuse of existing software; it allows unwieldy legacy codebases to be brought under control; and it enables the fast development of high-quality software.

It should be a requirement of all investors in public companies that any software in which their company invests time and money include unit tests, because it improves the quality of those investments, which translates to greater profitability.

Stack Exchange wanted to know if I am contributing something new to the conversation.

Yes, I am.

Early testing - which can only mean unit testing - solves many of the problems that plague software management, far beyond the tests themselves.

The excuse that "we have a final test where we check the whole system" is inexcusable and I would fire anyone who worked for me who said that. It's mathematically provable (not here though) that that statement is objectively negligent.

There are studies of software development showing that a bug that makes it to production costs something like 20 times what a bug caught in design costs.


So my story

I worked on a code base that was millions of LOC.

It was well organized, thankfully.

But it had... zero automated tests, though lots of manual ones.

Problem: The knowledge of how to do the manual tests was in the heads of engineers, and engineers leave and retire and die. You can't have your IP in the head of an engineer, but SO MANY organizations do.

Problem: There were features of this system that were not commonly used, so it took a while to remember how they were supposed to behave.

Problem: The path for data from acquisition to presentation to end user was convoluted and there were multiple binaries making unpredictable changes to the data, based on other external inputs controlled by other entities.

Problem: The entire system consisting of some 10s of binaries - maybe 40 - had to be run in order to exercise any portion of the system, due to the required interactions.

Problem: Every change to the system required a day of testing. Our dev cycle made us afraid to change anything that wasn't absolutely necessary. Consequently we didn't remove dead code, we didn't improve and refactor, and we built up technical debt to unsustainable levels.

Problem: It was considered impossible to build automated tests. Those who believed this will be proven absolutely wrong by the facts.


Time to problem solve.

Problem: The entire system consisting of some 10s of binaries - maybe 40 - had to be run in order to exercise any portion of the system, due to the required interactions. This sort of test took up to a day due to the nature of the processing we were doing.

Problem: We have so much to do, how can we take the time to set up the testing environment?

I spent maybe 16 hours researching Google Test. I downloaded the gtest library, and it took about 2 hours to figure out how to install it, write my first test, and include it in our build system. Now I was able to test functions without running the whole system.
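
For anyone who hasn't seen one, a first gtest test is roughly this small (Clamp is a hypothetical stand-in, not a function from that codebase):

    // A minimal first Google Test; build against the gtest library and
    // link gtest_main, which supplies main().
    // Clamp is a hypothetical stand-in for a real leaf function.
    #include <gtest/gtest.h>

    int Clamp(int value, int lo, int hi) {
        return value < lo ? lo : (value > hi ? hi : value);
    }

    TEST(ClampTest, StaysWithinBounds) {
        EXPECT_EQ(5, Clamp(5, 0, 10));
        EXPECT_EQ(0, Clamp(-3, 0, 10));
        EXPECT_EQ(10, Clamp(42, 0, 10));
    }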

2 Problems solved. 18 hours.

So the belief that it was "impossible" was wrong. It was possible, if you didn't incorrectly constrain the question to "is it possible to get every single function under a unit test regime".

I just said, "of course it's possible" and I did it.

Then I found places where not only could I write tests, I could use the library to debug problems and speed up my productivity immediately. I had 2 hours back within the day, and in fact by the end of that week I had recovered all 18 hours invested and was up about an hour in productivity, and every week I added 18 hours to that.

Multiply that by my hourly rate and that's how much money I saved the company.

Now I was able to debug more quickly - instead of putting in logging and running the whole system and looking at log files, I just had to write a 10-line test and let the framework handle the details. The test ran in 2 minutes.

Problem: I have existing functions, this isn't greenfield development - if I have to change an existing function how do I make sure I didn't break it?

Problem: Every change to the system requires a day of testing. Our dev cycle makes us afraid to change anything that's not absolutely necessary. Consequently we don't remove dead code, we don't improve and refactor, and we are building up technical debt to unsustainable levels.

With tests I was able to refactor more confidently - I just wrote tests for the existing function, refactored it, and made sure the refactored function passed the tests. The tests run as part of the build, I remove all dead code, I improve and refactor at will, and technical debt never has a chance to build up.
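
Sketched out, that workflow is: pin the current behavior, then refactor against the pin (LegacyParsePrice is a hypothetical stand-in):

    // Characterization test: capture what the existing function does today,
    // bugs and all, before touching it. LegacyParsePrice is hypothetical.
    #include <gtest/gtest.h>
    #include <string>

    int LegacyParsePrice(const std::string& dollars) {
        return std::stoi(dollars) * 100;  // existing behavior: dollars to cents
    }

    TEST(LegacyParsePriceTest, PinsCurrentBehavior) {
        EXPECT_EQ(100, LegacyParsePrice("1"));
        EXPECT_EQ(4200, LegacyParsePrice("42"));
    }
    // Now refactor the function at will; as long as these pass, the
    // observable behavior is unchanged.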

2 Problems solved. Hundreds of thousands of dollars saved.

Problem: How do I test complex functions?

Once I had most of my "leaf-node" functions under test (the ones that didn't require much refactoring to test and that did not use other functions), I looked at the more complex ones. I picked a library that mostly depended on my leaf-node functions, wrote as many tests as I could for it as-is, and then refactored and manually tested the functions that were not easily automated. I used mocking and stubbing wherever it helped, and soon I had a complex library refactored, simplified, and testable.
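
As a sketch of what such mocking can look like, using Google Mock, which ships alongside gtest (the Clock interface and AgeSeconds are hypothetical examples):

    // Mocking a dependency so the function under test runs in isolation.
    // Clock and AgeSeconds are hypothetical; Google Mock is real.
    #include <gmock/gmock.h>
    #include <gtest/gtest.h>

    class Clock {
    public:
        virtual ~Clock() = default;
        virtual int NowSeconds() const = 0;
    };

    // The code under test depends only on the Clock interface.
    int AgeSeconds(const Clock& clock, int createdAtSeconds) {
        return clock.NowSeconds() - createdAtSeconds;
    }

    class MockClock : public Clock {
    public:
        MOCK_METHOD(int, NowSeconds, (), (const, override));
    };

    TEST(AgeSecondsTest, SubtractsCreationTime) {
        MockClock clock;
        EXPECT_CALL(clock, NowSeconds()).WillOnce(testing::Return(100));
        EXPECT_EQ(70, AgeSeconds(clock, 30));
    }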

Problem solved.

Problem: How do you get most of the codebase under test?

I continued working through the codebase like that, making sure each investment in test effort returned a benefit immediately.

Problem solved.

Problem:

The knowledge of how to do the manual tests was in the heads of engineers.

Problem: There were features that were not commonly used, so it took a while to remember how they were supposed to behave.

Now I was able to reproduce tests accurately, and for years to come those tests will be available for others to use. The knowledge of the expected behavior is out of the engineers' heads and is now written down as code in a repository. The tests themselves were a few lines of code, so I could put in lots of documentation about how the tests worked.

2 Problems solved.

My conclusion:

Testing (with a good library like gtest) does not cost anything or take more time. It is not only free; it has a huge ROI on that non-investment.
It costs more and is slower to not unit test.

This is a solved problem, folks; let's get to the hard ones, please.