15

I would like to know what the overall impact of resource planning on a software project is, where the requirements and design of the project are driven by automated acceptance tests and unit tests, in contrast to a more "traditional" approach to software development.


What, in your experience, is the overall effect on resource requirements for completing a software project under TDD, as opposed to more "traditional" development methodologies? It seems self-evident to me that quality would increase and uncertainty would decrease, because testing is done earlier; but requiring tests up front seems like it would require more developer hours. How much does the development effort increase, or does it actually decrease due to the up-front elimination of bugs?

How much more effort is required from the customer? Do they have to change the way they relate to the project, especially if they are used to big design up front? Does the number of hours required of the customer overall increase, or does it actually decrease?

I would imagine that time estimates would be very vague at the beginning of an iterative TDD project (since there is no Software Development Plan). Is there a point, say, 20% into a project, where confidence increases enough that a more or less stable time and money estimate can be provided to the customer?

Note: I'm not looking for subjective opinions or theories here, so please don't speculate. I'm looking more for real-world experience in TDD.

Robert Harvey

5 Answers

11

The first thing that needs to be stated is that TDD does not necessarily increase the quality of the software (from the user's point of view). It is not a silver bullet. It is not a panacea. Decreasing the number of bugs is not why we do TDD.

TDD is done primarily because it results in better code. More specifically, TDD results in code that is easier to change.

Whether or not you wish to use TDD depends more on your goals for the project. Is this going to be a short term consulting project? Are you required to support the project after go-live? Is it a trivial project? The added overhead may not be worth it in these cases.

However, it is my experience that the value proposition for TDD grows exponentially as the time and resources involved in a project grow linearly.

Good unit tests give the following advantages:

  1. Unit tests warn developers of unintended side-effects.
  2. Unit tests allow for rapid development of new functionality on old, mature systems.
  3. Unit tests give new developers a faster and more accurate understanding of the code.
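As a minimal sketch of the first point, here is a hypothetical unit test (the function, names, and values are illustrative, not from the answer) that would warn a developer if a refactor silently changed behaviour:

```python
import unittest

# Hypothetical pricing function; a later "optimization" might change its rounding.
def total_price(unit_price, quantity, tax_rate=0.08):
    return round(unit_price * quantity * (1 + tax_rate), 2)

class TotalPriceTest(unittest.TestCase):
    def test_rounding_is_stable(self):
        # If a refactor changes the rounding behaviour, this fails immediately,
        # warning the developer of the unintended side-effect.
        self.assertEqual(total_price(19.99, 3), 64.77)
```

Run with `python -m unittest`. A suite of such tests is what makes points 2 and 3 possible as well: both new functionality and new developers get instant feedback against the existing behaviour.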

A side effect of TDD might be fewer bugs, but unfortunately it is my experience that most bugs (particularly the nastiest ones) are caused by unclear or poor requirements, or would not necessarily be covered by the first round of unit testing anyway.

To summarise:

Development on version 1 might be slower. Development on version 2-10 will be faster.

Stephen
6

There's a chapter in Making Software about Test-Driven Development, which cites the paper discussed here.

Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

Whether these results are generalisable to your case is, of course, something that proponents of TDD will argue is obvious and detractors of TDD will argue is untrue.

5

I don't have any research papers or statistics to give you, but I'll relate my experience from working in a team/organization that historically had low-to-average unit test coverage and no end-to-end tests, and gradually moving the bar to where we are now, with more of an ATDD (but, ironically, not traditional TDD) approach.

Specifically, this is how project timelines used to play out (and still play out on other teams/products in the same organization):

  • Up to 4 weeks of analysis and implementation
  • 2 weeks of regression testing, bug fixing, stabilization, and release prep
  • 1-2 weeks of fixing known defects
  • 2-3 weeks of code cleanup and post-production issues/support (unknown defects/unplanned outages)

This seems like ridiculous overhead, but it's actually very common; it's just often masked in many organizations by missing or ineffectual QA. We have good testers and a culture of intensive testing, so these issues are caught early and fixed up front (most of the time), rather than being allowed to play out slowly over the course of many months or years. 55-65% maintenance overhead is lower than the commonly-accepted norm of 80% of the time being spent on debugging, which seems reasonable, because we did have some unit tests and cross-functional teams (including QA).

During our team's first release of our latest product, we had started retrofitting acceptance tests but they weren't quite up to snuff and we still had to rely on a lot of manual testing. The release was somewhat less painful than others, IMO partly because of our haphazard acceptance tests and also partly because of our very high unit test coverage relative to other projects. Still, we spent nearly 2 weeks on regression/stabilization and 2 weeks on post-production issues.

By contrast, every release since that initial release has had early acceptance criteria and acceptance tests, and our current iterations look like this:

  • 8 days of analysis and implementation
  • 2 days of stabilization
  • 0-2 combined days of post-production support and cleanup

In other words, we progressed from 55-65% maintenance overhead to 20-30% maintenance overhead. Same team, same product, main difference being the progressive improvement and streamlining of our acceptance tests.

The cost of maintaining them is, per sprint, 3-5 days for a QA analyst and 1-2 days for a developer. Our team has 4 developers and 2 QA analysts, so (not counting UX, project management, etc.) that's a maximum of 7 man-days out of 60, which I'll round up to a 15% implementation overhead just to be on the safe side.

In short: we spend 15% of each release period developing automated acceptance tests, and in exchange are able to cut the roughly 70% of each release previously spent running regression tests and fixing pre-production and post-production bugs.

You might have noticed that the second timeline is much more precise and also much shorter than the first. That's something that was made possible by the up-front acceptance criteria and acceptance tests, because it vastly simplifies the "definition of done" and allows us to be much more confident in the stability of a release. No other teams have (so far) succeeded with a bi-weekly release schedule, except perhaps when doing fairly trivial maintenance releases (bugfix-only, etc.).

Another interesting side-effect is that we've been able to adapt our release schedule to business needs. One time, we had to lengthen it to about 3 weeks to coincide with another release, and were able to do so while delivering more functionality but without spending any extra time on testing or stabilization. Another time, we had to shorten it to about 1½ weeks, due to holidays and resource conflicts; we had to take on less dev work, but, as expected, were able to spend correspondingly less time on testing and stabilization without introducing any new defects.

So in my experience, acceptance tests, especially when done very early in a project or sprint, and when well-maintained with acceptance criteria written by the Product Owner, are one of the best investments you can make. Unlike traditional TDD, which other people correctly point out is focused more on creating testable code than defect-free code - ATDD really does help catch defects a lot faster; it's the organizational equivalent of having an army of testers doing a complete regression test every day, but way cheaper.
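As a rough sketch of what one such acceptance test might look like, the checkout domain and all names below are hypothetical, not the product described above. The Given/When/Then criterion written by the Product Owner drives the test directly:

```python
# Acceptance criterion (as a Product Owner might write it):
#   "Given a cart with two items, when the user checks out,
#    then an order is created and the cart is emptied."

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def checkout(cart):
    # Creates an order from the cart's contents and empties the cart.
    order = {"items": list(cart.items), "status": "created"}
    cart.items.clear()
    return order

def test_checkout_creates_order_and_empties_cart():
    cart = Cart()                          # Given a cart with two items
    cart.add("book")
    cart.add("pen")
    order = checkout(cart)                 # When the user checks out
    assert order["status"] == "created"    # Then an order is created
    assert len(order["items"]) == 2
    assert cart.items == []                # ...and the cart is emptied
```

Unlike a unit test, this exercises a whole user-visible behaviour end to end, which is why a suite of them can stand in for a manual regression pass.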

Will ATDD help you in longer-term projects done in RUP or (ugh) Waterfall style, projects lasting 3 months or more? I think the jury's still out on that one. In my experience, the biggest and ugliest risks in long-running projects are unrealistic deadlines and changing requirements. Unrealistic deadlines will cause people to take shortcuts, including testing shortcuts, and significant changes to requirements will likely invalidate a large number of tests, requiring them to be rewritten and potentially inflating the implementation overhead.

I'm pretty sure that ATDD has a fantastic payoff for Agile models, or for teams that aren't officially Agile but have very frequent release schedules. I've never tried it on a long-term project, mainly because I've never been in or even heard of an organization willing to try it on that kind of a project, so insert the standard disclaimer here. YMMV and all that.

P.S. In our case, there is no extra effort required from the "customer", but we have a dedicated, full-time Product Owner who actually writes the acceptance criteria. If you're in the "consultingware" business, I suspect it could be a lot more difficult to get the end users to write useful acceptance criteria. A Product Owner/Product Manager seems like a pretty essential element in order to do ATDD and although I can once again only speak from my own experience, I've never heard of ATDD being successfully practiced without someone to fulfill that role.

Aaronaught
1

Resource Requirements

What, in your experience, is the overall effect on resource requirements for completing a software project under TDD, as opposed to more "traditional" development methodologies?

In my experience, the cost of requiring up-front tests is immediately mitigated by defining clear acceptance criteria up front and then writing to those tests. Not only is the cost of the up-front testing mitigated; I've also found it generally speeds up overall development, although those speed improvements may be wiped out by poor project definition or changing requirements. Even then, we are still able to respond quite well to those kinds of changes without severe impact. ATDD also significantly reduces the developer effort needed to verify correct system behaviour, through its automated test suite, in the following cases:

  • large refactors
  • platform/package upgrades
  • platform migration
  • toolchain upgrades

This is assuming a team who is familiar with the process and practices involved.

Customer Involvement

How much more effort is required from the customer?

They have to be much more involved on an ongoing basis. I've seen a huge reduction in the up-front time investment, but a much greater ongoing demand. I haven't measured it, but I'm fairly certain it is a larger overall time investment for the customer.

However, I've found the customer relationship greatly improves after five or so demos, where they see their software slowly take shape. The time commitment from the customer decreases somewhat over time as a rapport develops and everyone gets used to the process and the expectations involved.

Project Estimation

I would imagine that time estimates would be very vague in an iterative TDD process at the beginning of a TDD project (since there is no Software Development Plan).

I have found that this is usually a question of how well defined the ask is, and whether the technical lead(s) are able to card out the project (including card estimation). Assuming the project is well carded and you have a reasonable velocity average and standard deviation, we've found it easy to get a decent estimate. Obviously, the larger the project, the more uncertainty there is, which is why I generally break a large project into a small project with a promise to continue later. This is much easier to do once you've established a rapport with the customer.

For example:

My team's "sprints" are a week long, and we keep a running average and std. deviation over the last 14 weeks. If the project is 120 points, our mean velocity is 25, and our std. deviation is 6, then the estimated completion time is:

Project Total / (Mean Velocity - (2 * Std. Deviation)) = 95% Time Estimate
120           / (25            - (2 * 6))              = 9.2 weeks

We use the 2 Std. Deviation rule of thumb for our 95% confidence estimate. In practice we usually complete the project under the first std. deviation, but over our mean. This is usually due to refinements, changes, etc.
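The estimate above can be expressed as a one-line calculation; here it is as a small Python sketch using the numbers from the example (the function name is mine, not part of the answer):

```python
def estimate_weeks(total_points, mean_velocity, std_dev, k=2):
    """Conservative completion estimate: assume the team runs
    k standard deviations below its mean velocity."""
    return total_points / (mean_velocity - k * std_dev)

# 120 points, mean velocity 25, std. deviation 6 -> 120 / 13
print(round(estimate_weeks(120, 25, 6), 1))  # 9.2 weeks
```

Note the estimate blows up as `k * std_dev` approaches the mean velocity, which matches intuition: a highly variable team can't commit to a 95% date at all.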

-1

requiring tests up front seems like it would require more developer hours to accomplish. How much does the development effort increase, or does it actually decrease due to the up-front elimination of bugs?

This is actually not true. If your developers are writing unit tests anyway (and they should be), then the time should be approximately the same, or better. I say better because your code will be completely tested, and developers will have to write only the code that fulfils the requirements.

The problem is that developers tend to implement even things that are not required, in an attempt to make the software as generic as possible.

How much more effort is required from the customer? Do they have to change the way they relate to the project, especially if they are used to big design up front? Does the number of hours required of the customer overall increase, or does it actually decrease?

That shouldn't matter. Whoever writes the requirements should do so as well as possible.

Developing in an agile way does not imply big design up front. But the better the requirements, architecture, and design are done, the higher the code quality will be, and the less time it will take to finish the software.

Therefore, if they like to do BDUF, let them do it. It will make your life easier as a developer.

BЈовић