63

I’m asking this question regarding problems I have experienced during TDD projects. I have noticed the following challenges when creating unit tests.

  • Generating and maintaining mock data

It’s hard and unrealistic to maintain large mock data sets. It’s even harder when the database structure undergoes changes.

  • Testing GUI

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

  • Testing the business

My experience is that TDD works well if you limit it to simple business logic. However, complex business logic is hard to test, since the number of combinations of tests (the test space) is very large.

  • Contradiction in requirements

In reality it’s hard to capture all requirements during analysis and design. Often the requirements as noted lead to contradictions because the project is complex, and the contradiction is found late, during the implementation phase. TDD requires that the requirements are 100% correct. In such cases one could expect the conflicting requirements to be caught during the creation of the tests, but the problem is that this isn’t the case in complex scenarios.

I have read this question: Why does TDD work?

Does TDD really work for complex enterprise projects, or is it practically limited to certain types of project?

Amir Rezaei

15 Answers

59

It’s hard and unrealistic to maintain large mock data sets. It’s even harder when the database structure undergoes changes.

False.

Unit testing doesn't require "large" mock data. It requires enough mock data to test the scenarios and nothing more.

Also, the truly lazy programmers ask the subject matter experts to create simple spreadsheets of the various test cases. Just a simple spreadsheet.

Then the lazy programmer writes a simple script to transform the spreadsheet rows into unit test cases. It's pretty simple, really.

When the product evolves, the spreadsheets of test cases are updated and new unit tests are generated. I do it all the time. It really works.
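
The answer doesn’t show the script itself, but as a minimal sketch of the idea: JUnit 5 can consume a CSV export of such a spreadsheet directly, so each row the experts maintain becomes one test case. The DiscountRule class, the file name, and the column layout here are assumptions for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class DiscountRuleTest {

    // Each data row of the experts' spreadsheet (exported as
    // discount-cases.csv on the test classpath, header row skipped)
    // becomes one generated test case.
    @ParameterizedTest
    @CsvFileSource(resources = "/discount-cases.csv", numLinesToSkip = 1)
    void discountMatchesSpreadsheet(double orderTotal, String customerType, double expectedDiscount) {
        assertEquals(expectedDiscount, DiscountRule.discountFor(orderTotal, customerType), 0.001);
    }
}

When the spreadsheet is updated, rerunning the suite regenerates the checks without touching any test code.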

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

What? "Reproduce"?

The point of TDD is to design things for testability (Test-Driven Development). If the GUI is that complex, then it has to be redesigned to be simpler and more testable. Simpler also means faster, more maintainable and more flexible. But mostly, simpler will mean more testable.

My experience is that TDD works well if you limit it to simple business logic. However, complex business logic is hard to test, since the number of combinations of tests (the test space) is very large.

That can be true.

However, asking the subject matter experts to provide the core test cases in a simple form (like a spreadsheet) really helps.

The spreadsheets can become rather large. But that's okay, since I used a simple Python script to turn the spreadsheets into test cases.

And yes, I did have to write some test cases manually because the spreadsheets were incomplete.

However, when the users reported "bugs", I simply asked which test case in the spreadsheet was wrong.

At that moment, the subject matter experts would either correct the spreadsheet or they would add examples to explain what was supposed to happen. The bug reports can -- in many cases -- be clearly defined as a test case problem. Indeed, from my experience, defining the bug as a broken test case makes the discussion much, much simpler.

Rather than having the experts try to explain a super-complex business process, make them produce concrete examples of the process.

TDD requires that the requirements are 100% correct. In such cases one could expect the conflicting requirements to be caught during the creation of the tests, but the problem is that this isn’t the case in complex scenarios.

Not using TDD absolutely mandates that the requirements be 100% correct. Some claim that TDD can tolerate incomplete and changing requirements, where a non-TDD approach can't work with incomplete requirements.

If you don't use TDD, the contradiction is found late, during the implementation phase.

If you use TDD, the contradiction is found earlier, when the code passes some tests and fails others. Indeed, TDD gives you proof of a contradiction early in the process, long before the implementation is finished (and long before the arguments during user acceptance testing).

You have code which passes some tests and fails others. You look at only those tests and you find the contradiction. It works out really, really well in practice because now the users have to argue about the contradiction and produce consistent, concrete examples of the desired behavior.
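
As a minimal illustration of what that looks like in practice (the pricing rules, the Pricing class, and its method are invented for this sketch; the tests use JUnit 5):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PricingContradictionTest {

    // Requirement A: "orders over 100 always get a 10% discount".
    @Test
    void ordersOverOneHundredAreDiscounted() {
        assertEquals(180.0, Pricing.total(200.0, /* newCustomer */ true), 0.01);
    }

    // Requirement B: "new customers never get a discount".
    // Same input, different expected output: no implementation can pass
    // both tests, so the contradiction is exposed as soon as both
    // requirements have been turned into tests.
    @Test
    void newCustomersPayFullPrice() {
        assertEquals(200.0, Pricing.total(200.0, /* newCustomer */ true), 0.01);
    }
}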

S.Lott
34

Yes

My first exposure to TDD was working on the middleware components for a Linux-based cell phone. That eventually wound up being millions of lines of source code, which in turn called into about 9 gigabytes of source code for various open-source components.

All component authors were expected to propose both an API and a set of unit tests, and have them design-reviewed by a peer committee. Nobody was expecting perfection in testing, but all publicly-exposed functions had to have at least one test, and once a component was submitted to source control, all unit tests had to always pass (even if they did so because the component was falsely reporting it worked okay).

No doubt due at least in part to TDD and an insistence that all unit tests always pass, the 1.0 release came in early, under budget, and with astonishing stability.

After the 1.0 release, because corporate wanted to be able to rapidly change scope due to customer demands, they told us to quit doing TDD, and removed the requirement that unit tests pass. It was astonishing how quickly quality went down the toilet, and then the schedule followed it.

Bob Murphy
20

I'd argue the more complex the project, the more benefit you get out of TDD. The main benefits are side effects of how TDD forces you to write the code in much smaller, much more independent chunks. Key benefits are:

a) You get much, much earlier validation of your design, because your feedback loop is much tighter thanks to having tests from the get-go.

b) You can change bits and pieces and see how the system reacts because you've been building a quilt of test coverage the whole time.

c) Finished code will be much better as a result.

Wyatt Barnett
11

Does TDD really work for complex projects?
Yes. Not every project, so I'm told, works well with TDD, but most business applications are fine, and I'd bet that the ones which do not work well when written in a pure TDD manner could be written in an ATDD way without major issues.

Generating and maintaining mock data
Keep it small and have only what you need, and this is not the scary issue it seems. Don't get me wrong, it is a pain. But it is worthwhile.

Testing GUI
Test the ViewModel and make sure it can be tested without the View. I've found this no harder than testing any other bit of business logic. I don't test the View in code; all you are testing at that point is binding logic, and one hopes binding mistakes will be caught quickly during a quick manual test.
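
A minimal sketch of that idea, written in Java to match the code later in this thread (all class and method names are invented): the ViewModel is a plain class, so its logic can be exercised with no View attached.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class CustomerViewModelTest {

    // No view is instantiated anywhere; the test drives the ViewModel
    // exactly the way the binding layer would.
    @Test
    void saveIsDisabledUntilANameIsEntered() {
        CustomerViewModel vm = new CustomerViewModel(new InMemoryCustomerRepository());

        assertFalse(vm.isSaveEnabled());

        vm.setName("Ada Lovelace");

        assertTrue(vm.isSaveEnabled());
    }
}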

Testing the business
I have not found this to be an issue: lots of small tests. As I said above, some cases (Sudoku puzzle solvers seem to be a popular example) are apparently difficult to do with TDD.

TDD requires that requirements are 100% correct
No, it does not. Where did you get this idea? All agile practices accept that requirements change. You do need to know what you are doing before you do it, but that is not the same as requiring the requirements to be 100% complete. TDD is a common practice in Scrum, where the requirements (user stories) are, by their very definition, not 100% complete.

mlk
10

First off, I believe your issue is more about unit testing in general than TDD, since I see nothing really TDD-specific (test-first + red-green-refactor cycle) in what you say.

It’s hard and unrealistic to maintain large mock data sets.

What do you mean by mock data? A mock is precisely supposed to contain barely any data, i.e. no fields other than the one or two needed in the test, and no dependencies other than the system under test. Setting up a mock expectation or return value can be done in one line, so there's nothing terrible there.
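
For instance, with a mocking library such as Mockito, the setup really is a single line per expectation (the repository and service classes here are invented for the sketch):

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class InvoiceServiceTest {

    @Test
    void negativeBalanceMeansOverdue() {
        CustomerRepository repo = mock(CustomerRepository.class);
        // The entire "mock data": one stubbed return value, nothing more.
        when(repo.findBalance("42")).thenReturn(-150.0);

        assertTrue(new InvoiceService(repo).isOverdue("42"));
    }
}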

It’s even harder when the database structure undergoes changes.

If you mean that the database undergoes changes without the proper modifications having been made to the object model, well, unit tests are precisely there to warn you of that. Otherwise, changes to the model obviously must be reflected in the unit tests, but with the compiler pointing out the breakages, it's an easy thing to do.

Even with MVVM and the ability to test the GUI, it takes a lot of code to reproduce a GUI scenario.

You're right, unit testing the GUI (the View) is not easy, and many people do well without it (besides, testing the GUI is not part of TDD). In contrast, unit testing your Controller/Presenter/ViewModel/whatever intermediate layer is highly recommended; in fact, it's one of the main reasons patterns such as MVC or MVVM exist.

My experience is that TDD works well if you limit it to simple business logic. However, complex business logic is hard to test, since the number of combinations of tests (the test space) is very large.

If your business logic is complex, it's normal that your unit tests are hard to design. It is up to you to make them as atomic as possible, each one testing only one responsibility of the object under test. Unit tests are all the more needed in a complex environment, because they provide a safety net guaranteeing that you don't break business rules or requirements as you make changes to the code.

TDD requires that the requirements are 100% correct.

Absolutely not. Successful software requires that the requirements are 100% correct ;) Unit tests just reflect what your vision of the requirements currently is; if the vision is flawed, your code and your software will be too, unit tests or not... And that's where unit tests shine: with explicit enough test titles, your design decisions and your interpretation of the requirements become transparent, which makes it easier to point your finger at what needs to change the next time your customer says, "This business rule is not quite as I'd like."

guillaume31
7

I gotta laugh when I hear someone complain that the reason they cannot use TDD to test their application is that their application is so complicated. What is the alternative? Have test monkeys pounding on acres of keyboards? Let the users be the testers? What else? Of course it is hard and complex. Do you think Intel doesn't test their chips before they ship? How "head-in-the-sand" is that?

5
> Does TDD really work for complex projects?

From my experience: yes for unit tests (tests of modules/features in isolation), because these mostly do not have the problems you mention (GUI, MVVM, business model). I have never needed more than 3 mocks/stubs to fulfill one unit test (but maybe your domain requires more).

However, I am not sure whether TDD can solve the problems you mention at the level of integration or end-to-end testing with BDD-style tests.

But at least some problems can be reduced.

> However, complex business logic is hard to test, since the number
> of combinations of tests (the test space) is very large.

This is true if you want complete coverage at the integration-test or end-to-end-test level. It might be easier to do the complete coverage at the unit-test level.

Example: Checking complex user permissions

Testing the function IsAllowedToEditCusterData() at the integration-test level would require asking different objects for information about the user, domain, customer, environment, and so on.

Mocking these parts is quite difficult. This is especially true if IsAllowedToEditCusterData() has to know all these different objects.

At the unit-test level you would have a function IsAllowedToEditCusterData() that takes, for example, 20 parameters containing everything the function needs to know. Since IsAllowedToEditCusterData() does not need to know what fields a user, a domain, a customer, and so on have, it is easy to test.

When I had to implement IsAllowedToEditCusterData(), I wrote two overloads of it:

One overload that does nothing more than gather those 20 parameters and then call the overload with the 20 parameters, which does the decision making.

(My IsAllowedToEditCusterData() had only 5 parameters, and I needed 32 different combinations to test it completely.)

Example

// Method used by the business logic.
// Hard to test directly, because you have to construct
// many dependent objects just for the test.
public boolean IsAllowedToEditCusterData() {
    Employee employee = getCurrentEmployee();
    Department employeeDepartment = employee.getDepartment();
    Customer customer = getCustomer();
    Shop shop = customer.getShop();

    // ... many more objects that the permissions depend on

    return IsAllowedToEditCusterData(
            employee.getAge(),
            employeeDepartment.getName(),
            shop.getName(),
            ...
        );
}

// Overload used by the JUnit tests.
// Much easier to test, because it takes only primitives
// and needs no internal state.
public static boolean IsAllowedToEditCusterData(
        int employeeAge,
        String employeeDepartmentName,
        String shopName,
        ... )
{
    boolean isAllowed = false;
    // decision logic goes here

    return isAllowed;
}
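
A sketch of how the primitive-only overload can then be covered combinatorially with JUnit's parameterized tests. The containing test class, the three-parameter signature, and the expected values are assumptions for illustration; the answer's real function took more parameters:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class IsAllowedToEditCusterDataTest {

    // One row per combination of inputs; with 5 parameters the answer
    // above needed 32 rows to cover the decision table completely.
    @ParameterizedTest
    @CsvSource({
        "17, Sales,   MainShop, false",   // under-age employee
        "30, Sales,   MainShop, true",
        "30, Storage, MainShop, false",   // wrong department
    })
    void matchesTheDecisionTable(
            int employeeAge, String departmentName, String shopName, boolean expected) {
        assertEquals(expected,
                PermissionRules.IsAllowedToEditCusterData(employeeAge, departmentName, shopName));
    }
}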
k3b
4

I've found TDD (and unit testing in general) to be virtually impossible for a related reason: Complex, novel, and/or fuzzy algorithms. The issue I run into most in the research prototypes I write is that I have no idea what the right answer is other than by running my code. It's too complicated to reasonably figure out by hand for anything but ridiculously trivial cases. This is especially true if the algorithm involves heuristics, approximations, or non-determinism. I still try to test the lower-level functionality that this code depends on and use asserts heavily as sanity checks. My last resort testing method is to write two different implementations, ideally in two different languages using two different sets of libraries and compare the results.
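
That last resort is essentially differential testing, and it can still be automated. A minimal sketch, where the two solver classes are placeholders for the independent implementations:

import static org.junit.jupiter.api.Assertions.assertArrayEquals;

import java.util.Random;

import org.junit.jupiter.api.Test;

class DifferentialTest {

    // No known-good answers exist, so the test only checks that two
    // independently written implementations agree on random inputs.
    @Test
    void fastImplementationAgreesWithNaiveReference() {
        Random rng = new Random(42); // fixed seed keeps failures reproducible
        for (int trial = 0; trial < 100; trial++) {
            double[] input = rng.doubles(1_000).toArray();
            assertArrayEquals(NaiveSolver.solve(input), FastSolver.solve(input), 1e-9);
        }
    }
}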

dsimcha
3

The sad answer is that nothing really works for large, complex projects!

TDD is as good as anything else and better than most, but TDD alone will not guarantee success in a large project. It will, however, increase your chances of success, especially when used in combination with other project-management disciplines (requirements verification, use cases, a requirements traceability matrix, code walkthroughs, etc.).

1

I think so; see Test Driven Development really works.

In 2008, Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams wrote a paper called “Realizing quality improvement through test driven development: results and experiences of four industrial teams“ (PDF link). The abstract:

Test-driven development (TDD) is a software development practice that has been used sporadically for decades. With this practice, a software engineer cycles minute-by-minute between writing failing unit tests and writing implementation code to pass those tests. Test-driven development has recently re-emerged as a critical enabling practice of agile software development methodologies. However, little empirical evidence supports or refutes the utility of this practice in an industrial context. Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD.

In 2012, Ruby on Rails development practices assume TDD. I personally rely on tools like RSpec for writing tests and mocks, factory_girl for creating objects, Capybara for browser automation, SimpleCov for code coverage, and Guard for automating these tests.

As a result of using this methodology and these tools, I tend to agree subjectively with Nagappan et al.

Hiltmon
1

I've seen a large, complex project fail completely when TDD was used exclusively, i.e. without anyone ever firing the application up in a debugger/IDE. The mock data and/or tests proved insufficient. The beta client's real data was sensitive and could not be copied or logged, so the dev team could never fix the fatal bugs that manifested when the application was pointed at real data, and the whole project got scrapped, everyone fired, the whole bit.

The way to have fixed this problem would have been to fire the app up in a debugger at the client site, live against the real data, and step through the code with breakpoints, watching variables and memory. However, this team, who thought their code fit to adorn the finest of ivory towers, had over a period of nearly one year never once fired up their app. That blew my mind.

So, like everything, balance is the key. TDD may be good but don't rely on it exclusively.

SPA
1

If the combination of budget, requirements, and team skills is in the quadrant of project-space labelled 'abandon hope all ye who enter here', then by definition it is overwhelmingly likely that the project will fail.

Perhaps the requirements are complex and volatile, the infrastructure is unstable, the team is junior with high turnover, or the architect is an idiot.

On a TDD project, the symptom of this impending failure is that tests cannot be written on schedule; you try, only to discover 'that's going to take this long, and we only have that'.

Other approaches will show different symptoms when they fail; most commonly delivery of a system that doesn't work. Politics and contracts will determine whether that is preferable.

soru
1

Remember that unit tests are enforced specifications. This is especially valuable in complex projects. If your old code base does not have any tests to back it up, no one will dare to change anything, because they will be afraid of breaking something.

"Wtf. Why is this code branch even there? Don't know, maybe someone needs it, better leave it there than to upset anyone..." Over time the complex projects becomes a garbage land.

With tests, anyone can confidently say, "I have made drastic changes, but all tests are still passing." By definition, they have not broken anything. This leads to more agile projects that can evolve. Maybe one of the reasons we still need people to maintain COBOL is that testing wasn't popular back then :P

kizzx2
1

Not directly tied to the question, but I think TDD in big projects finally clicked for me when I put together the refactor part with how you move the tests and make them work in the context of the code.

YOU DO NOT PUT ALL YOUR TESTS AT THE TOP

This seems obvious now, but god, was it stuck in my mind that you'd have something like 500 tests all trying to test through your logic 6 layers deep. I also thought that if you did move them, your tests would stop reflecting business requirements and just turn into functional tests, until I worked it through.

Say the business requests a WeatherForecast API with these requirements:

  • Return humidity, temperature and precipitation
  • Return weather class, e.g. cloudy, rainy, etc.
  • Location based
  • Go up to seven days in advance

Modelling this out, you might have the following layers of code:

  • Handlers
  • WeatherProvider
  • WeatherForecast
  • WeatherClassifier
  • Location

Now you would start with the WeatherForecast and add your first few tests to get the basic weather data. Get them running, refactor, and you're starting strong.

Carry on and add the weather classifier tests, get them passing, and refactor; part of that means moving the classifier to its own interface. Now, why would the classifier tests stay with WeatherForecast? This is the important part: they don't. Move them to the classifier layer, and update them if you need to, to match the new layer.

Rinse, repeat for all layers.

This way, if you end up with a bug that only occurs when the humidity, temperature, and precipitation are at certain levels, causing the classifier to go haywire, it's simple to add a new test with that scenario in the classifier; you don't have to fall through all the layers, and the tests still map to business requirements.
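
As a minimal sketch of such a classifier-level test (the class names, the classify signature, and the thresholds implied by the values are all invented):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class WeatherClassifierTest {

    // The scenario is expressed in the classifier's own terms: raw
    // readings in, weather class out. No handler or provider involved.
    @Test
    void highHumidityWithPrecipitationIsRainy() {
        WeatherClassifier classifier = new WeatherClassifier();

        assertEquals(WeatherClass.RAINY,
                classifier.classify(/* humidity */ 0.95, /* temperature */ 12.0, /* precipitationMm */ 8.0));
    }
}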

Now you may be thinking, "Well, where are my handler tests?", and this is a good observation. Do you have any requirements for the handler and how it should behave? No? Then you don't need to test it, or write any code, and no handler is required.

Obviously that's not really the case, but it highlights how TDD forces you to get good requirements. You would like requirements for your handlers: what to do if you get an error, what the response schema looks like, what format the data should be returned in. You might decide these yourself, but they're behavioural requirements, and you should treat them as such.

In the end, you have all your layers with their own tests describing the behaviour of that layer.

  • Your handler returns the weather data in JSON
  • Your WeatherForecast returns humidity, temperature, precipitation and weather class
  • Your WeatherClassifier calculates the type of weather based on the humidity, temperature and precipitation

And that's the beauty of TDD.

DubDub
-2

TDD might sound like a pain upfront, but in the long run it will be your best friend. Trust me, TDD will really make your application maintainable and secure.

Rachel