17

This might be a rather silly question, as I am making my first attempts at TDD. I love the sense of confidence it brings and the generally better structure of my code, but when I started applying it to something bigger than one-class toy examples, I ran into difficulties.

Suppose you are writing a library of sorts. You know what it has to do, and you have a general idea of how it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to turn this private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and have to split an existing class.
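To make the scenario concrete, here is a hypothetical Python sketch of that kind of "discovered" refactoring: a hard-coded private step is extracted into an injected strategy, and the tests now pass a stub strategy instead. All names (`Report`, `PlainTextStrategy`, `StubStrategy`) are invented for illustration.

```python
class Report:
    def __init__(self, strategy):
        # Formerly a private _format() method; now the formatting
        # behaviour is injected, so tests must supply a strategy.
        self._strategy = strategy

    def render(self, items):
        return self._strategy.format(items)


class PlainTextStrategy:
    def format(self, items):
        return "\n".join(str(i) for i in items)


# In a test, a stub strategy stands in for the real implementation:
class StubStrategy:
    def format(self, items):
        return "stubbed"


assert Report(StubStrategy()).render([1, 2]) == "stubbed"
assert Report(PlainTextStrategy()).render([1, 2]) == "1\n2"
```

Every existing test that constructed a `Report` directly now has to be updated to supply a strategy, which is exactly the ripple effect the question describes.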

When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method whose signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.

7 Answers

18

What you call "big design up front" I call "sensible planning of your class architecture."

You can't grow an architecture from unit tests. Even Uncle Bob says that.

If you're not thinking through the architecture, if what you're doing instead is ignoring architecture and throwing tests together and getting them to pass, you're destroying the thing that will allow the building to stay up because it's the concentration on the structure of the system and solid design decisions that helped the system maintain its structural integrity.

https://hanselminutes.com/171/return-of-uncle-bob#

I think it would be more sensible to approach TDD from a perspective of validating your structural design. How do you know the design is incorrect if you don't test it? And how do you verify that your changes are correct without also changing the original tests?

Software is "soft" precisely because it is subject to change. If you are uncomfortable about the amount of change, continue to gain experience in architectural design, and the number of changes you will need to make to your application architectures will decrease over time.

Robert Harvey
  • 200,592
3

If you do TDD, you can't change the signature and behaviour without being driven by tests. So the 30 tests that fail were either deleted in the process or changed/refactored along with the code, or they are now obsolete and safe to delete.

You can't ignore going red 30 times in your red-green-refactor cycle.

Your tests should be refactored alongside your production code. If you can afford to, re-run all tests after each change.

Don't be afraid to delete TDD tests. Some tests end up testing building blocks used to get to a desired outcome. The desired outcome at a functional level is what counts. Tests around intermediate steps in the algorithm you chose/invented may or may not be of much value when there is more than one way to reach the outcome, or when you initially ran into a dead end.

Sometimes you can create some decent integration tests, keep those and delete the rest. It somewhat depends whether you work inside out or top down and how large steps you take.

Joppe
  • 4,616
1

As Robert Harvey just said, you are probably trying to use TDD for something that should be handled by a different conceptual tool (that is: "design" or "modelling").

Try to design (or "model") your system in a quite abstract ("general", "vague") way. For example, if you have to model a car, just have a Car class with some vague methods and fields, like startEngine() and int seats. That is: describe what you want to expose to the public, not how you want to implement it. Try to expose just basic functionalities (read, write, start, stop, etc.) and leave the client code to elaborate on them (prepareMyScene(), killTheEnemy(), etc.).

Write down your tests assuming this simple public interface.
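As a rough Python sketch of the car example above (the class and method names follow the answer's own illustration; the body is invented), the tests touch only the deliberately vague public surface:

```python
class Car:
    """Deliberately vague public surface: describes what, not how."""

    def __init__(self, seats: int):
        self.seats = seats
        self._running = False  # internal detail, free to change later

    def start_engine(self):
        self._running = True

    def is_running(self) -> bool:
        return self._running


# Tests written against the public interface only:
car = Car(seats=4)
assert car.seats == 4
assert not car.is_running()
car.start_engine()
assert car.is_running()
```

Because the tests never reach into `_running` or any other internal, the implementation behind `start_engine()` can be rewritten freely without touching the suite.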

Change the internal behaviour of your classes and methods whenever you need it.

If and when you need to change your public interface and your test suite, stop and think. Most likely this is a sign that there is something wrong in your API and in your design/modelling.

It is not unusual to change an API. Most systems at their 1.0 version explicitly warn the programmers/users against possible changes in their API. Despite this, a continuous, uncontrolled flow of API changes is a clear sign of bad (or totally missing) design.

BTW: you should usually have just a handful of tests per method. A method, by definition, should implement a clearly defined "action" on some kind of data. In a perfect world, this would be a single action corresponding to a single test. In the real world it is not unusual (and not wrong) to have a few different "versions" of the same action and a few corresponding tests. For sure, you should avoid having 30 tests on the same method. That is a clear sign that the method tries to do too much (and that its internal code has grown out of control).

AlexBottoni
  • 1,531
1

Some good answers here already, but I think there is one thing missing. You asked

What do you do when you already have 30 tests on the method that had its signature (and for that part, behavior) changed?

and I actually don't find it very uncommon to have 30 different tests on one method; I can think of several methods which deserve that many tests. However, if you really have to change such a method at that point in time, I see two possibilities:

  • The 30 existing tests actually prove that the current signature / public API was quite useful. So if you think you need to change it, you may consider changing it in a backwards-compatible manner.

  • The 30 tests should be refactored to make fewer direct calls to the method under test. Calling the same method over and over again in a similar manner is often a sign of duplicate code which can be extracted into a common method. That way, you work towards a situation where you can change the signature more easily.
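The second bullet can be sketched in a few lines of Python (the `parse` function and `make_parsed` helper are purely hypothetical): the tests funnel all calls through one helper, so a later signature change touches a single place instead of 30.

```python
def parse(text, strict=True):
    """Method under test (hypothetical). Splits comma-separated input."""
    if strict and not text:
        raise ValueError("empty input")
    return text.split(",")


def make_parsed(text, **kwargs):
    """Test helper: the only place that calls parse() directly.
    If parse()'s signature changes, only this helper needs updating."""
    return parse(text, **kwargs)


assert make_parsed("a,b") == ["a", "b"]
assert make_parsed("", strict=False) == [""]
try:
    make_parsed("")
except ValueError:
    pass  # strict mode rejects empty input, as expected
else:
    raise AssertionError("expected ValueError")
```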

IMHO it is fully correct that you cannot "grow an architecture from unit tests", but that does not necessarily mean you have to do all the architectural design up front. You can still start with TDD very early, to verify your first design sketches, but you have to refactor your test code just like production code as soon as code duplication appears, so you don't paint yourself into a corner with those tests.

Doc Brown
  • 218,378
0

I look at it from the user's perspective. For example, if your API lets me create a Person object with name and age, there had better be a Person(string name, int age) constructor and accessor methods for name and age. It's simple to create test cases for new Persons with and without a name and age.
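A minimal Python sketch of that user's-perspective test (the answer's prose uses Java-style notation; the defaults chosen here are an assumption):

```python
class Person:
    def __init__(self, name="", age=0):
        self._name = name
        self._age = age

    def name(self):
        return self._name

    def age(self):
        return self._age


# Test cases for new Persons with and without a name and age:
assert Person("Ada", 36).name() == "Ada"
assert Person("Ada", 36).age() == 36
assert Person().name() == ""
assert Person().age() == 0
```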

doug

0

What do you do when you already have 30 tests on the method that had its signature (and for that part, behaviour) changed? That is a lot of tests to change once they add up.

If you follow TDD, no change happens in the production code that is not driven by a unit test. You write the test first, but nobody is forcing you to touch the existing method and its existing signature. You can write a new one and go through the whole red-to-green cycle.

Once done, you can mark the old code as deprecated (if backward compatibility is needed) or delete it along with all the old tests. Deleting all that code is considerably simpler and faster than changing it and, for some reason, more pleasant.

Laiv
  • 14,990
0

If I'm making changes to my public API and tests are all that break I'm a happy guy.

Changing a public API is painful exactly because it is widely depended on. If tests encourage you to sort this out early, before the pain leaks into the rest of the code base, be grateful.

The tests show you how the API may be used and call your attention to problems. They do this in a way that spares you having to think about the rest of the system at the same time.

This isn't to say that tests absolve you of the need to design. Rather they give you a chance to safely play with what you have designed before you attach it to something important.

candied_orange
  • 119,268