12

Here's one example: My web application contains draggable elements. When dragging an element, the browser produces a "ghost image". I want to remove the "ghost image" when dragging and I write a test for this behaviour.

My problem is that I initially have no idea how to fix this bug and the only way I can write a test is after I have fixed it.

For a simple function such as let sum = (a, b) => a - b, you can write a test before touching any code: assert that sum(1, 2) equals 3, watch it fail, then fix the implementation.

In the case I am describing, I can't test, because I don't know what the verification is (I don't know what the assertion should be).

A solution to the problem described is:

// In the dragstart handler: replace the browser's default ghost image
// with an invisible, off-screen canvas.
let dataTransfer = e.dataTransfer;
let canvas = document.createElement('canvas');
canvas.style.opacity = '0';
canvas.style.position = 'absolute';
canvas.style.top = '-1000px';
dataTransfer.effectAllowed = 'none';

document.body.appendChild(canvas);
dataTransfer.setDragImage(canvas, 0, 0);

I could not have known that this was the solution. I could not even have written the test after finding the solution online, because the only way I could have known if it really worked, was to add this code into my codebase and verify with the browser if it had the desired effect. The test had to be written after the code, which goes against TDD.

What would be the TDD approach to this problem? Is writing the test before the code mandatory or optional?


4 Answers

27

If I understood you correctly, you cannot write a reliable automated test for your "ghost image" example even after you have found a solution, since the only way of verifying the correct behaviour is to look at the screen and check that there is no ghost image any more. That gives me the impression your original title asked the wrong question. The real question should be

  • how to automatically test a certain behaviour of a graphical user interface?

And the answer is - for several kinds of UI issues, you don't. Sure, one can try to automate making the UI show the problem somehow, and try to implement something like a screenshot comparison, but this is often error-prone, brittle and not cost-effective.

In particular, "test driving" UI design or UI improvements by automated tests written in advance is practically impossible. You "drive" UI design by making an improvement, showing the result to a human (yourself, some testers or a user) and asking for feedback.

So accept the fact that TDD is not a silver bullet, and that for some kinds of issues manual testing still makes more sense than automated tests. If you have a systematic testing process, perhaps with some dedicated testers, the best thing you can do is to add this case to their test plan.

Doc Brown
  • 218,378
5

What would be the TDD approach to this problem? Is writing the test before the code mandatory or optional?

One way is to apply an analog of a spike solution.

James Shore described it this way:

We perform small, isolated experiments when we need more information.

The basic idea is that you put down the design tools while you are figuring out what is going on. Once you have your bearings, you pick up the design tools again.

The trick: you bring the knowledge from your investigation back into your production code base, but you don't bring the code. Instead, you recreate it while using your disciplined design techniques.

Horses for courses.

EDIT:

How can you automate a test if the defect can only be seen by a human eye?

I'd like to suggest a slightly different spelling: "How can you automate a test if automating the test isn't cost effective?"

The answer, of course, is "you don't". Instead, strive to separate your implementation into two parts - a large part where testing is cost effective, and a smaller part that is too simple to break.

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. -- C.A.R. Hoare

So when working with third-party code, we'll have a very thin shell of code that acts as a proxy for the third-party library. In tests, we replace that shell with a "test double" that verifies the protocol, without worrying about whether it produces the desired effects.

For testing our code's integration with the real third-party code, we rely on other techniques (visual verification, tech support calls, optimism...).
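As a sketch of that idea applied to the ghost-image example from the question (the function name suppressDragImage is mine, not from any library): the thin shell receives the document and the DataTransfer object as parameters, so a test can pass in doubles and verify the protocol without a browser.

```javascript
// A thin shell around the browser's drag-image API. It receives its
// collaborators as parameters, so tests can substitute doubles.
function suppressDragImage(doc, dataTransfer) {
  const canvas = doc.createElement('canvas');
  canvas.style.opacity = '0';
  canvas.style.position = 'absolute';
  canvas.style.top = '-1000px';
  doc.body.appendChild(canvas);
  dataTransfer.setDragImage(canvas, 0, 0);
  return canvas;
}

// Test doubles that only record how they were used.
const appended = [];
const fakeDoc = {
  createElement: (tag) => ({ tagName: tag, style: {} }),
  body: { appendChild: (el) => appended.push(el) },
};
const calls = [];
const fakeDataTransfer = {
  setDragImage: (img, x, y) => calls.push({ img, x, y }),
};

const canvas = suppressDragImage(fakeDoc, fakeDataTransfer);

// The test verifies the protocol only: an invisible canvas was attached
// to the body and handed to setDragImage at offset (0, 0).
console.assert(appended[0] === canvas, 'canvas appended to body');
console.assert(calls[0].img === canvas && calls[0].x === 0 && calls[0].y === 0,
  'setDragImage called with the canvas');
console.assert(canvas.style.opacity === '0', 'canvas is invisible');
```

The test cannot tell you that the ghost image is actually gone on screen - that part stays with the human eye - but it pins down the protocol, so a later refactoring cannot silently stop calling setDragImage.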

It can be useful to keep some demonstration artifacts around, so that you can check that your assumptions still hold when you upgrade the third party library.

VoiceOfUnreason
  • 34,589
0

What does a ghost image look like? Suppose you created a dummy UI in one known colour and placed your draggable component on it. Would a specific colour be present whenever there was a ghost image?

The test could then check for the absence of the ghost image's colour.

Such a test would be reasonably durable and doable.
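To make that concrete (a sketch under assumptions: in a real browser the pixel buffer would come from a screenshot or from getImageData() on a canvas; here it is hand-built test data), the check reduces to scanning a flat RGBA buffer for the ghost image's known colour:

```javascript
// Returns true if the flat RGBA pixel buffer contains the given colour.
// `pixels` is laid out as [r, g, b, a, r, g, b, a, ...], the same layout
// as getImageData().data in a real browser.
function containsColor(pixels, [r, g, b]) {
  for (let i = 0; i < pixels.length; i += 4) {
    if (pixels[i] === r && pixels[i + 1] === g && pixels[i + 2] === b) {
      return true;
    }
  }
  return false;
}

// Two 2-pixel "screenshots": a white background, with and without a
// grey ghost pixel.
const withGhost = [255, 255, 255, 255, 128, 128, 128, 255];
const withoutGhost = [255, 255, 255, 255, 255, 255, 255, 255];

console.assert(containsColor(withGhost, [128, 128, 128]) === true);
console.assert(containsColor(withoutGhost, [128, 128, 128]) === false);
```

The fragile part is everything around this function - capturing the pixels reliably across browsers and machines - which is why such tests tend to be brittle in practice.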

0

Just a different perspective: testing around the UI/GUI can be done somewhat better with acceptance testing (feature- and business-workflow-centred tests).

For the web, I think frameworks like Selenium WebDriver have the potential to get close to the correct test, but the overhead to get started can be considerable, and it is a shift away from the flow of TDD with plain unit tests.

The part that would specifically help with your situation is the Page Object Model (https://www.guru99.com/page-object-model-pom-page-factory-in-selenium-ultimate-guide.html). It establishes an explicit mapping from the run-time GUI to code, via named methods/events/class members.
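A minimal sketch of the Page Object Model idea (the driver interface here is a stand-in, not the real Selenium API, and the LoginPage selectors are invented for illustration): the page object is the only place that knows how the GUI is located, so tests read in domain terms.

```javascript
// A page object: the single place that knows how the login page's GUI
// elements are located. Tests talk to it in domain terms only.
class LoginPage {
  constructor(driver) {
    this.driver = driver; // any object with findElement(selector)
  }
  login(user, password) {
    this.driver.findElement('#user').type(user);
    this.driver.findElement('#password').type(password);
    this.driver.findElement('#submit').click();
  }
}

// A fake driver records interactions, so the mapping itself is testable
// without a browser.
const log = [];
const fakeDriver = {
  findElement: (selector) => ({
    type: (text) => log.push(`${selector} <= ${text}`),
    click: () => log.push(`${selector} clicked`),
  }),
};

new LoginPage(fakeDriver).login('alice', 'secret');
console.assert(log[0] === '#user <= alice');
console.assert(log[2] === '#submit clicked');
```

Swapping the fake driver for a real WebDriver instance turns the same page object into an acceptance test, which is the payoff of the pattern.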

The main arguments against this are the overhead, and the fact that this overhead usually shows up late in the development cycle: the tests require a wrapper layer that can look like duplicated work.

Moving forward, it depends on the cost/benefit for the team and the business, so it could be a worthwhile topic to discuss in order to align expectations and views.