38

I know this sounds a lot like other questions which have already been asked, but it is actually slightly different. It seems to be generally considered that programmers are not good at performing the role of testing an application. For example:

Joel on Software - Top Five (Wrong) Reasons You Don't Have Testers (emphasis mine)

Don't even think of trying to tell college CS graduates that they can come work for you, but "everyone has to do a stint in QA for a while before moving on to code". I've seen a lot of this. Programmers do not make good testers, and you'll lose a good programmer, who is a lot harder to replace.

And in this question, one of the most popular answers says (again, my emphasis):

Developers can be testers, but they shouldn't be testers. Developers tend to unintentionally/unconsciously avoid using the application in a way that might break it. That's because they wrote it and mostly test it in the way it should be used.

So the question is are programmers bad at testing? What evidence or arguments are there to support this conclusion? Are programmers only bad at testing their own code? Is there any evidence to suggest that programmers are actually good at testing?

What do I mean by "testing"? I do not mean unit testing or anything that is considered part of the methodology the software team uses to write software. I mean some kind of quality assurance performed after the code has been built and deployed to whatever the software team would call the "test environment."

jhsowter
  • 491

9 Answers

41

The question seems to be asking specifically about System Testing, so that's what I'm referring to throughout this answer.

I think there's an important distinction to be made between being a bad person to choose to perform testing, and actually being bad at testing.

Why programmers are bad at testing:

  • If you've written the code, you (should) have already thought of as many ways as possible that things could go wrong, and have dealt with them.
  • If finding a particularly niggly bug means that you have to then go and fix it, in a codebase you might be sick of, then that isn't going to help your motivation.

Why programmers are good at testing:

  • Programmers tend to be logical thinkers, and good at working in a systematic way.
  • Experienced programmers will be very good at quickly identifying edge cases and so coming up with useful tests. (If there's a formalised testing process, then most of these cases should already have been identified and tested prior to systems testing.)
  • Programmers are pretty good at making sure that all the useful information goes into a bug report.

Why programmers are bad testers:

  • Programmers are more expensive than testers (in the vast majority of cases).
  • The mindset is fundamentally different: "Build a (working) product" vs "This thing isn't going out the door with any (unknown) bugs in it."
  • Testers will typically be more efficient - i.e. perform more tests in the same amount of time.
  • Programmers specialise in programming. QA professionals specialise in testing.
vaughandroid
  • 7,609
20

I think programmers are bad at testing their own code.

We like to believe our code works perfectly according to the requirements, and we test it as such. In my workplace we test our own code, then test each other's code before releasing into the actual testing cycle, and far more bugs were caught that way than by testing our own code alone.

Amy
  • 800
  • 4
  • 8
13

Programmers are definitely the right people to test some parts of the system -- in some places, they are the only ones who can do it effectively.

One place programmers tend to be very bad at testing is the whole "use the UI like a normal user" bit -- they aren't normal users and don't behave like them. For example:

  • Programmers tend to be very good at getting text entries just right. A pretty common issue I see is leading or (especially) trailing spaces. Most folks don't see them, but good programmers are probably religious about entering their strings exactly, without extraneous spaces.
  • Programmers tend to be keyboardists, taking advantage of tabs and other shortcuts to speed up work. Normal users tend to grab the mouse between fields.
  • Programmers tend to understand what the system is telling them rather than ignoring error messages and just clicking OK.
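To make the first bullet concrete, here is a minimal sketch (the lookup table and function are invented for illustration, not taken from any real system) of how an exact-match lookup passes a programmer's tidy input but fails on a normal user's pasted value:

```python
# Hypothetical user lookup; it only illustrates the trailing-space bullet.
USERS = {"alice": "alice@example.com"}

def find_email(name):
    # Exact-match lookup: there is no .strip(), which is the bug.
    return USERS.get(name)

print(find_email("alice"))    # the programmer's tidy input: found
print(find_email("alice "))   # a normal user's pasted input: None
```

The programmer who wrote and tested this would type the key exactly and never trigger the failure; a tester pasting "alice " from a spreadsheet finds it in seconds.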

So, normal users do lots of things programmers don't. You can't rely completely on the dev team for UAT.

Wyatt Barnett
  • 20,787
1

At the technical level (unit tests, integration tests, regression tests) programmers are probably the only qualified persons to be testers, because these kinds of tests are automatable and should thus be automated, which is something that requires programming.
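As a concrete illustration of what "automatable and therefore automated" looks like, here is a minimal sketch in Python; the function and its parsing rules are hypothetical, not from any answer above:

```python
# Hypothetical function under test; names and rules are illustrative only.
def normalize_price(raw):
    """Parse a user-supplied price string like '  $19.99 ' into a float."""
    return round(float(raw.strip().lstrip("$")), 2)

# An automated regression test: written once by a programmer,
# then run on every build with no manual effort.
def test_normalize_price():
    assert normalize_price("19.99") == 19.99
    assert normalize_price("  $19.99 ") == 19.99  # whitespace and symbol
    assert normalize_price("0.1") == 0.1

test_normalize_price()
print("all regression checks passed")
```

Writing and maintaining such checks is programming, which is why this layer of testing belongs to developers.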

But I don't think that's what you're talking about, and I'm pretty sure it's not what Joel Spolsky means either - it's the part that remains, the actual hands-on manual testing: turning a requirements document and functional spec into a test script and then meticulously executing this script against the finished product.

Being a good tester requires qualities that are mostly orthogonal to those that make a good programmer. There is a bit of overlap - you must be able to think analytically, and you need a certain affinity with computers in general - but other than that, the skills of a tester are quite different. That in itself doesn't mean you can't have both skill sets, and in fact quite a few people probably do. However, being a really good programmer requires a certain laziness (the desire to automate your chores away), while a really good tester needs persistence (checking all of three thousand form fields for inconsistencies), and as a consequence, even those programmers who do have what it takes to be a tester typically abhor the idea.

And then there's the selective bias: A programmer who is already involved with a project, even if only marginally, already has some inside knowledge about the codebase, and will have a hard time approaching it with a blank mind, from an end-user's perspective. It doesn't even have to be explicit, as in "I know this button works, so I'll just note 'pass'"; it can be way more subtle, and those subtle effects can lead to critical edge cases being missed in testing.

tdammers
  • 52,936
1

From my experience, yes, programmers are bad testers. Too often have I seen others and myself go "Huh, but I tested that before I checked in!" when confronted by a tester reproducing the bug in front of us.

Why? Well, I'm not sure why that is but maybe it's because we want to see the stuff working. Or we just want to get over with testing this or that feature already.

Anyway, testing isn't a skill we were trained in, and we don't work as programmers because we are good at breaking features. We might also have no idea how to do proper test planning or all the other things QA does. We aren't any more qualified to do a tester's job than a tester is qualified to implement your new 3D rendering pipeline.

As in the question, testing doesn't mean anything automated but actually testing by using the program.

Morothar
  • 211
1

There are several levels of testing. The "low level" testing can and must be done by developers; I'm thinking of unit testing.

On the other hand, "high level" testing is another thing entirely. In general I think developers are bad testers not because they lack the skills, but because it is very hard to change your way of thinking and working in a short time.

I always try to test my code as much as possible, but within ten minutes a tester will usually find something that counts as a bug or an enhancement. This shows that testing something you created is a hard job. You know where to click, you know when to click, you know the business logic, and you probably know how the data is persisted. You are a god: you'll never fall.

Angelo Badellino
  • 714
  • 4
  • 13
0

What kind of testing do you mean? If you mean comprehensive exhaustive testing then I could see some rationales for saying yes though I'd suspect most people would be poor in this category if one considers all possible combinations of inputs as a requirement for such testing.

I can acknowledge that the developer who designs the software may have tunnel vision about what the code is meant to handle, and may miss boundary cases that simply were never considered. For example, if I build a web form that takes a number, n, and then prints from 1 to n on the screen, I may miss special cases such as nothing being entered, or input that isn't a natural number, like e or pi. What the program is supposed to do in these cases may be open to question.
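Those boundary cases are exactly what input validation has to cover. Here is a minimal sketch (the function name and error messages are illustrative) of the checks a developer testing their own form might never think to exercise:

```python
def parse_count(raw):
    """Validate the form input before printing 1..n.
    Each raise below corresponds to an input a tester would try first."""
    text = raw.strip()
    if not text:
        raise ValueError("nothing was entered")
    try:
        n = int(text)  # rejects "e", "pi", "3.14"
    except ValueError:
        raise ValueError("not a whole number: %r" % raw)
    if n < 1:
        raise ValueError("must be a natural number")
    return n
```

The developer who wrote the happy path will naturally type "5"; it takes a tester's mindset to reach for "", "pi", or "0" on the first pass.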

Test Driven Development would be an example of a development methodology that puts testing into a different light that may give another view here.

JB King
  • 16,775
0

Programmers are fine defining tests when they define the tests before writing the code. With practice, they get even better.

However, when defining tests for code they have written, they don't do very well. They will have the same blind spots in testing that they had in writing the code.

Using programmers to do manual testing is just silly. Manual testing is silly enough on its own; making programmers do it is extremely silly. It's expensive and drives away the competent programmers.

kevin cline
  • 33,798
0

One type of testing I have particularly seen developers fail at is testing whether the requirement has been met. What developers think something in a requirement means and what testers think it means are often two completely different things.

I can think of one just recently where the developer was asked to do a delta export, and the dev thought that meant getting any records that had not been sent before, while the testers thought it meant getting any new records plus any changes. They had to go back to the client to find out who was correct. I code reviewed it, and I made the same assumption the dev did about the requirement, because logically, if you wanted to include updates, you would have mentioned them. And I'm usually good at spotting those ambiguous things, because I used to be on the user end of things.
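The two readings of "delta export" can be written as two filters over the same records; the field names here are invented purely for illustration:

```python
# Hypothetical record set; "sent" and "changed_since_send" are made-up fields.
records = [
    {"id": 1, "sent": True,  "changed_since_send": False},
    {"id": 2, "sent": True,  "changed_since_send": True},
    {"id": 3, "sent": False, "changed_since_send": False},
]

# Reading A (the developer's): only records never sent before.
delta_a = [r for r in records if not r["sent"]]

# Reading B (the testers'): new records plus changed ones.
delta_b = [r for r in records if not r["sent"] or r["changed_since_send"]]

print([r["id"] for r in delta_a])  # [3]
print([r["id"] for r in delta_b])  # [2, 3]
```

Both filters are trivially easy to implement; the hard part is noticing that the requirement admits both, which is the gap a non-developer tester fills.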

So other devs doing the testing would tend to make many of the same assumptions, thinking things like "well, they would have given more detail if they meant X rather than Y, because there are so many details to be answered before I could build it." But requirements writers don't think that way. So someone who thinks more like a requirements writer needs to test the developer's assumptions, and someone who is not a developer is better placed even to see that there is an issue.

HLGEM
  • 28,819