27

I've been involved with many projects at several companies, because I've been a developer for a long time and I work as a contractor.

I estimate that less than 20% of projects are methodically tested. By "methodically tested" I mean any testing beyond ad-hoc, no-plan testing.

I also estimate that less than 10% of projects are thoroughly and methodically tested: dedicated testers as part of the team, a test plan document, developers writing automated tests, and test coverage being tracked and results measured.

Two questions:

  1. What are your percentage estimates about this issue?
  2. What's your professional experience regarding software testing?

Additional note:

Since a question about methodical testing may get quite biased answers (people like to brag about being superior to others), I encourage developers who haven't been exposed to methodical testing to provide their answers as well; otherwise it will look like testing is being done everywhere... except at your company.

yannis
  • 39,647

15 Answers

11

The pattern I have seen with testing over my career shows a strong correspondence with the risk of failure in a project: big projects are more likely to be tested than small ones, mission-critical applications are more likely to be tested than one-off marketing web sites, and in-house systems are less likely to be tested than public-facing ones.

That said, there are still projects that have been tested excessively and projects that have not been tested enough, but these are the minority.

blunders
  • 4,538
Martin Brown
  • 327
  • 3
  • 10
7

Everything we produce gets completely tested. If our internal QA team is overloaded, we have an offshore team that tests the projects. They're not as good as our internal team, but that's a different topic.

Ali
  • 211
  • 2
  • 8
5

Yes.

The amount of testing is in proportion to the reliability required of the app, as well as the maturity of the programmer culture.

Web sites are quite often walking bug-holes (broken links are a defect).

Video games are often buggy.

Windows (finally) is fairly reliable.

Routers are very reliable.

Hospital monitors "don't break."

Note that the fiscal cost of failure is also correlated with reliability.

Paul Nathan
  • 8,560
  • 1
  • 34
  • 41
4

In 10 years, I have never worked on a project with formal code testing.

In my current job, we only have functional testing.

The problem is that no one in management is even aware of code testing. The testing department doesn't know about testing code either; they just follow the high-level specifications and verify that we comply with them from a behavioral/functional standpoint.

We don't have a qualified software leader who forces us to code well. The result is spaghetti code, lots of regressions, missed schedules and so on...

Wizard79
  • 7,337
  • 2
  • 43
  • 76
3

The three companies I have worked for during the last 15 years all had unit tests which were run automatically.

At two of those companies I pushed for introducing them.

sbi
  • 10,052
3

In the last 9 years, I've basically only encountered acceptance/regression tests. There were only a few unit tests.

3

My sample is too small to deduce percentages from, but here goes anyway.

One was a fabless chip + firmware company, which did fanatical testing. 24/7 automated tests on tens of installations, each testing tens of units in parallel. Software teams dedicated to developing testing software. Hardware teams dedicated to building test rigs. Compatibility testing against tens of competitors. Heck, they even bought a multi-million dollar chip tester installation to develop and debug some of the tests that the fabs run when the chips leave the foundry.

Another one was a bank. This one is a completely different environment: no product releases, but lots and lots of in-house software to keep running continuously. These guys tested the cr*p out of every single change they made. They had very strict separation of DEV/QA/PROD environments, automated regression testing, mandatory QA testing signed off by end-users before releasing into production, etc.

So yeah, people do do methodical testing. But as you can tell, I've never worked at a place that ships your typical GUI software for the typical computer user.

3

I currently write embedded firmware for a small startup company making wireless medical devices. We are required to do rigorous testing, and we have a completely separate quality department headed by someone who reports directly to the CEO. I have never had my code so thoroughly tested by separate testers before (the only time that compares is when I was working on satellite TV systems, about 15 years ago).

Our test results get submitted to the FDA (so far we have gotten two FDA clearances -- each submittal was around 500 pages long). Our development and testing methodologies are both subject to periodic auditing.

So it is not only the big companies that do lots of formal testing.

Note -- in my 25+ years of contract programming/consulting, I have also worked for many companies that did virtually no formal testing. Most of them are not around anymore.

tcrosley
  • 9,621
2

We are a mid-sized offshore company in South Asia. However, we always do US-based projects and work directly from requirements sent by the US company.

We apply methodical testing to every application we build. Perhaps the quality of the testing isn't up to standard, but we do employ it.

2

As much as the purist in me doesn't want to accept it, there has to be some risk management built into the decision of how rigorously you test, or whether you do formalized testing at all. For internal apps, which I suspect are a large percentage of programming projects, the cost of releasing a bug and then quickly patching it after it is noticed can sometimes be outweighed by the cost of a full testing team. Of course, it depends on the app and the potential cost of failures.

That said, I don't think risk management planning is the reason for the lack of formalized testing. I think it is more a result of non-technical managers not understanding the value it provides and seeing only the cost.

JohnFx
  • 19,040
0

In the last twenty or so years of my career, across eight or so companies, I've never worked on a project that didn't do testing. The amount of testing differed at each company, but every professional development project I've ever worked on did formal testing. This applies equally to small and mid-sized companies (where "small" means fewer than 10 employees, and "mid-sized" means a couple thousand employees or less).

Some companies didn't have much automated testing, some didn't have much manual testing, but they had at least one or the other.

Bryan Oakley
  • 25,479
0

It depends on the needs of the customer. In a contract situation there are probably acceptance tests. In-house software is usually a slapdash job with little testing. Consumer software usually has its frequently used functionality well covered, but is rough around the edges.

anon
  • 1,494
0

Short answer: Yes

Long answer:

  1. I do not have a good estimate for the first category (it is probably some distance from zero, but how far?), but my experience corroborates your second estimate. It is hard to give meaningful percentages, since the amount and type of testing depend on the kind of application being developed, the available timeframe, the skill set of the developers, and how the project is run. In practice, the most important hurdle for the developers is the acceptance test, since that is an important milestone for billing purposes. But it is also the time when the unexpected may happen (more requirements), and the developers may be pressured to deliver and get by with whatever ad-hoc testing is possible and timely at that stage, on top of the time needed for troubleshooting and overcoming the unexpected.

  2. I have been through a variety of projects with different combinations of the factors mentioned above:

    • no formal unit tests; only integration tests and mostly ad-hoc tests

    • very formal, ranging from unit tests to detailed test plans involving dedicated QA resources, automated testing (conducted by the testers with their own set of tools), and code coverage reports. But these are not always as meaningful to the developers as they are to the managers.

On the individual level, I try to maintain an understanding of my options for writing proper tests appropriate to the technology I am dealing with, and I exercise them at my own discretion -- basically the things that are actually meaningful and beneficial to my work, not so much cranking out numbers.

prusswan
  • 201
0

Almost every company I have worked at did methodical testing. My current company has only some basic unit-style tests, and that is not sufficient; we have had quality issues because of this. I highly recommend independent, thorough testing on any project that is going to be used by anyone besides yourself. The money spent will be well worth it: applications that don't work don't get used. That goes for internal-facing as well as external-facing applications.

Bill Leeper
  • 4,115
  • 17
  • 20
0

Consider, for example, the CPAN library, which is the core body of contributed software for the Perl programming language ...

Any time and every time you install a package from this [vast ...] library, an automated suite of tests provided by the package author is run on your machine. If any of those tests fail, the package will not be installed.
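
For concreteness, here is a minimal sketch of the kind of bundled test an installer runs, written against Perl's standard Test::More module (the distribution name My::Module is hypothetical):

    # t/00-basic.t -- shipped in the distribution's t/ directory;
    # the CPAN client runs every file under t/ and aborts the
    # installation if any assertion fails.
    use strict;
    use warnings;
    use Test::More tests => 2;

    # My::Module stands in for whatever package is being installed.
    BEGIN { use_ok('My::Module') }      # the module must load cleanly
    ok( My::Module->can('new'), 'documented constructor is available' );

Clients such as cpan or cpanm drive these files through make test (or prove), so a single failing assertion is enough to stop the install.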

And then, there is "CPANTS." https://cpants.cpanauthors.org/

CPANTS is a testing service for CPAN distributions. One of its goals is to provide a quality measure called Kwalitee. Though it looks and sounds like quality, a higher Kwalitee score doesn't always mean a distribution is more useful to you. All it can assure is that you are less likely to encounter problems with installation, the format of the manuals, licensing, or perhaps portability, since most of the CPANTS metrics are based on past toolchain/QA issues you may or may not remember.

Yep ... this automated service constantly tests new CPAN package releases on all sorts of widely different hardware and software platforms (the ones the package is supposed to support ...) and checks whether it actually does.

Of course, this isn't the only library (or language) that routinely does this.


Very long ago, I realized the personal value of "test-driven development": write a class, then write a test for it. As you continue to add to the system, constantly run all the tests.

"Suh-prize!!"

Wow ... how did we break this? ... well, I sure am glad we caught that one early! It would have been very nasty had it made it to production!!

Uh huh. Once you learn that lesson for yourself – even when you're the only developer on the project – you don't forget it.
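
And the habit is cheap to start. A minimal sketch in Perl's Test::More style, assuming a hypothetical My::Stack class:

    use strict;
    use warnings;
    use Test::More;

    use My::Stack;   # hypothetical class under test

    my $stack = My::Stack->new;
    is( $stack->size, 0,  'a new stack starts empty' );

    $stack->push(42);
    is( $stack->size, 1,  'push grows the stack' );
    is( $stack->pop,  42, 'pop returns the last value pushed' );
    is( $stack->size, 0,  'pop shrinks the stack again' );

    done_testing();

Re-run the whole suite after every change; that is what turns "how did we break this?" into a five-minute fix instead of a production incident.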