
As a QA person, I have always believed that a defect report should include all the steps anyone fixing it would need to reproduce it.

However, is it crazy to expect that every defect QA opens also needs to be checked to see whether it already exists in production? The reason I ask is that we have a dev manager who constantly complains that if QA had investigated whether an issue exists in production, it would save time for the dev folks. I understand that, but I still think it's nonsense to check whether every bug QA finds in a testing cycle exists in production too.

4 Answers


From the development manager's standpoint, it absolutely matters whether the defect is new or an existing one, because that has a direct and immediate impact on how the bug needs to be handled. The most important question for that manager is whether the bug needs to be resolved in the current release cycle or whether it can wait and be prioritized in a later one. That, in turn, often depends on whether the bug is new or not.

If you've found a new bug, that implies that one of the new features or bug fixes in the current release cycle introduced the issue. If that's the case, someone either needs to remedy the issue as part of the current release (either by reverting the change that introduced the bug or by fixing the bug itself), or the business needs to decide whether adding new feature X is worth deploying even if it introduces new bug Y. Almost always, the bug has to be resolved in the current build cycle or the offending change needs to be rolled back. On the other hand, if you've found an old bug that existed prior to the current round of changes, the current build cycle can generally continue and the newly discovered bug can be prioritized for a future release. Of course, there are cases where a newly identified pre-existing bug needs to be handled in the current release cycle because it is just that critical, but those cases tend to be rare.
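To make that distinction concrete, the triage rule described above could be sketched roughly as follows. This is only an illustration; the names, categories, and inputs are hypothetical and not taken from any particular tracker or process.

```python
from enum import Enum


class Disposition(Enum):
    FIX_OR_REVERT_THIS_RELEASE = "fix or revert the change in the current release"
    ACCEPT_AND_SHIP = "ship anyway; business accepts the new bug"
    PRIORITIZE_LATER = "log it and prioritize it for a future release"
    FIX_NOW_CRITICAL = "pre-existing but critical; handle in this release"


def triage(new_in_this_release: bool, critical: bool, business_accepts_risk: bool) -> Disposition:
    """Rough sketch of the triage logic described above."""
    if new_in_this_release:
        # A change in the current cycle introduced it: fix it, revert the
        # offending change, or get an explicit business decision to ship with it.
        if business_accepts_risk:
            return Disposition.ACCEPT_AND_SHIP
        return Disposition.FIX_OR_REVERT_THIS_RELEASE
    # Pre-existing bug: normally deferred to a later cycle, unless it is
    # severe enough to block the release on its own.
    if critical:
        return Disposition.FIX_NOW_CRITICAL
    return Disposition.PRIORITIZE_LATER


print(triage(new_in_this_release=False, critical=False, business_accepts_risk=False))
```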

Now, whether it should be QA's responsibility to check whether the bug already exists in production, or whether that should be done by whoever is prioritizing the bugs (assuming that prioritization happens immediately), is an open question. My bias would be to ask QA to do it, since they're already writing up the bug. Because the QA person already knows how to reproduce the bug, they're best positioned to verify whether it exists in production or not. The QA department also tends to have more hours available for this sort of investigation than the person doing the prioritization does, since the work can be spread across many analysts.

Justin Cave

There are two separate questions here: should the production system be checked, and who should do it? Let's assume we're talking about two separate departments in the same company ...

  • Which department has the time to do it? In most projects I've experienced, whether waterfall or Scrum, testing becomes the bottleneck as the release draws near. It might make sense to hand the check on the production environment off to the dev staff for that reason alone.
  • On the other hand, many testers earn less than developers, so it makes economic sense to let them do the legwork if they're available in sufficient numbers.
  • Who has the necessary access to the production environment? Where I work, Development often doesn't have it, just Operations (of course) and QA (because they're doubling as second-level support).

Tell your dev manager that you're happy to do the checks on staging, production, or whatever system, as long as the project budget and schedule reflect this responsibility.

o.m.

Once a defect is found, I think the most important thing to do is to get it logged with the key information: a title or summary, details about what the defect or problem is, what actually happens versus what you expect to happen, the steps needed to reproduce it, and the environment where it was found (such as the OS, web browser, software version under test, etc.). Depending on your process, the person submitting the defect report may also assign a priority and/or severity.
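As a rough illustration of the kind of record that captures those fields, a minimal defect report might look like the sketch below. The field names and the sample values are made up for illustration and don't correspond to any particular bug tracker.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DefectReport:
    """Minimal defect record mirroring the key information listed above."""
    title: str                     # short summary
    description: str               # what the defect or problem is
    actual_behavior: str           # what actually happens
    expected_behavior: str         # what should happen
    steps_to_reproduce: List[str]  # ordered reproduction steps
    environment: str               # OS, browser, software version under test, etc.
    severity: Optional[str] = None # may be set by the reporter, depending on process
    priority: Optional[str] = None # often set later, during triage


report = DefectReport(
    title="Login button unresponsive on second attempt",
    description="Clicking Login a second time after a failed attempt does nothing.",
    actual_behavior="No request is sent; the button appears disabled.",
    expected_behavior="A second login attempt is submitted normally.",
    steps_to_reproduce=[
        "Open the login page",
        "Submit invalid credentials",
        "Correct the password and click Login again",
    ],
    environment="Firefox 126 on Windows 11, build 2.4.0-rc1",
)
```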

From a QA perspective, you are usually looking at software that is still under development, or at best a release candidate. Although the development team should be performing unit, integration, and some system tests before you get to it, their tests aren't perfect and you're going to find issues that the dev team should look at before they go to release. Ultimately, it's up to someone in the organization to prioritize identified issues and decide if something will be fixed in this release cycle or a later release cycle.

Throughout the development cycle, both the development team and the QA team should be changing their tests. There may be a regression test suite that always gets executed, but I would expect testing to be more thorough and focused on the features or parts of the software that have changed. Defects could be exposed for any number of reasons: a change could have introduced a defect, a change could have made a defect more obvious, easier to trigger, or more common, or the new test cases could have found a defect that has been latent in the software for a long time. Regardless, exactly why the defect was detected by this round of testing isn't that important right now - the first priority should be to fix the defect.

In some environments, such as customer self-hosted software or where there is an obligation to support or maintain multiple versions, it may be necessary to test multiple versions of the software to determine when the defect was first introduced so that customers can be notified, especially if it is a significant defect. In environments like these, I would expect more of the burden to fall on the QA team to identify the affected version(s) out of those that are currently supported and to explain why the issue escaped multiple levels and rounds of testing.
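A schematic sketch of that version survey is shown below. The `reproduces_on` callable is a placeholder for whatever mechanism actually exercises the defect's reproduction steps against a deployed copy of a given version; nothing here reflects a specific tool.

```python
from typing import Callable, Dict, Iterable


def affected_versions(
    supported_versions: Iterable[str],
    reproduces_on: Callable[[str], bool],
) -> Dict[str, bool]:
    """Run the defect's reproduction steps against each supported version
    and record which ones exhibit the problem."""
    return {version: reproduces_on(version) for version in supported_versions}


# Hypothetical usage: reproduces_on("5.3") would point the reproduction
# steps at the 5.3 environment and return True if the defect appears.
# results = affected_versions(["5.2", "5.3", "6.0"], reproduces_on=check_version)
```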

However, in environments where only one version of the software is supported, I'm not sure that it really matters which version the defect started in. If the defect is affecting users and they have a mechanism to report issues, they will report it as a problem. Still, from a quality perspective, it is important to understand the known issues with the software. From a project management perspective, the number of open issues can be used to decide whether the software is ready to be released or to plan work for a future development cycle. Even if a defect isn't scheduled to be fixed before the end of the current development cycle, it can be reflected in user-facing documentation, along with workarounds, until it is fixed.

In the end, after the defect has been fixed, you may choose to perform some kind of root cause analysis to determine when the defect was injected (if it came from the current development cycle), why it wasn't discovered in previous development and QA cycles (if it wasn't recently introduced), and what types of tests should be created, executed, and maintained to prevent the defect from returning in the future.


As an aside, I would like to note that there are multiple ideas about what a Software QA organization should be responsible for. In some organizations, Software QA is essentially a test team that acts as the last line of defense for software releases, ensuring that the software meets requirements and has no significant issues. In other organizations, Software QA is responsible not only for product quality but also for process quality, and may be involved in every step of the development process to ensure that all work products (from the requirements through the distribution media) are complete and correct per the applicable standards. There are other expectations as well. This answer focuses on Software QA being responsible for product quality.

Thomas Owens

As a developer, I run into this problem a lot.

A new feature is sent to test, and a bug is reported describing behaviour which, although it 'seems wrong', is already present in the live environment.

This is a problem because we have been assigned time to develop the new feature, not to fix a random 'bug'.

Often there has to be an extended discussion about whether the behaviour is a bug at all, what priority fixing it should take, and so on.

The problem stems from creating additional test cases and not running them against live before adding them to the test suite.
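One way to make that check routine is to run any newly written test case against both the environment under test and live before it joins the suite, so a failure that also reproduces on live gets filed as an existing issue rather than a regression against the new feature. A minimal sketch follows; the URLs, endpoint, and assertion are hypothetical placeholders.

```python
import pytest
import requests

# Hypothetical base URLs for the environment under test and the live system.
ENVIRONMENTS = {
    "test": "https://test.example.com",
    "live": "https://www.example.com",
}


@pytest.mark.parametrize("env", ENVIRONMENTS)
def test_order_total_includes_discount(env):
    """If this fails on 'live' as well, the behaviour predates the current
    changes and should be triaged as an existing issue, not a regression."""
    base_url = ENVIRONMENTS[env]
    response = requests.get(f"{base_url}/api/orders/sample", timeout=10)
    response.raise_for_status()
    order = response.json()
    assert order["total"] == order["subtotal"] - order["discount"]
```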

Developers using Scrum or some other agile process usually have their time quite tightly managed.

If testers raise bugs ad hoc, rather than against the specific requirements the developers are working to, it causes delay and frustration.

Ewan