24

Someone in my company recently proposed changes to our core product that our managers feel should trigger what I guess my company considers a full QA cycle (i.e., testing the entire product suite from the ground up). Apparently our QA takes 12 weeks to do a full QA cycle for our product. My problem with this is that we are trying to do Agile (although mostly half-assed, in my opinion) development. We will do a whole set of sprints and then do a release, which I guess QA will then take forever to work through. The question really is: if our QA is going to take 12 weeks to do their job, shouldn't we just give up trying to do Agile? What the hell is the point of trying to do Agile in a situation like this?

Kromster

9 Answers

21

Well, the direct answer to your question would be Mu, I'm afraid: there simply aren't enough details to make an informed guess about whether you should quit trying or not.

The only thing I am pretty positive about is that the level of agility should be driven by customer/market needs (which you gave no information about).

  • For example, as a user of an IDE I am perfectly happy to upgrade to a new version once or maybe twice a year, and I am never in a hurry to do so. That is, if their release cycle is 3 months (12 weeks), then I am perfectly happy with that.
     
    On the other hand, I can easily imagine, say, a financial trading company going bankrupt if it takes more than a month for their software to adapt to market changes; a 12-week test cycle in that case would be a road to hell. Now, what are your product's needs in this regard?

Another thing to consider is what level of quality is required to serve your customer/market needs.

  • Case in point: at a company where I once worked, we found we needed a new feature in a product licensed from a software vendor. Without this feature we suffered quite badly, so yes, we really wanted the vendor to be agile and to deliver an update within a month.
     
    And yes, they turned out to be agile, and yes, they released that update within a month (if their QA cycle was 12 weeks, they likely just skipped it). And our feature worked perfectly well; you'd guess we should have been perfectly happy? No! We discovered a showstopper regression bug in functionality that had worked just fine before, so we had to stick with, and suffer through, the older version.
     
    Another month passed and they released another new version: our feature was there, but so was the same regression bug, so again we didn't upgrade. And another month, and another.
     
    In the end we were able to upgrade only half a year later. So much for their agility.

Now, let's look a little closer at these 12 weeks you mention.

What options have you considered for shortening the QA cycle? As you can see from the example above, simply skipping it might not give you what you expect, so you'd better be, well, agile, and consider different ways to address it.

For example, have you considered ways to improve the testability of your product?

Or have you considered the brute-force solution of simply hiring more QA? However simple it looks, in some cases this is indeed the way to go. I've seen inexperienced management try to fix product quality problems by blindly hiring more and more senior developers when just a pair of average professional testers would have sufficed. Pretty pathetic.

Last but not least, I think one should be agile about the very application of agile principles. I mean, if the project requirements aren't agile (i.e. they are stable or change slowly), then why bother? I once observed top management forcing Scrum on projects that were doing perfectly well without it. What a waste it was. Not only were there no improvements in delivery, but worse, developers and testers all became unhappy.


Update based on clarifications provided in the comments

For me, one of the most important parts of Agile is having a shippable release at the end of each sprint. That implies several things. First, a level of testing must be done to ensure there are no showstopping bugs if you think you could release the build to a customer...

A shippable release, I see. Hm. Hmmm. Consider adding a shot or two of Lean to your Agile cocktail. I mean, if this is not a customer/market need, then it amounts to nothing but a waste of (testing) resources.

I for one see nothing criminal in treating the sprint-end release as just a checkpoint that satisfies the team.

  • Dev: "yeah, that one looks good enough to pass to the testers"; QA: "yeah, that one looks good enough in case further shippable-testing is needed" - stuff like that. The team (dev + QA) is satisfied, and that's it.

...The most important point that you made was at the end of your response in terms of not applying agile if the requirements are not agile. I think this is spot on. When we started doing agile, we had it dialed in, and the circumstances made sense. But since then, things have changed dramatically, and we are clinging to the process where it may not make sense any longer.

You got it exactly right. Also, from what you describe, it looks like you have reached a state (in team/management maturity and customer relationship) that allows you to use a regular iterative development model instead of Scrum. If so, you might also be interested to know that, in my experience, regular iterative development felt more productive than Scrum in cases like that. Much more productive: there was simply so much less overhead, and it was so much easier for developers to focus on development (and for QA, respectively, to focus on testing).

  • I usually think of it in terms of a Ferrari (regular iterative) vs. a Land Rover (Scrum).
     
    When driving on a highway (and your project seems to have reached that highway), the Ferrari beats the hell out of the Land Rover.
     
    It's off-road where one needs a jeep, not a sports car. I mean, if your requirements are irregular and/or the teamwork and management experience are not that good, you'll have to choose Scrum, simply because trying to go regular will get you stuck, just like a Ferrari gets stuck off-road.

Our full product is really made up of many smaller parts that can all be upgraded independently. I think our customers are very willing to upgrade those smaller components much more frequently. It seems to me that we should perhaps focus on releasing and QA'ing those smaller components at the end of sprints instead...

The above sounds like a good plan. I worked on such a project once. We shipped monthly releases with updates localized within small, low-risk components, and QA sign-off for these was as easy as it gets.

  • One thing to keep in mind for this strategy is to have a testable verification that changes are localized where expected. Even if this goes as far as a bit-by-bit file comparison of the components that didn't change, go for it, or you won't get the release shipped (a minimal sketch of such a check follows below). The thing is, it's QA who is responsible for release quality, not us developers.
     
    It is the testers' headache to make sure that unexpected changes didn't slip through, because, frankly, as a developer I've got enough other, more important stuff to worry about. And because of that, they (the testers) really, really need solid proof that things are under control in the release they are testing to ship.
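
A minimal sketch of such a check, assuming Python; the directory names (old_release, new_release) and the expected-component prefix are illustrative, not taken from the question:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(old_root: Path, new_root: Path) -> set[str]:
    """Compare two release trees; return relative paths that were
    added, removed, or modified."""
    old = {p.relative_to(old_root).as_posix(): file_digest(p)
           for p in old_root.rglob("*") if p.is_file()}
    new = {p.relative_to(new_root).as_posix(): file_digest(p)
           for p in new_root.rglob("*") if p.is_file()}
    return {path for path in old.keys() | new.keys()
            if old.get(path) != new.get(path)}

# Hypothetical usage: fail the check if anything changed outside
# the components the release claims to have touched.
EXPECTED_PREFIXES = ("components/reporting/",)  # illustrative

unexpected = {f for f in changed_files(Path("old_release"), Path("new_release"))
              if not f.startswith(EXPECTED_PREFIXES)}
if unexpected:
    raise SystemExit(f"Unexpected changes outside declared components: {unexpected}")
print("All changes are localized to the declared components.")
```

This gives the testers the "solid proof" mentioned above: a mechanical demonstration that nothing moved outside the declared components.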
gnat
15

Oh, I do feel your pain. There are some serious changes you need to make to the QA team for this to work.

My advice is to split the team into three teams:

  • Feature testing - fast turnaround on testing new developments.

  • Regression testing - fully testing the product before it goes out the door. This shouldn't take 3 months, even after reducing the team size, because most bugs will already have been found by the first team.

  • Automated testing - writing a full suite of regression tests to speed up the job of the regression testing team (a minimal sketch of such a test follows below).

The third team is a bonus, but if you can't have the first two, you may as well be doing waterfall.
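
As a rough illustration of the kind of thing the third team would produce - a minimal pytest-style regression test; the module and class names (reporting, ReportService, generate_summary) are hypothetical stand-ins for whatever the product actually exposes:

```python
# test_regression_reporting.py - each test pins down behavior that
# previously shipped, so a failure flags a regression before the
# manual regression team even starts.
import pytest

from reporting import ReportService  # hypothetical product module

@pytest.fixture
def service():
    return ReportService(currency="USD")

def test_summary_totals_are_stable(service):
    # Known input/output pair captured from the last good release.
    summary = service.generate_summary(orders=[100, 250, 50])
    assert summary.total == 400
    assert summary.currency == "USD"

def test_empty_order_list_is_handled(service):
    # Regression guard: suppose an earlier release crashed on empty input.
    summary = service.generate_summary(orders=[])
    assert summary.total == 0
```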

pdr
13

By way of illustration:

[Image: diagram of the inner TDD loop nested inside the outer ATDD circle]

Note that your QA team is probably working outside the (ATDD) circle, and you are working inside.

I think it is OK to work that way; if you can prove in your automated tests that you are fulfilling the customer's requirements in each sprint, you can allow QA to perform their tests at their leisure and come to you with defects, which you can then work into the next sprint.
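
As a sketch of what "proving in your automated tests that you are fulfilling the customer's requirements" might look like - an acceptance test written in a given/when/then shape, with hypothetical names (shop, Cart, add_item, checkout_total) standing in for the real product:

```python
# test_acceptance_discounts.py - hypothetical acceptance test tying an
# automated check directly to a sprint requirement from the customer:
# "orders over $100 receive a 10% discount".
import pytest

from shop import Cart  # hypothetical product module

def test_orders_over_100_get_ten_percent_discount():
    # Given a cart whose subtotal exceeds $100
    cart = Cart()
    cart.add_item(price=120.00)
    # When the total is computed at checkout
    total = cart.checkout_total()
    # Then the customer pays 10% less
    assert total == pytest.approx(108.00)
```

If tests like this pass at the end of a sprint, QA's slower cycle becomes a second line of defense rather than the gate for every release.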

Robert Harvey
8

It sounds like you have a "Definition of Done" problem.

Given that your QA group is external, and only involved in customer releases, you can't rely on them for timely feedback on issues. That means that if you want rapid feedback, you're going to have to bring some degree of testing "in-house" for the team.

Treat the QA group as if they don't exist. Act as if your release at the end of the sprint will be deployed to your most critical environment the next day. The software isn't done until it's ready to go to the customers.

QA should find nothing.

This will be harder to achieve, and some things will probably sneak through the first few times. Automated acceptance tests and regression tests are your best friends here, and TDD will help you build up large parts of such suites. You should be able to know, quickly, whether you've broken anything; a sketch of such a gate follows below.
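
As a sketch of that fast feedback, assuming pytest and hypothetical suite locations (tests/acceptance, tests/regression) - a small gate script that refuses to call the sprint increment done while anything is broken:

```python
# check_release.py - hypothetical pre-release gate: run the automated
# acceptance and regression suites, and refuse to declare the sprint
# "done" unless every suite passes.
import subprocess
import sys

SUITES = ["tests/acceptance", "tests/regression"]  # illustrative paths

for suite in SUITES:
    result = subprocess.run([sys.executable, "-m", "pytest", suite, "-q"])
    if result.returncode != 0:
        sys.exit(f"Suite {suite} failed; the build is not shippable.")

print("All suites passed; the increment meets the Definition of Done.")
```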

3

The process you described is not an agile process. Teams with a high degree of agility are able to deliver reliable, potentially releasable builds every sprint. In most agile implementations, the QA function is built into the agile team, helping to achieve this goal.

If you, your project lead, your product owner, and the developers are not working together, and you do not have an improvement plan (retrospectives), then name your process something else and move on. It does not appear that your team's problems are the fault of managers or QA; they seem to be reacting to some systemic problem coming out of the development organization. All is not lost if the team is willing to take responsibility and begin working with stakeholders.

You could try three things. One, make sure each stakeholder has a concisely defined role and that each person understands their responsibility. Two, stabilize the build and then get sign-off from QA without introducing more changes. Three, institute test automation. The QA team will love you for it.

GuyR
3

Do you have a customer representative / product owner who can see a given release before QA is done with it and give you authoritative feedback on it? If so, you can practice agile methods, and get most of their benefit, while treating QA as a secondary, somewhat slow source of feedback. A release would only be "officially ready" after QA is through with it, but you wouldn't have to wait for them before starting the next.

But if the company rules say that the customer must not see a release before QA is done with it, then you can pretty much forget about being agile, until you manage to have those rules changed.

2

It's a pity the feedback takes so long, but I don't think it is worth abandoning agile. At the end of a sprint (or a couple of sprints) you release a product that you are confident could be put on the market. For your team, agile brings the ability to focus on the work to be done and to keep the product releasable. When QA finds issues, I suggest creating bug reports for them and addressing them in the next sprint (if they have a high enough priority).

Our product's field tests take a full 8 weeks, plus we are dependent on outside growers. Still, by doing agile we are able to stay focused on the work at hand and produce a new version really quickly when needed.

The problem lies (in your eyes) with the QA department; can the problem be solved there? Have you discussed it?

refro
1

12 weeks is long, but hopefully QA can provide you with feedback and bug reports during that time (rather than at the end of the three months).

Then you can still respond to the most important issues in an agile way, and can fix many, if not all, of them before QA has even finished!

Hugo
-1

What are the QA people doing while you're executing multiple sprints? It sounds like they feel the need to test everything after every change (which is why they wait for a whole batch of changes).

The development team is agile, but the rest of the company is not.

Whoever is in charge of QA either doesn't know what they are doing, or they have performed a Jedi mind trick on upper management and are allowed to take their sweet time. How can QA take longer than development?

JeffO