
Assume you're using continuous integration processes which frequently update some target environments, so that whenever changes are made "you" can test them right away. That's part of the goal of CI, no?

But also assume that you have other people involved in your test cycle, e.g. managers or customers. It makes sense to get other people involved in trying to review (break?) your upcoming changes, no?

But if you continuously keep delivering changes to the environment in which those other people are, seriously, trying to test them, then multiple issues may arise, such as:

  • they might waste their time reporting issues which, by the time they save the (in-depth) report, they cannot even reproduce themselves anymore (e.g. because you accidentally ran into the same issue and already fixed it in their environment).
  • you might not be able to reproduce issues they reported, since the environment in which they ran into the issue is no longer identical (you (!!!) might have overwritten their environment).

So what can you do (how to configure things?) to avoid such (frustrating) situations?

Pierre.Vriens

5 Answers


I'll give my experience on this one, mostly because it showcases why some answers are not always applicable.

Some context to start:

  • We have 7 environments to host roughly 80 applications; most of them rely on each other through web services or shared tables on DB2 for iSeries.
  • For better or worse, the iSeries is our DB system of reference.
  • This last point rules out any idea of bringing up an app with all its dependencies in an isolated environment, as bringing up an AS/400 for each would cost too much and we wouldn't have the hardware to run it anyway.

What we are doing is not fully automated Continuous Delivery; we have a schedule of releases to bring up a coherent set of applications for general operations. Aside from this, each test team can trigger a release in one of the Q/A environments for the application they are testing, and can put a lock on some application versions to avoid another team's request breaking their tests.

Application dependencies are checked before release, so the system won't release something if other applications can't be updated or don't match the needed dependencies. The main idea is to allow updates when they won't impact someone: if there are no tests planned, a version should flow up from the previous environment (and now that we have validated this 'on demand' system, we're aiming at removing the scheduled releases in the first 5 environments in the mid term).
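
As an illustration only, here is a minimal sketch of what such a pre-release dependency check could look like; the inventory, application names and version constraints are made up for the example, and a real system would query its deployment tool or CMDB instead:

    # Hypothetical pre-release dependency check: refuse to release an application
    # into an environment if the versions currently deployed there do not satisfy
    # the constraints the new version declares.

    # What is currently deployed in the target Q/A environment (made-up inventory).
    deployed = {
        "billing": "2.3.1",
        "customer-api": "1.8.0",
    }

    # Constraints declared by the release candidate (made-up example).
    release = {
        "app": "orders",
        "version": "4.2.0",
        "requires": {"billing": "2.3", "customer-api": "1.9"},  # needs customer-api 1.9.x
    }

    def satisfies(installed, wanted_prefix):
        """Very naive check: the installed version must start with the wanted prefix."""
        return installed is not None and installed.startswith(wanted_prefix)

    def can_release(release, deployed):
        for dep, wanted in release["requires"].items():
            if not satisfies(deployed.get(dep), wanted):
                print(f"Blocking {release['app']} {release['version']}: needs {dep} {wanted}.x, found {deployed.get(dep)}")
                return False
        return True

    if can_release(release, deployed):
        print(f"Releasing {release['app']} {release['version']}")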

The short version is to have a 'semaphore' system around the applications in the environment: a team should be able to lock its target application with its dependencies (and transitive dependencies) for the duration of manual tests.
The implementation of this semaphore is highly dependent on your automation system, so I won't go into detail on that.
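
As an illustration only (the lock store, team names and file location below are assumptions, not our actual implementation), such a semaphore can be as small as a shared record that every deployment job consults before overwriting anything:

    # Hypothetical environment 'semaphore': a team locks an application (and its
    # dependencies) in a given environment before manual tests, and deployment
    # jobs refuse to overwrite anything locked by another team.
    import json, time
    from pathlib import Path

    LOCK_FILE = Path("locks.json")  # stand-in for a shared store (DB row, key/value store, ...)

    def load_locks():
        return json.loads(LOCK_FILE.read_text()) if LOCK_FILE.exists() else {}

    def lock(env, apps, team, hours=8):
        locks = load_locks()
        for app in apps:
            locks[f"{env}/{app}"] = {"team": team, "expires": time.time() + hours * 3600}
        LOCK_FILE.write_text(json.dumps(locks))

    def locked_by_other(env, app, team):
        entry = load_locks().get(f"{env}/{app}")
        return bool(entry) and entry["team"] != team and entry["expires"] > time.time()

    # The billing test team locks its application and a dependency in qa2...
    lock("qa2", ["billing", "customer-api"], team="team-billing")

    # ...so another team's deployment job backs off instead of overwriting their tests.
    if locked_by_other("qa2", "billing", team="team-orders"):
        print("billing is locked in qa2 by another team; deploy elsewhere or wait")

In practice the lock would live in whatever your automation system already offers (a CI variable, a database row, a key/value store); the only important part is that deployments check it before touching an environment someone is testing in.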

Of course the easy way, as others mentioned, is to create a fresh environment for an application with all its dependencies, avoiding the need for the semaphore described above.

Tensibai

Sounds like you're talking about a test environment which is constantly re-used without being reliably re-initialized for every test execution. That makes such testing unreliable, similar, from a reliability perspective, to manual testing, if you want.

IMHO you shouldn't be using such testing for your CI/CD qualification purposes, as that will effectively invalidate your qualification process (at least in that area). Saying that the software passes test X without actually executing test X for every software version delivered, or without the certainty that the pass result is not accidental (a false positive), will erode confidence in your testing. False negatives are not as damaging to credibility, but they are also undesired because of the unnecessary "noise" they create.

It's fine to execute such testing outside your CI/CD qualification process. But you'd be treating a failed result in such testing just like a customer-found bug: you'd need to reliably reproduce the issue to be able to develop a fix for it and confirm that the fix is working. And you can't really do that if the testing is not reliable.

If you plan to address the issue then ideally you'd first develop an automated, reliable test case reproducing it, which you'd use to develop a fix and confirm its effectiveness (the test result should transition from FAIL to PASS). You can (should?) also place this test case inside your CI/CD qualification process to prevent future re-occurrence and increase your overall software release quality level.
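
For example (purely illustrative: the function under test is a made-up stand-in so the snippet runs on its own; in reality you'd import it from your application), such a regression test could look like this and then stay in the CI qualification suite:

    # Hypothetical regression test reproducing a bug reported from a manual test
    # environment: it should FAIL before the fix and PASS after it.
    import pytest

    def apply_discount(order_total, discount_pct, already_discounted=False):
        """Stand-in for the real function under test (assumption for the example)."""
        if already_discounted:
            return order_total  # the fix: never apply the discount twice
        return order_total * (1 - discount_pct / 100)

    def test_discount_is_not_applied_twice():
        # Reported issue (hypothetical): re-submitting an order applied the
        # discount a second time.
        price = apply_discount(100.0, 10)
        price = apply_discount(price, 10, already_discounted=True)
        assert price == pytest.approx(90.0)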

Dan Cornilescu

The usual approach is to create different environments:

DEV - this is the place where the dev team can mess things up. Here all changes and tunings are made, new versions deployed, and so on. This is the place where CI is fully integrated.

PREPROD/QA - this is the place where the QA/test/validation team "plays" and does its tests. This environment is usually frozen during the tests. Integration of CI with this environment is limited to providing new versions of the product, configurations, etc.

PRODUCTION - does it need explaining :)?
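
Illustratively (the environment names, version numbers and gating rules below are assumptions added for the example, not part of the answer), the difference between these environments can be expressed directly in the deployment logic: every change flows to DEV automatically, PREPROD/QA is only updated on request and never while it is frozen for tests, and PRODUCTION is a manual promotion:

    # Hypothetical per-environment deployment policy for the DEV / PREPROD / PROD split.
    ENVIRONMENTS = {
        "dev":     {"auto_deploy": True},
        "preprod": {"auto_deploy": False},  # QA may freeze it during a test campaign
        "prod":    {"auto_deploy": False},
    }

    frozen = {"preprod"}  # set by the test team while their campaign runs (assumption)

    def deploy(env, version, requested_manually=False):
        if not ENVIRONMENTS[env]["auto_deploy"] and not requested_manually:
            print(f"Skipping {env}: it is only updated on request")
            return
        if env in frozen:
            raise RuntimeError(f"{env} is frozen for testing; {version} not delivered")
        print(f"Deploying {version} to {env}")

    deploy("dev", "1.4.2")        # CI delivers every change here
    deploy("preprod", "1.4.2")    # skipped: QA environment is updated on request only
    try:
        deploy("preprod", "1.4.2", requested_manually=True)
    except RuntimeError as err:
        print(err)                # blocked: the environment is frozen during tests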

Romeo Ninov

To add to Romeo Ninov's answer, internally within an environment you need to try to separate out the applications as much as possible. This is partially why Docker has been so successful for dev/test: it lets you almost pretend that you aren't sharing an environment at all.

The other option is to have very clearly defined servers on which the applications run, separate from the rest of the infrastructure that makes up your environment. I.e. all the environment management or enablement machinery goes on separate, long-lived servers. Then you hook in new short-lived servers based on a known image to test an application, and if any changes are made to the base image, you need to apply those changes everywhere, for every new component, which means testing the changes against everything.
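
A minimal sketch of that short-lived-server idea, here using the Docker SDK for Python; the image name, container name and port are assumptions for the example:

    # Hypothetical: spin up a short-lived, isolated instance of one application
    # from a known, versioned image, point the testers at it, then throw it away
    # so it never interferes with anyone else's environment.
    import docker  # Docker SDK for Python (pip install docker)

    client = docker.from_env()
    container = client.containers.run(
        "registry.example.com/orders-app:1.4.2",  # known base/app image (made-up name)
        name="orders-app-testrun-42",
        ports={"8080/tcp": None},                 # let Docker pick a free host port
        detach=True,
    )
    try:
        container.reload()                        # refresh attributes to read the assigned port
        host_port = container.attrs["NetworkSettings"]["Ports"]["8080/tcp"][0]["HostPort"]
        print(f"Short-lived test instance listening on localhost:{host_port}")
        # ... run the tests against this instance ...
    finally:
        container.remove(force=True)              # short-lived: gone once testing is done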

If an appdev team asks for a change that breaks someone else's application, then tough luck: they need to create a mitigation in their own code and keep their specific requirements separate from the environment's offering.

hvindin

If you're doing CI/CD, that implies that there are some automated tests happening (CI) prior to deployment (CD). If you're finding a lot of issues in your test environment, that means they aren't being caught by the tests being run prior to deployment; this indicates insufficient automated testing. If the developers are having issues where defects are cropping up in the test environment(s), they need to improve their automated test suites in order to prevent this. This will also improve quality and reliability overall, all the way through into production.
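
A minimal sketch of that gate, assuming a pytest-based suite and a placeholder deploy step (both are illustrative, not a prescription for any particular tooling):

    # Hypothetical CI gate: changes only reach the shared test environment if the
    # automated test suite passes first, so fewer defects surface there.
    import subprocess
    import sys

    def run_tests():
        """Run the automated test suite (CI); a non-zero exit code means failure."""
        return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode == 0

    def deploy_to_test_env(version):
        """Placeholder for the actual deployment step (CD)."""
        print(f"Deploying {version} to the shared test environment")

    if run_tests():
        deploy_to_test_env("1.4.3")
    else:
        sys.exit("Tests failed; nothing deployed, the test environment stays untouched")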

Adrian