
In feedback on a (deleted) question I asked here last year, I was told that there is no easy, general way to do software testing. We may find prepared test cases for protocols, but in most cases the tests are based on the requirements, and for that reason most of them are unique.

Today I was preparing a thesis subject for a student in the field of automated testing (measurement technology, NOT software testing), and I came across the site Standardized Test Information Interchange.

This tells me that there is indeed some intention to standardize software testing.

Therefore I'm interested in which automated testing technologies, standards, and best practices should be applied in a software development effort so that it stays maintainable in the long run, assuming no previously chosen automated testing solution constrains the engineering decisions.

The original text from the site mentioned in this question:

Interoperability – Standardized Test Information Interchange

Have you ever tried switching from one automated testing tool to another, but decided not to make the move because you already had too much time invested and too many automated test cases built with the old solution?

Hundreds of automated software testing tool solutions currently exist, and each provides its own development language for automating test cases. One tool might use VBScript; another might use a proprietary scripting language; while yet another might let you choose from a multitude of programming languages to create your automated test cases.

The challenge with this non-standardized approach to automated test case development across the automated testing tool community is that there is no interoperability and no data exchange capability among automated testing solutions. If you wanted to switch from tool A to tool B, you would have to recreate all your tests in tool B; no standardized approach exists to automate this process.

To address this and other interoperability challenges, we at IDT, along with others, have proposed a standard to the OMG (www.omg.org) called the Test Information Interchange for Automating Software Test Processes for software systems (TestIF for short).

The goal of this standard is to achieve a specification that defines the format for the exchange of test information between tools, applications, and systems that utilize it. The term “test information” is deliberately vague, because it includes the concepts of tests (test cases), test results, test scripts, test procedures, and other items that are normally documented as part of a software test effort.

The long term goal is to standardize the exchange of all test related artifacts produced or consumed as part of the testing process.
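To make the idea concrete for myself, here is a purely hypothetical sketch of what a tool-neutral test case record could look like, so that any tool could import or export it. This is only my own illustration of the interchange idea, not the actual TestIF format:

```python
import json

# Purely hypothetical, tool-neutral test case record; an illustration of the
# interchange idea, NOT the actual TestIF format.
test_case = {
    "id": "TC-0001",
    "title": "Login rejects a wrong password",
    "preconditions": ["User 'alice' exists"],
    "steps": [
        {"action": "open the login page", "expected": "login form is shown"},
        {"action": "submit user 'alice' with a wrong password",
         "expected": "error message is shown and no session is created"},
    ],
    "result": None,  # to be filled in by whichever tool executes the test
}

print(json.dumps(test_case, indent=2))
```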

SchLx

1 Answer


I think "ramping up testing" is more of a subgoal than a goal. The actual goal of all testing is to mitigate risk.

To mitigate risks, you need to know what those risks are. So the first step is a risk assessment process, which usually results in a risk matrix. I would pull together a group of key stakeholders and experienced technical staff and brainstorm on what those risks may be, in part based on your history with the software and how well releases have gone.

For each risk, the business will decide the impact (e.g. the impact of a "system down" scenario may be financial or reputational loss) and assign a score (usually 1-5). Meanwhile, the technical team will decide the probability, also scored from 1-5. You then multiply these two factors and sort by the product to get a forced rank ordering (i.e. prioritization) of the risks.
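As a minimal sketch of that calculation (in Python, with made-up risks and scores), the product of impact and probability gives the forced ranking:

```python
# Minimal sketch of a risk matrix ranking; the risks and scores are illustrative only.
risks = [
    # (risk description, business impact 1-5, technical probability 1-5)
    ("System down during peak hours", 5, 2),
    ("Data migration corrupts records", 4, 3),
    ("UI layout breaks on new browser", 2, 4),
    ("Report totals off by rounding", 3, 1),
]

# Risk score = impact * probability; sort descending to get the prioritization.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for desc, impact, prob in ranked:
    print(f"{impact * prob:>2}  {desc}  (impact={impact}, probability={prob})")
```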

Then for each risk you must have a mitigation strategy. Many of the risks will be mitigated by a certain type of testing: functional testing, performance testing, failure mode testing, integration testing, data migration testing, penetration testing, etc. The team must then develop a test plan that provides confidence to stakeholders that the risk is mitigated.
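Continuing the sketch above (still with hypothetical risks), the outcome of this step is essentially a mapping from each ranked risk to the kind of testing that mitigates it, which the test plan then fleshes out:

```python
# Hypothetical mapping from each ranked risk to the mitigating type of testing.
mitigation_plan = {
    "System down during peak hours": "performance testing and failure mode testing",
    "Data migration corrupts records": "data migration testing",
    "UI layout breaks on new browser": "functional (cross-browser) testing",
    "Report totals off by rounding": "functional testing with boundary-value cases",
}

for risk, test_type in mitigation_plan.items():
    print(f"{risk}: mitigated by {test_type}")
```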

One of the key risks is usually "Software is of low quality" or something similar. That specific risk is mitigated by Quality Assurance.

For an orderly QA effort, you must develop quality metrics. If you are starting from zero, I'd suggest gathering data from the existing code base and defect tracking system so that you can establish a baseline; for example, you may find that in your last five releases you had an average of one sev1 defect and ten sev2 defects. You might then set a goal of zero sev1 and fewer than five sev2 defects, that sort of thing. The development and QA teams will then need to come up with a plan to reach that goal.
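Here is a minimal sketch of deriving such a baseline from per-release defect counts; the numbers are purely illustrative:

```python
# Illustrative baseline built from defect-tracking data for the last five releases.
# Each tuple is (release, sev1 count, sev2 count); the numbers are made up.
releases = [
    ("1.0", 2, 12),
    ("1.1", 1, 9),
    ("1.2", 0, 11),
    ("1.3", 1, 8),
    ("1.4", 1, 10),
]

avg_sev1 = sum(r[1] for r in releases) / len(releases)
avg_sev2 = sum(r[2] for r in releases) / len(releases)
print(f"Baseline: {avg_sev1:.1f} sev1 and {avg_sev2:.1f} sev2 defects per release")

# Goals for the next release, agreed with the stakeholders.
goal_sev1, goal_sev2 = 0, 5
print(f"Goal: at most {goal_sev1} sev1 and fewer than {goal_sev2} sev2 defects")
```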

The development team can improve quality with code reviews, automated unit tests, manual unit tests, pair coding, and other techniques. For each of these, goals may be set, e.g. for automated unit testing you may decide that 80% code coverage is required by a certain date. The QA team may use automated or manual smoke and functional tests. You may have specialized resources set up performance and stress tests. The QA and development leads should set up a process for continuous testing, measurement, and reporting.
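For the automated unit testing piece, here is a minimal, self-contained example (using Python's built-in unittest; the function under test is hypothetical) of the kind of test that counts toward a coverage goal:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```

With a coverage tool such as coverage.py, running `coverage run -m unittest` followed by `coverage report` yields the percentage you would track against the 80% target mentioned above.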

Are these test approaches standard? Sort of. The terminology and the general techniques of each kind of testing are going to be more or less consistent. However, the prioritization for each type of testing will vary depending on where your risk areas are and how you ranked them. Some software will skip certain tests completely (for example, you might not do certain kinds of security testing on an internal application), while other tests may take higher priority (testing for system stability on a high-performance 24/7 application, for example). Those choices and decisions will be business- and product-specific, and like all business decisions, are bounded by resource availability, budget, and time constraints. One size does not fit all.

John Wu