8

In the book Software Testing by Koirala and Sheikh, the authors say:

A positive test is when you put in a valid input and expect some action to be completed in accordance with the specification.
A negative test is when you put in an invalid input and receive errors.

Part of my question:

Let's say there is a requirement: If the customer name=="James Bond", do something.

Is testing name <> "James Bond" a negative test, or still a positive test verifying the requirement?

Or take a login function:
Is a correct login a positive test case and an incorrect one a negative test case?
Or is a failed login just confirmation of the requirement that the password must be correct, i.e., confirming that an invalid password will not work?

For example, if the program is supposed to give an error when the person types "101" into a field that should be between 1 and 100, then it is a positive test if the error shows up. If, however, the application does not give an error when the user types 101, then you have a negative test.

superM
  • 7,373
John V
  • 4,946

10 Answers

12

Positive and negative is one nomenclature, true and false another. In general, you have to distinguish two things, which are both of a boolean nature:

  1. Is the test itself written according to the requirements, or violating them? Let me refer to these as true and false from now on.
  2. Is the test run successful or not? Let me refer to these as positive and negative from now on.

What your quote denotes as "positive" and "negative" are in fact only two of the possible four combinations. The quoted phrase "positive" means a true-positive result, i.e. valid input and a successful test result. Accordingly, "negative" means false-negative, i.e. invalid input and a failed test.

Result classifications

Here's a brief description for each of the four possible combinations and what is hidden behind them:

True-positive

The simplest case, in which your input is adhering to the specification and the result is as expected. Not much to say about these really.

True-negative

These are good, too. You have provided valid input, and the failing test tells you that there is something wrong in your product.

False-positive

These are monsters. They show up in your reports as green checkmarks, but what the test actually checks does not adhere to the specification.

False-negative

Again you are testing something which doesn't make sense, and you do it in a way that fails. The good point about these is that you at least become aware of the problem by a failing test.

Application to 101 Example

Let's apply this to your 101 example:

True tests

First, you write two test cases: Test #1 checks that for valid inputs, the user does not see an error message, whereas Test #2 checks that for input outside of the valid range an error is shown. Both tests adhere to the specification, hence, you can only end up with the true-cases below:

  • Test#1 True-positive: I enter 50 and no error shows up
  • Test#2 True-positive: I enter 101 and an error shows up
  • Test#1 True-negative: I enter 50, but an error shows up
  • Test#2 True-negative: I enter 101, but no error shows up

The negative cases both indicate that you have to fix your implementations.
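To make this concrete, here is a minimal sketch in Python of the two true tests. The validate(n) function is a hypothetical stand-in (not from the book) that returns an error message for out-of-range input and None otherwise:

# Hypothetical validator: returns an error message for out-of-range
# input, None otherwise.
def validate(n):
    return None if 1 <= n <= 100 else "value must be between 1 and 100"

def test_1_valid_input_shows_no_error():
    # True test: the specification says 50 is valid, so expect no error.
    assert validate(50) is None

def test_2_invalid_input_shows_error():
    # True test: the specification says 101 is invalid, so expect an error.
    assert validate(101) is not None

If either assertion fails (a True-negative result), it is the implementation of validate() that needs fixing.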

False tests

Now consider two more tests: Test #3 checks that when you input a number between 1 and 100, an error message shows up. And Test #4 checks that no error message is shown when you enter 101. Neither of these tests adheres to the specification! But you can still execute them, because they are executable tests after all. Here are the possible cases:

  • Test #3 False-positive: I enter 50 and an error shows up
  • Test #4 False-positive: I enter 101 and no error shows up
  • Test #3 False-negative: I enter 50, but no error shows up
  • Test #4 False-negative: I enter 101, but an error shows up

As you can see, due to the nature of these tests you will never get a True-X result. The false-negative results are nicer, because they point you to the problem.
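For contrast, a sketch of the two false tests, with the same hypothetical validate(n) repeated so the snippet stands alone; note that the assertions contradict the specification:

def validate(n):  # same hypothetical validator as above
    return None if 1 <= n <= 100 else "value must be between 1 and 100"

def test_3_valid_input_shows_error():
    # False test: expects an error for valid input, against the spec.
    assert validate(50) is not None

def test_4_invalid_input_shows_no_error():
    # False test: expects no error for invalid input, against the spec.
    assert validate(101) is None

Against a correct implementation both tests fail (False-negatives), which at least draws attention to them; if they ever pass, they are False-positives hiding a defect.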

Impact on development

Our problem as developers is that we only get to see the result of test executions, for example in an IDE, as success or failure, which maps to the positive/negative dimension above. You do not get any indication of whether your test is true or false (because it is far from trivial to determine automatically that a test adheres to a specification).

True-negatives are relatively simple to fix: you just fix your code to do what the customer wants it to do.

False-negatives are more trouble: as it is really the test itself that is at fault, you risk losing valuable time trying to turn a False-negative into a False-positive by "fixing" your product code.

False-positives are not a problem at the time they occur, but further down the road. Imagine that all your tests succeed, but several versions later that one test suddenly fails. It has been passing all this time, so you will assume that some recent code change broke it, but in fact the test itself was already broken.

You might argue that the False-X variants of tests should not be created - and of course that's right. But they usually do not come into existence because someone writes them that way; rather, someone wrote them long ago and someone else has since changed the requirements/specification. A simple example would be to change your example to allow 51-150 as valid input. A test that checks for an error on the input 200 remains a True-X test, but the test that enters 101 and expects an error becomes a False-X. Luckily, it will fail.

More problematic is the test that enters 50 and expects no error. Due to the changed specification, 50 is no longer a valid input, but you will not easily find out that your application still accepts it, because your test still says everything is alright. The specification change turned the test from a True-positive into a False-positive.
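A sketch of how that can look in code, assuming (purely for illustration) that the upper bound was updated to 150 but the lower bound was forgotten:

# Implementation after the spec change to 51-150: the upper bound was
# updated, but the lower bound was forgotten.
def validate(n):
    return None if 1 <= n <= 150 else "out of range"

def test_101_shows_error():
    # Stale test, now a False-X: it fails against the updated code,
    # so at least it gets noticed.
    assert validate(101) is not None

def test_50_shows_no_error():
    # Stale test turned False-positive: 50 is wrongly still accepted,
    # and this green test masks the bug.
    assert validate(50) is None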

Lesson to be learned: when changing your specification, make sure to update your tests as well.

References

The nomenclature of true/false positives/negatives comes from statistics, especially medical testing. There, tests are not as deterministic as they are for software, so the field is also interested in sensitivity and specificity.

Another nomenclature (also from statistics) refers to the false-X cases as type I errors (false positives) and type II errors (false negatives).

Frank
  • 14,437
6

I do not think it is a negative test, because the requirement dictates showing an error when number == 101. Also, by typing 101 I am validating the requirement.

If you're testing the requirement that the program produces an error for input of 101, then entering 101 and expecting an error is a positive test.

If you're testing that the program accepts values between 1 and 100, then entering 101 and expecting an error is a negative test.

In other words, the difference between a positive and a negative test depends on exactly what you're testing. A positive test determines whether the desired behavior works correctly, and a negative test determines whether erroneous input fails as it should.

Caleb
  • 39,298
6

Let me try to resolve your confusion in simple words.

A negative test is a test designed to determine the response of the system outside of what is defined. Any kind of input that is not as per the design or requirements of the system is part of a negative test, and your system should be capable of handling it: it should not crash or throw an unexpected error.
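A minimal sketch of such a negative test in Python; parse_quantity is a hypothetical input handler invented here, and the point is only that out-of-spec input must produce a controlled error rather than a crash:

# Hypothetical input handler: returns an int for valid input and
# raises ValueError (a controlled, documented error) otherwise.
def parse_quantity(raw):
    value = int(raw)  # raises ValueError for non-numeric input
    if not 1 <= value <= 100:
        raise ValueError("quantity must be between 1 and 100")
    return value

def test_unexpected_input_is_handled_gracefully():
    # Negative test: input outside the definition must yield the
    # controlled error, never a crash or an unexpected exception type.
    for bad in ["abc", "", "101", "-5"]:
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # expected, controlled failure
        else:
            assert False, f"{bad!r} was accepted but should not be"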

Logicalj
  • 174
4

To me, the difference between positive and negative tests is simple.

A positive test will verify that valid input is correctly processed. A negative test will verify that invalid input causes some validation to fail. (In a positive test, all validations will necessarily pass - if one doesn't, the test fails.)

Or in other words: if all validations are expected to pass for the given input/scenario, the test is positive; if at least one validation is expected to fail, the test is negative.

Therefore, in my understanding the book is wrong regarding the "101" example. In such a case, we would have a negative test for an invalid input, which must pass only if the associated validation (in this case, that the typed number is in the 1-100 range) fails.

This also means that, for some input which is not known to be valid or invalid, the behavior is undefined, with no tests for it.
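A sketch of this reading in Python (using pytest; validate_range is a hypothetical function that raises ValueError for out-of-range input): the negative test passes only if the specific, associated validation fails, not on just any error:

import pytest

# Hypothetical validation that raises for out-of-range input.
def validate_range(n):
    if not 1 <= n <= 100:
        raise ValueError("number must be in the 1-100 range")

def test_positive_all_validations_pass():
    validate_range(50)  # must not raise anything

def test_negative_range_validation_fails():
    # Passes only if the associated range validation fails for 101.
    with pytest.raises(ValueError, match="1-100 range"):
        validate_range(101)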

Rogério
  • 562
3

There is always a lot of semantic confusion when you double your negatives or negate your positives. That's more a limitation of English than of testing theory or methodology. In essence, many scenarios can be described both by what should happen and by what shouldn't happen. One is a "positive" effect (what should happen) and one is a "negative" effect - the error. But in actuality these are two different features you are testing.

In your 1-100 example, for instance, the two features are "only accept input between 1 and 100", and "show an error message when input isn't between 1 and 100". These aren't the positive and negative aspects of the same test, but two different tests. The positive element of the first test is to give an input between 1 and 100 and see that the function works. The negative aspect is to give an input of 101 and see that it doesn't work. That's all.

For the second feature, the positive test would be to give an input of 101, or Int32.MinValue, or 0, or other edge cases, and see the error shown. The negative test would be to give an input of 40, and ensure no error is displayed.
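Sketched as two separate tests per feature (Python, with a hypothetical validate() that returns an error message or None), including the edge cases mentioned above:

def validate(n):  # hypothetical: error message or None
    return None if 1 <= n <= 100 else "input must be between 1 and 100"

# Feature 1: "only accept input between 1 and 100"
def test_accepts_in_range():          # positive element
    assert validate(40) is None

def test_rejects_out_of_range():      # negative element
    assert validate(101) is not None

# Feature 2: "show an error message when input isn't between 1 and 100"
def test_error_shown_for_edge_cases():    # positive element
    for n in (101, -2**31, 0):            # 101, Int32.MinValue, 0
        assert validate(n) is not None

def test_no_error_for_valid_input():      # negative element
    assert validate(40) is None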

3

You're mixing up tests and conditions.

If you look at the illustration in the book referred to, you'll see that Koirala and Sheikh do not talk about a decision branch, but about the system as a whole.

They only look at valid system tests, i.e., black-box tests whose results are correct.

So, in this context,

  • a positive test is when you enter valid data and the system processes it
  • a negative test is when you enter invalid data and the system does not process it (and ideally tells the user about that)

It does not matter how the validity of the data is determined, or what exactly the wording of the requirement is.

nibra
  • 678
1

Firstly (and probably unintentionally), your examples using comparisons tend to confuse the issue: the comparison a <> b, which sounds sort of negative-ish, gets confused with a test of whether the comparison was implemented correctly (a positive test).

So positive tests are where you test to see whether the system implements a use case correctly. (Think of a test for the vending machine use case "customer gets can of coke").

Negative tests are where you test whether exceptions to a use case are handled properly. (Think of an exception in the above use case: "user inputs foreign coins".)

A positive test case passes if the use case succeeds.

A negative test case passes if the use case fails.

And vice versa.
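A sketch of the vending-machine framing in Python; the VendingMachine class and its behavior are invented here purely for illustration:

class VendingMachine:
    # Toy model: accepts only domestic coins, vends a coke for 100.
    def __init__(self):
        self.credit = 0

    def insert_coin(self, value, domestic=True):
        if not domestic:
            raise ValueError("foreign coin rejected")
        self.credit += value

    def buy_coke(self):
        if self.credit < 100:
            raise RuntimeError("insufficient credit")
        self.credit -= 100
        return "can of coke"

def test_customer_gets_can_of_coke():
    # Positive test: the use case succeeds.
    machine = VendingMachine()
    machine.insert_coin(100)
    assert machine.buy_coke() == "can of coke"

def test_foreign_coins_are_rejected():
    # Negative test: the exception to the use case is handled properly.
    machine = VendingMachine()
    try:
        machine.insert_coin(100, domestic=False)
        assert False, "foreign coin should have been rejected"
    except ValueError:
        assert machine.credit == 0  # no credit was granted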

0

Positive testing is the testing of functionality that we expect to work.

Examples:

  1. The user enters the wrong username and password combination.
  2. The user is expected not to get logged in.
  3. Test that the user is not logged in.

Negative testing is the testing of cases that we expect to fail.

Examples:

  1. The user types an invalid character into the password field.
  2. The user is expected to be prompted with an "invalid character" message.
  3. Test that the user is prompted with the "invalid character" message.
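Both example flows as a sketch in Python; check_login, the toy credential store, and the invalid-character rule are all invented here for illustration:

USERS = {"alice": "s3cret"}  # toy credential store

def check_login(user, password):
    if any(c in password for c in "<>"):  # illustrative character rule
        return "invalid character"
    return "ok" if USERS.get(user) == password else "not logged in"

def test_wrong_credentials_do_not_log_in():
    # Positive example: functionality we expect to work (the rejection).
    assert check_login("alice", "wrong") == "not logged in"

def test_invalid_character_is_reported():
    # Negative example: input we expect to fail with the prompt.
    assert check_login("alice", "pa<ss") == "invalid character"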
-1

I think the book is trying to make a distinction where it is not strictly necessary.

Tests should be written to make sure a system behaves according to its specification. It should do so for normal operating conditions as well as in exceptional situations.

I assume the author is making a distinction because it is easy to forget to test the exceptional situations. It is just a little guideline that can help you think about things.

Here is an example: If you are writing a function that will multiply a matrix with a scalar, most people will write a test with a demo matrix and some number and check that the multiplication result is correct. The book wants you to also be on the lookout for the case when your matrix might not be initialized or your number might be from the wrong ring.
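A sketch of both kinds of tests for that example (Python; scalar_multiply is a hypothetical function invented here):

# Hypothetical implementation: multiplies each entry by the scalar,
# rejecting missing matrices and non-real scalars.
def scalar_multiply(matrix, k):
    if matrix is None:
        raise ValueError("matrix is not initialized")
    if not isinstance(k, (int, float)):
        raise TypeError("scalar must be a real number")
    return [[k * x for x in row] for row in matrix]

def test_multiplies_demo_matrix():
    # The test most people write: a demo matrix, a number, a result check.
    assert scalar_multiply([[1, 2], [3, 4]], 2) == [[2, 4], [6, 8]]

def test_rejects_uninitialized_matrix():
    # The exceptional case the book wants you to look out for.
    try:
        scalar_multiply(None, 2)
        assert False, "expected a ValueError"
    except ValueError:
        pass

def test_rejects_scalar_from_wrong_ring():
    # E.g. a complex number where a real scalar is expected.
    try:
        scalar_multiply([[1]], 1j)
        assert False, "expected a TypeError"
    except TypeError:
        pass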

Knowing or defining whether a test case is positive or negative is not really necessary or useful. It's just a guideline to help you avoid missing classes of inputs. It is basically a simple way of creating equivalence classes: http://en.wikipedia.org/wiki/Equivalence_partitioning

Sarien
  • 751
-1

"The authors says the field should be between 0-100 and if not, show the error. Then he says showing the error when you type 101 is positive and if the error does not appear, it is negative. Does not make sense IMO."

If a test passes (whatever it tests), it is positive (success). If not, it is negative (unexpected behavior).

As already said, a test's goal is to determine whether your code should PASS or FAIL. Failing may be normal. Personally, I categorize my tests as OK (should pass) and KO (should fail), like this:

LoginSuccessTestOK()
{
    // Assert that the login succeeds in the expected manner
}

LoginFailTestOK()
{
    // Assert that the login fails in the expected manner
}

LoginSuccessTestKO()
{
    // Assert that the login should succeed but fails in an expected manner
}

LoginFailTestKO()
{
    // Assert that the login should fail but succeeds in an expected manner
    // May not be applicable in this case
}

If these tests pass (green), they are positive. If one doesn't (red), it is because of unexpected behavior.

JoeBilly
  • 521