20

Let's consider the following test.

[Fact]
public void MyTest()
{
    // Arrange
    var sut = new SystemWeTest();

    // Act
    var response = sut.Request();

    // Assert
    response.Should().NotBeNull();
    response.ResponseCode.Should().Be("1");
    response.Errors.Should().BeEmpty();
}

I have argued with a few colleagues that it's pointless to assert that 'response' is not null if you are going to then assert on some of its internals. Of course, if the only thing you are interested in is checking that the object is not null, and nothing more, then that's fine.

My thinking is that each of the following assert statements is in fact an implicit assertion that 'response' is not null. If it is null, the statement would throw a NullReferenceException and make the test fail, as expected.

The only benefit I see from doing this is a somewhat clearer message on why the test failed. You'd get the test-framework-specific exception thrown by the 'NotBeNull' assertion instead of a generic 'NullReferenceException'. I don't feel this is useful, as you are getting the same information either way.
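Here's a minimal sketch of the two failure modes I mean (same xUnit + FluentAssertions setup; the Response type is a stand-in, and the exact assertion message wording varies by FluentAssertions version):

using FluentAssertions;
using Xunit;

public class Response
{
    public string ResponseCode { get; set; }
}

public class NullFailureDemo
{
    [Fact]
    public void ExplicitCheck_FailsWithAssertionMessage()
    {
        Response response = null; // stand-in for sut.Request() returning null

        // Fails with something like "Expected response not to be <null>."
        response.Should().NotBeNull();
    }

    [Fact]
    public void ImplicitCheck_FailsWithNullReferenceException()
    {
        Response response = null;

        // Throws a bare System.NullReferenceException before the assertion runs
        response.ResponseCode.Should().Be("1");
    }
}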

Am I missing something here? Does checking for null have any benefits in the example provided?

BAmadeusJ
  • 326

8 Answers

75

Does checking for null have any benefits in the example provided?

The only benefit I see from doing this is a somewhat clearer message on why the test failed.

You have successfully answered your own question. Clear error messages on test failures are very useful.

22

Code should be unambiguous where possible

If a random NullReferenceException is thrown in a test, I have to start investigating whether the program is at fault, or whether the writer of the test simply missed the fact that a variable could be null. If someone writes "shouldNotBeNull", the intent is clear: this variable can and should never be null.

This of course assumes people don't just copy and paste a pattern blindly, but use the check where it makes sense and an if-null branch where null is a legitimate outcome.
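A minimal sketch of that distinction, using a hypothetical cache lookup (the names are invented for illustration; assuming FluentAssertions):

using FluentAssertions;
using Xunit;

public class CacheEntry
{
    public string Value { get; set; }
}

public class IntentDemo
{
    // Hypothetical lookup used only for this sketch.
    private static CacheEntry Lookup(string key) =>
        key == "known-key" ? new CacheEntry { Value = "cached" } : null;

    [Fact]
    public void KnownKey_EntryCanAndShouldNeverBeNull()
    {
        var entry = Lookup("known-key");

        // Intent is unambiguous: a null here is a bug.
        entry.Should().NotBeNull();
        entry.Value.Should().Be("cached");
    }

    [Fact]
    public void UnknownKey_NullIsTheExpectedOutcome()
    {
        var entry = Lookup("unknown-key");

        // Here null is a legitimate result, so assert it directly.
        entry.Should().BeNull();
    }
}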

Falco
  • 1,299

8

I have written a lot of unit tests.

I want basic facts to be tested first. I wrote those tests first, and I ensured my code passed them first.

When I write more advanced tests, my goal is to test those more advanced situations. They may also test the basic facts, but that is at best a side effect.

I may refactor them later. Maybe the rules have changed, and now the proper way to get the response code is processor.responseCode(response). And the responseCode method on processor handles null.

So my unit test now reads

processor.responseCode(response).Should().Be("1");

My refactoring of the test removed the "the response isn't null" requirement. It now tests only that the responseCode is "1".

If I actually cared that response cannot be null, then this perfectly correct refactoring made me lose that test. If I didn't care whether response is null or not, then the previous test was in error and shouldn't have checked it!
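A minimal sketch of how the requirement disappears (the Processor here is hypothetical, a null-tolerant wrapper as described above; assuming FluentAssertions):

using FluentAssertions;
using Xunit;

public class Response
{
    public string ResponseCode { get; set; } = "1";
}

public class Processor
{
    // Tolerates null, as the refactored rules allow.
    public string responseCode(Response response) =>
        response == null ? "0" : response.ResponseCode;
}

public class RefactorDemo
{
    [Fact]
    public void NullResponseNowFailsOnTheCodeAlone()
    {
        var processor = new Processor();
        Response response = null; // a null the old test would have caught

        // Before the refactor, response.ResponseCode would have thrown a
        // NullReferenceException. Now this simply reports "0" instead of "1";
        // the implicit "response isn't null" requirement is gone from the test.
        processor.responseCode(response).Should().Be("1");
    }
}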

However, that later change carries a risk: my production code may have relied upon response never being null, and may now behave badly. My decision to eliminate that test from the unit test thus requires me to audit the uses of the response object.

A second reason to make the null check explicit is that complex tests can lead to complex errors. Having to work out that the failure of a line testing for the value "1" is really caused by one specific parameter being null is extra work.

With the check on its own line, that line tests one thing and clearly explains what went wrong. In some frameworks I'll even exit early, so as not to spam the output with a bunch of additional errors stemming from the fact that the value was null and the later tests were expected to fail.

A faster, easier to understand unit test failure makes iteration faster. When I rewrite the .Request() and it hits the null error, I can see my mistake instantly, and make it non-null. Then I realize I missed some parameters, and fix those, and repeat.

In this ideal world, the unit test writes my code for me, or at least tells me what feature to add next. The smaller the piece of code, the faster the iteration, and the less likely I (or the junior developer working on it) am to get lost.

Yakk
  • 2,209
6

I have argued with a few colleagues that it's pointless to assert that 'response' is not null if you are going to then assert on some of its internals.

It might be "technically" pointless to do so, but code is about much more than being "technically" right.

Code conveys intent, and asserting that response is not null says sut.Request() should never return null.

In addition to that, it adds a faster fail path, which is always¹ good.

¹ most of the time.

Marco
  • 377
2

My 2 cents:

Besides the clearer error message, I would say there is a benefit if your code internally logs when the response is null; that behaviour deserves its own explicit test.

2

Whenever you're writing a test and you know a nullable object shouldn't be null, you can just add the check without trying to work out whether an implicit check already exists. You also won't lose test coverage if all of the implicit checks get removed for whatever reason (e.g. changing requirements, a different test structure, copy-pasting).

2

I’m going to agree with everyone else, but with a slightly different focus: unit tests are a way to assert your understanding of how the code under test is supposed to behave and what some data is supposed to look like afterwards, and to convey that same understanding to someone who just reads the test.

In a language like JavaScript, you might assert that the result of a function call is a number or a string; in another language the compiler might take care of that for you by refusing to compile.

You are dealing with C#. If you don't have nullable reference types enabled AND it is your understanding that a particular value should not be null, then you should ABSOLUTELY have a test that verifies that the value is not null, regardless of it coincidentally being tested somewhere else. Now, whether that check should be included in a larger test or live in its own isolated test is a more difficult question. The more complex a test is, the harder it is to understand, but testing for null doesn’t make it that much more complex.

And then of course there is the question of what to do when nullable is enabled. Your understanding is that the value shouldn’t be null and the compiler only semi-enforces that. What to do?

Asserting that it’s not supposed to be null in that case doesn’t add to the hypothetical reader’s understanding; the language itself conveys that information. Include a specific test for null only if it has been an actual problem and you are testing for regression, and note that fact in the comments.
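A minimal sketch of what "the language conveys that information" means (hypothetical signatures, assuming C# 8 or later with nullable reference types enabled):

#nullable enable

public class Response
{
    public string ResponseCode { get; set; } = "";
}

public class SystemWeTest
{
    // Declared non-nullable: the compiler warns on any path that could
    // return null, so a NotBeNull test largely restates the signature.
    public Response Request() => new Response();

    // Declared nullable: callers (and tests) must account for null.
    public Response? TryRequest() => null;
}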

jmoreno
  • 11,238
1

Automated test-parsing tools can use the difference to distinguish between sut.Request() unexpectedly returning an empty non-null value and sut.Request() returning a null response.

When evaluating the following line, there's a problem in determining what is null:

response.ResponseCode.Should().Be("1");

Namely, both "response" and "ResponseCode" could be null, and you'll need to dig into the stack trace to try to find out which one, specifically, is null.

Granted, that probably shouldn't happen: any response that is given, if it is (for example) an HttpResponse, should have a response code, so a null should only come up on the response object itself.

But if there's something wrong with the sut.Request() call, either in the code itself or in the specific way it requests a resource, that could well indicate an environment issue (e.g. a network service not started, or something else odd going on).

Checking for null on the response itself beforehand narrows a NullReferenceException on that line down to one cause: if the response it's working with is null, you can indicate that something is wrong with the environment it interacts with, or with the endpoint it's trying to reach, as opposed to "the ResponseCode is null", which indicates that the response itself exists but the ResponseCode does not.

If you've encountered these types of issues intermittently before, you can have your test structure or reporting system automatically classify them as two different errors, indicative of different approaches to fixing them.

This can be useful if your test code runs as a monitoring aspect of a larger system: for diagnostics, you can take those two different stack traces and definitively indicate what has historically been the issue behind each.
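A minimal sketch of giving the two failures distinct, machine-matchable messages, using FluentAssertions' optional because argument (the types and reason strings here are invented for illustration):

using FluentAssertions;
using Xunit;

public class Response
{
    public string ResponseCode { get; set; }
}

public class DiagnosableTest
{
    // Hypothetical stand-in for the real sut.Request() call.
    private static Response GetResponseSomehow() => new Response { ResponseCode = "1" };

    [Fact]
    public void FailuresCarryDistinctReasons()
    {
        Response response = GetResponseSomehow();

        // A failure here includes "because the environment or endpoint may be down"
        response.Should().NotBeNull("the environment or endpoint may be down");

        // A failure here includes "because the response exists but carries no code"
        response.ResponseCode.Should().NotBeNull("the response exists but carries no code");
        response.ResponseCode.Should().Be("1");
    }
}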