
Unit testing is something I've come to love after forcing myself to do it in my projects (and doing it at work), having seen the massive rewards it offers down the road when refactoring and verifying that things work the way they should.

Right now I'm writing a software renderer, and I'm unsure whether there's a way to set it up to be tested. Here's an example of where I'm stuck:

When scanning a polygon, it's most convenient to set the z-buffer and write the pixel from the texture right there as each scan is generated; when you're rendering a ton of polygons you need all the speed you can get.

The nice unit-test way would be to return those scans so I could verify that each scan was where it was supposed to be, and check the data set along with it. Then the next component that takes the scans could be tested the same way, and so on.
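To make the idea concrete, here is a minimal sketch of what that testable API might look like. The `Span` struct, the `scanRect` function, and its behavior (scanning an axis-aligned rectangle rather than a real polygon) are all hypothetical, just to illustrate the shape of a scanner that returns its scans instead of rasterizing immediately:

```cpp
#include <vector>

// Hypothetical span: one horizontal run of pixels produced by the scanner.
struct Span {
    int y;       // scanline row
    int xStart;  // inclusive left edge
    int xEnd;    // inclusive right edge
};

// Testable scanner sketch: returns the spans for an axis-aligned rectangle.
// A real polygon scanner would walk edges; the point here is the API shape.
std::vector<Span> scanRect(int x0, int y0, int x1, int y1) {
    std::vector<Span> spans;
    for (int y = y0; y < y1; ++y)
        spans.push_back({y, x0, x1 - 1});
    return spans;
}
```

A test can then assert on the returned spans directly, with no pixel buffer involved.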

The problem is that scanning tons of polygons would mean returning a lot of ranges in various cases, which adds not only memory usage but extra function calls that scale badly as users move to higher resolutions. Doing it all in one pass keeps the renderer from choking, especially in polygon-intensive scenes.

I thought of a few possible ways around this but they seem to all have their own drawback:

  • Just do it the optimized way and check the final pixels at the end (which I can intercept and check), but then if things break I'm going to spend a lot of time finding out exactly where something broke.

  • Extend the classes and inspect a stub, or somehow intercept the data before it's passed on to the pixel buffer. However, I'd need to make the methods virtual (this is C++), which could introduce vtable overhead I don't need, unless I'm wrong about that. I'm leaning towards this since I don't know whether the vtable is actually that expensive, but I could be dead wrong when it comes to massive polygon rendering.

  • Just eat the performance penalty and optimize at the very end of the project, after it's been tested enough, but this doesn't sound very TDD, since refactoring that late could make a mess.

I want to make sure that all my components work, but so far I have to test them bundled together, which doesn't feel like proper unit testing: if something goes wrong, I don't know whether the polygon edge scanner is broken, the scanline algorithm is broken, the z-buffer was set wrong, or the vector/cross/dot product math is done wrong, etc.

I'm also not a fan of taking a screenshot at the end and checking, with some tolerance, whether the renderer is working properly (more of an integration test, I guess). I'll probably do it anyway, but it feels too fragile to me; I like knowing "okay, this submodule just broke" rather than "this entire pipeline just broke, gonna get my coffee and get comfy for the next few hours trying to find out where."

Assuming I'm not missing something obvious (some "forest for the trees" thing), what is a proper way to go about this?

Water

2 Answers


Use option 2 ("intercepting the data before passing it on to drawing to a pixel buffer"), but not via virtual methods. Instead, use a compile-time mechanism that activates the "data inspector" or "data logger" only during your unit tests.

For example, assuming your unit tests run only in the "debug" build of the application, use a preprocessor conditional like #ifdef _DEBUG to add inspection or logging calls only in debug mode. In "release" mode, which you use for deploying your final code, you will then have no performance penalty. If you need to run your tests in release mode for some reason, you could introduce a dedicated "Test" configuration that is almost identical to "Release", with all other compiler and optimization flags the same, differing only in the added inspection calls.
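A minimal sketch of what this could look like. The macro name `UNIT_TEST`, the `INSPECT_SPAN` hook, and the `Span`/`rasterizeSpan` names are all hypothetical; the answer's `_DEBUG` would work the same way:

```cpp
#include <vector>

// Defined here only so this sketch is self-contained; in a real build you
// would pass -DUNIT_TEST (or rely on _DEBUG) only in the test configuration.
#define UNIT_TEST

struct Span { int y, xStart, xEnd; };

#ifdef UNIT_TEST
// Test builds only: record every span the rasterizer emits.
std::vector<Span> g_inspectedSpans;
#define INSPECT_SPAN(s) g_inspectedSpans.push_back(s)
#else
#define INSPECT_SPAN(s) ((void)0)  // expands to nothing in release builds
#endif

void rasterizeSpan(const Span& s /*, z-buffer, texture, ... */) {
    INSPECT_SPAN(s);
    // ... set z-buffer entries and write texels for s.xStart..s.xEnd ...
}
```

The test build can then assert against `g_inspectedSpans`, while the release binary contains no trace of the inspection code.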

If you don't like preprocessor macros, you can accomplish the same thing with template metaprogramming.
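One template-based way to do this is a policy parameter: the rasterizer is templated on an inspector type, and the release build uses a no-op policy the compiler optimizes away entirely. All names here (`NullInspector`, `RecordingInspector`, `rasterizeSpan`) are illustrative, not from the original code:

```cpp
#include <vector>

struct Span { int y, xStart, xEnd; };

// Release-build policy: every call is an empty inline function the
// compiler removes entirely.
struct NullInspector {
    void onSpan(const Span&) {}
};

// Test-build policy: records spans for later verification.
struct RecordingInspector {
    std::vector<Span> spans;
    void onSpan(const Span& s) { spans.push_back(s); }
};

// The rasterizer is parameterized on the inspector: no vtable, no branch.
template <typename Inspector>
void rasterizeSpan(const Span& s, Inspector& inspector) {
    inspector.onSpan(s);
    // ... z-buffer and texture writes for s ...
}
```

Tests instantiate the function with `RecordingInspector`, production code with `NullInspector`; the two instantiations are compiled independently, so the no-op version pays nothing.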

Doc Brown

I don't think there's one "proper" method for anything performance-sensitive, but I can tell you what I've had good luck with.

I created a Journal class, an abstract base class that I pass along to any function like your polygon-scan code. Whenever I finish a scan, I check whether the journal I've been given is non-null; if it is, I call a method on it to report the scan results.

By doing this, I get access to the scan-level testing data: I simply create a Journal instance to collect it. When rendering normally, that pointer is null, so the only cost is the extra if statement.

This journal can be passed to your function in any way you find reasonable. You might pass it as a function argument. You might pass it as a global. You might use some exotic dynamic-scoped-variable if need be. Whatever approach makes the most sense for your particular program.
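A minimal sketch of this journal approach, with hypothetical names (`Span`, `CollectingJournal`, `rasterizeSpan`) standing in for whatever your scanner actually produces:

```cpp
#include <vector>

struct Span { int y, xStart, xEnd; };

// Abstract journal, as described: the scanner reports each finished scan to it.
class Journal {
public:
    virtual ~Journal() = default;
    virtual void reportSpan(const Span& s) = 0;
};

// Test-only implementation that simply collects the spans.
class CollectingJournal : public Journal {
public:
    std::vector<Span> spans;
    void reportSpan(const Span& s) override { spans.push_back(s); }
};

// Rendering code takes a nullable journal; normal rendering passes nullptr,
// so the only cost on the hot path is one null check per span.
void rasterizeSpan(const Span& s, Journal* journal /*, buffers, ... */) {
    if (journal) journal->reportSpan(s);
    // ... z-buffer and texture writes for s ...
}
```

Note the virtual call only ever happens in test runs; in normal rendering the branch is taken on a null pointer and no dispatch occurs.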

If you profile your code and find that the if statements are slowing you down, there are a few things you can do:

  • Collect the journal data and pipe it out in conveniently sized chunks. This can save time because, in some extreme situations, saving off the data is cheaper than the if statement itself.
  • Set a const bool at the start of the function recording whether the journal is null. Quite often a good optimizing compiler can see this and generate two copies of the code, one with journaling and one without.
  • As a last resort, for the highest-performance code, duplicate it: one version with journaling and one without (and then manage the version-control headaches of the duplication).
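The const-bool trick from the second bullet looks like this in a sketch (same hypothetical `Journal`/`Span` names as described above, repeated here so the snippet stands alone):

```cpp
#include <vector>

struct Span { int y, xStart, xEnd; };

class Journal {
public:
    virtual ~Journal() = default;
    virtual void reportSpan(const Span& s) = 0;
};

class CollectingJournal : public Journal {
public:
    std::vector<Span> spans;
    void reportSpan(const Span& s) override { spans.push_back(s); }
};

// Hoist the null test into a const local: the optimizer can see that
// `journaling` never changes inside the loop and may split the loop into
// a journaling version and a journal-free version.
void rasterizeSpans(const std::vector<Span>& spans, Journal* journal) {
    const bool journaling = (journal != nullptr);
    for (const Span& s : spans) {
        if (journaling) journal->reportSpan(s);
        // ... z-buffer and texture writes for s ...
    }
}
```

Whether the compiler actually performs that loop unswitching depends on the optimizer, which is why profiling first, as the answer says, is the right order of operations.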
Cort Ammon