
I'm developing a FOSS library which I am pretty fond of. More specific details probably don't matter.

I've already "finished" a feature set I consider sufficient for an initial release. However, some of the features were added only for completeness of the API; I never used them in my own application code (which gave rise to the library), and they've never been tested at all. The ones I have used have only been tested through that use, which did not focus on questionable corner cases.

So here is the vicious cycle:

  1. To be released, even initially, the library code needs to be functionally correct and well-performing - if not perfectly, then at least to a good degree.
  2. To ensure correctness (not to mention performance), testing is necessary; at least, unit test coverage.
  3. For people to get involved in writing and running tests for the library, and in resolving the issues which come up during testing (and they do naturally come up), they have to like it and be interested in it.
  4. People won't get to know and perhaps become fond of the library before it's released (for some definition of release).

It's 1 -> 2 -> 3 -> 4 -> 1, and so on, round and round in a vicious cycle.

So far, I've spent quite some time writing unit tests myself, and at this rate it seems I'll release when I'm retiring and the whole thing is irrelevant.

My question: How do I break this vicious cycle? Or in other words: how can I get some potential users to help me with the annoying and somewhat boring work of writing and running unit tests (and perhaps resolving the issues that come up)?


Edit: I ended up writing the tests myself. It was "just" 108,849 different assertions... :-(

einpoklum
  • 2,752

4 Answers


Open Source works best when there is a community behind the code, which means getting people interested in it. I would do a pre-alpha release and be really clear about the state of things - what you are confident in and what you are not. Then try to get people to start helping in the community.

As you build out the test suite (hopefully with the help of your community) to a level where you are confident in the API coverage, you can start beta releases. And when you are finally assured that the release is stable and performant, you can do your first official release.

The bottom line is: release early, release often. And drum up traffic for your project wherever it is appropriate. This is the start of building a community. The other part is not having the thing so polished that there is nothing left to do.

The release structure would be like this:

  • SNAPSHOT -- you get what you get, no guarantee of stability or safety
  • Alpha -- functionally complete, but could have serious bugs
  • Beta -- should be free of serious bugs, but with no guarantees of performance
  • Release Candidate (RC) -- we think we are ready to do a real release, a way to make sure your release packaging is correct
  • Final -- the community is happy to sign off on this release

Many communities do a daily SNAPSHOT release. Each stage in the release structure represents a greater promise of stability and correctness. (Hopefully) no one is going to assume that any given library is completely and utterly free of problems. However, the care you take in putting that final release seal on a particular version does reflect the level of confidence your users can have in your software.


On the testing aspect: if this is a library, test the API. Test expected inputs and expected error cases. Use your unit tests for internal testing, but you need to build up a test suite for the API itself.
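As a minimal sketch of what API-level tests (as opposed to internal unit tests) can look like - the `slugify` function here is a made-up stand-in, not the asker's actual library:

```python
# Hypothetical library function standing in for a public API under test.
def slugify(text: str) -> str:
    """Convert a string to a lowercase, hyphen-separated slug."""
    if not isinstance(text, str):
        raise TypeError("slugify expects a string")
    return "-".join(text.lower().split())


def test_expected_input():
    # The documented, common-case behavior of the public API.
    assert slugify("Hello World") == "hello-world"


def test_expected_error():
    # The documented error case: non-string input raises TypeError.
    try:
        slugify(None)
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError for non-string input")


test_expected_input()
test_expected_error()
print("API tests passed")
```

The point is that these tests exercise only the public contract - inputs, outputs, and error behavior - so they remain valid even if the internals are rewritten.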


I believe you should look at the technology adoption life cycle:

[Image: the technology adoption life cycle curve]

For you, the innovators and early adopters are the most important groups.

Innovators are tinkerers. They play around and try things. They don't care much about things working, as they are often willing to fix problems themselves and to take on the risk of things not working. For them, the absence of tests is often not an issue.

It is only when you start getting into early adopters that you need to maintain some level of quality. That is when test automation becomes a good long-term investment. Before that, tests could very well be a liability that prevents you from quickly iterating on new ideas and experimenting.

Early on, you should really focus on figuring out whether your solution is actually useful to people and solves an actual problem. Way too often, people create a solution and then have a hard time finding an actual problem for it to solve. If you truly have a solution to a problem people have, innovators will use it and give you feedback, or even help improve the quality by helping you with tests.

Euphoric
  • 38,149

it's 1 -> 2 -> 3 -> 4 -> 1 and thus on and on in a vicious cycle

It's not a vicious cycle. It has a very clear starting point: you wrote the code.

If you wrote the code, then it stands to reason you understood its purpose. If you understand its purpose, then you can write tests.

Testing is nothing more than checking to see if the written code fulfills the purpose it was written to fulfill.

If you don't know what the purpose of the features is, then you don't need them. It's as simple as that.

If you can explain to me exactly why you actually need the features, then you are inherently explaining to me what their purpose is, and you can write tests for that exact purpose.

A simple example of what I mean:

einpoklum: I can't test this feature.
Flater: Why?
einpoklum: I don't know what to test.
Flater: Then you don't need the feature. Scrap it.
einpoklum: I can't, because users will want this feature because it [sanitizes their input].
Flater: So write a test to see if the feature correctly [sanitizes the input].
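That last exchange can be made concrete. A hedged sketch, assuming the feature in question sanitizes user input for an HTML context (the `sanitize` function is invented for illustration; it is not the asker's API):

```python
import html


def sanitize(user_input: str) -> str:
    # Escape characters that are dangerous in an HTML context.
    return html.escape(user_input)


# The test encodes the feature's stated purpose: input gets sanitized.
assert sanitize("<script>alert(1)</script>") == "&lt;script&gt;alert(1)&lt;/script&gt;"
print("sanitizer test passed")
```

The purpose you articulated ("it sanitizes their input") translated directly into the assertion; that is the whole argument of this answer in one line of code.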


and the ones I have used were only tested through my use, which has not focused on questionable corner cases

Expecting your unit tests to catch every conceivable corner case is an unreasonable standard. You're going to forget things, and that's perfectly normal. If you feel you have written the necessary tests, you're good to go.

Should you stumble on a bug in an uncovered corner case, simply fix the bug and write a test that will, in the future, flag any regression as a test failure.
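For example, a sketch of turning a bug report into a regression test (all names here are hypothetical). Suppose users reported that `average([])` crashed with a `ZeroDivisionError`:

```python
def average(values):
    # The fix: define the empty-input corner case instead of dividing by zero.
    if not values:
        return 0.0
    return sum(values) / len(values)


def test_regression_empty_input():
    # Pins the fixed behavior, so any future regression fails loudly.
    assert average([]) == 0.0


def test_common_case():
    assert average([1, 2, 3]) == 2.0


test_regression_empty_input()
test_common_case()
print("regression tests passed")
```

The regression test costs a few lines and guarantees that the corner case, once discovered, stays fixed.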

Of course there is a reasonable line to draw on how many tests you write in the beginning, but that line is both subjective (to the developer) and contextual (to the use case). There is no universally correct line here.

Especially for FOSS, it's perfectly acceptable to cover only common cases and obvious corner cases.


The question is built on three premises:

  1. I don't want to release untested code
  2. I want to release
  3. I don't want to write the tests

There is a clear contradiction here. You can pick any two and be fine, but the third will always cause a contradiction.

Flater
  • 58,824

It is highly unlikely you will ever get anybody to write unit tests for your code for free - even if they like your library and find it useful.

Think about why you don't write the tests yourself: you find it "boring and annoying". Nevertheless, you are the one who has the most to gain from the unit tests, and they are much easier for you to write than for anybody else, since you implemented the functionality. Not many people find it fun to write unit tests for code they didn't write themselves.

I suggest you either remove the functionality which you don't use (following the YAGNI principle), or adopt a test-driven development approach.

JacquesB
  • 61,955