
Well, consider a relatively simple server for a SPA, written in Node.js with Express and Knex on the backend.

Now, if we do it properly, each function will have unit tests (as they are all visible to the end user). But apart from the choice of which functions to write unit tests for, is abstracting away the database actually worth the effort?

A simple function might look like:

function doSomething(request_object, response_object) {
    const request_data = analyze_request(request_object);
    const db_data = grabFromDatabase(request_data); // often just a Knex query
    const result = reformatDBQuery(db_data);
    response_object.give_response(result);
}

Now, to do it properly, one would create mocks/stubs that either mock the Knex library itself or mock the interface methods we use. Both cases require a lot of writing, often so much that we almost end up recreating the library ourselves.

The other option is to create stubs only for the exact function arguments required, but this would make the tests quite brittle, and the most common error I experience (by a large margin) wouldn't really be caught this way: unexpected argument values passed to some function that does work, but gives nonsensical results that fail somewhere else.

So a third option is possible: actually just use a database for testing. However, this means the unit test is no longer a unit test; it's really just an integration test. And since this is about the only "complexity" that isn't trivial code, it makes little sense to even have unit tests (the other tests are so trivial they can easily be folded into the integration tests).

Especially since opening a real database connection in JavaScript is quite fast anyway, while writing mocks for every possible function in a library is very, very time consuming.

So what do others do, especially with regard to "simple servers" where the complexity is found only in database calls?

EDIT: to clarify, I understand the case for unit tests on top of integration tests. I wonder, however, what people do in practice given the time constraints typically involved, and how in reality they sidestep the problem of very time-consuming mocks.

EDIT2: to make this concrete:

Say I have a function whose purpose is "insert a new row with the current session's user id plus some other data into a table", which would look like:

async function createSheet(uId: number, data: number) {
    await express.locals.dbObject.insert({user: uId, data: data}).into("sheets");
}

Now a mocking library (I found) would do something like:

expect(knexDB).toHaveQueried(<exact query string here>).

But that query string (the result of Knex's query builder) will insert the data object's keys in "some" order when generating the string, which means I have to look into the implementation of Knex itself to find out which order, and that is very much an implementation detail.

While I'm basically only interested in the fact that I inserted "something" for a given user id, and that I can retrieve it later. So a proper "mock" would not only create stubs and let me inspect the exact query results; it would also have to be an in-memory database mimicking all the functionality, so the tests can be descriptive.

paul23
  • 1,121

1 Answer


The usual requirements for using a DBMS in automated testing are

  • isolated testing by different developers, each one with their local db

  • individual, stable test data for each test

  • easy to maintain test data

  • easy to maintain data for the expected outcome of a test, especially in case the system evolves and the data has to be adapted to changed requirements

  • test data which can be versioned in your SCCS

  • spinning up the database and a connection quickly, regardless of whether one runs only one test or hundreds

  • and of course, certain performance requirements, even if there are several hundred tests.

In my experience, fulfilling all these requirements is possible when one can use a very lightweight DB, ideally a single-file DB like SQLite. That lets you start with an individually defined test DB for each test (just copy a template DB file into a working folder), connect instantly and run the tests pretty fast (usually much faster than on a heavyweight multi-user DB like PostgreSQL or MySQL).
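Since the question is about Express/Knex, the template-file pattern could be sketched roughly like this (a setup sketch, not a definitive implementation: it assumes `knex` and the `better-sqlite3` driver are installed, and the file paths are hypothetical):

```javascript
// Per-test setup: copy a versioned template SQLite file into a working
// folder and connect Knex to the copy, so each test starts from identical,
// isolated data. Assumes `npm install knex better-sqlite3`.
const fs = require("node:fs");
const knex = require("knex");

function freshTestDb(testName) {
  const workingCopy = `./tmp/${testName}.sqlite`;
  // The template lives in the repo, so test data is versioned in your SCCS.
  fs.copyFileSync("./fixtures/template.sqlite", workingCopy);
  return knex({
    client: "better-sqlite3",
    connection: { filename: workingCopy },
    useNullAsDefault: true, // required by Knex for SQLite
  });
}

// Usage in a test:
//   const db = freshTestDb("createSheet-inserts-row");
//   await createSheet(db, 42, 7);
//   const rows = await db("sheets").where({ user: 42 });
//   ...assert on rows...
//   await db.destroy();
```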

Using this approach can indeed give you "the most bang for the buck", and it can sometimes be more efficient than abstracting all database calls away completely. Whether it really is in your specific situation is something you have to measure yourself; there is no hard-and-fast rule about which approach is "better".

If you cannot go that route because SQLite is not powerful or compatible enough for your case, you can try to fulfil those requirements with some client/server DBMS, but this will usually require more effort, and you need to try out (or estimate) whether that effort is worth it in your case.

Doc Brown
  • 218,378