
I appreciate this is a rather odd question, so I will try to clarify as much as possible. Please also be assured this is purely for my own education; I'm not about to rush off and do crazy things in our software on the back of it.

I have a customer requirement for a transaction time of <10ms on a system that is based around an SQL database - in our specific implementation it is Oracle DB. I'm aware that this is not a useful or meaningful requirement, so with my business hat on I'll be dealing with that. I fully expect that the requirement will be revised to something more useful and achievable.

However, I am curious on a technical level. Could you squeeze transaction time on an SQL DB down below 10ms? Let's be generous and say this is pure SQL execution time: no comms, no abstraction layers, etc. Right now, running select 1 from dual on one of our systems gives a reported execution time of 10-20ms* and I'd assume that's about the simplest query possible. What, if anything, might you do to reduce that time (a) within Oracle/SQL or the server environment, or (b) by making a different tech choice? I'd assume a higher clock speed on the CPU might help, but I wouldn't bet on it.

*EDIT: it turns out I had erroneously included the fetch time in my measurement above, so the actual execution time for a trivial query is generally <10ms in our implementation. So in a sense I've answered the question: it is viable. But that doesn't invalidate the discussion below.
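For anyone wanting to reproduce the measurement, something like the following shows the split I was conflating (a sketch, assuming a SQL*Plus session and permission to enable session tracing; tkprof then reports parse, execute, and fetch times separately):

    -- SQL*Plus: client-side wall clock per statement (includes fetch and display)
    SET TIMING ON
    SELECT 1 FROM dual;

    -- Server-side breakdown: trace this session, run the query, then feed the
    -- resulting trace file to tkprof, which separates parse, execute and fetch times
    EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
    SELECT 1 FROM dual;
    EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;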

3 Answers


I am working on an implementation of SQL in C#, serving simple web pages, and a typical transaction (which generates a page of HTML) runs in 0.1 ms to 0.5 ms.

So it should be possible. Of course it helps if the SQL is pre-compiled as a stored procedure, and it depends on what you are doing.
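As an illustration only (the exact syntax depends on the engine; this sketch uses T-SQL, and the table and columns are made up), a point lookup wrapped in a stored procedure so the plan is compiled once and reused:

    -- T-SQL sketch: pre-compiled point lookup; dbo.Customer and its columns are hypothetical
    CREATE PROCEDURE dbo.GetCustomerName
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT Name
        FROM   dbo.Customer
        WHERE  CustomerId = @CustomerId;   -- ideally satisfied by the primary key index
    END;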

Another benchmark: it can summarise 5 million rows from a base table in well under one second. Microsoft SQL Server (running on the same machine, my personal laptop) is about half as fast, and dramatically slower when adding rows in a T-SQL loop (where it can be 20 times slower). That's about all the performance comparison I have done so far (and it may not be "fair").
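To make the "adding rows in a loop" comparison concrete, the pattern being measured is roughly this (a T-SQL sketch with a hypothetical table; actual numbers will vary):

    -- Hypothetical target table
    CREATE TABLE dbo.Numbers (n INT NOT NULL);

    -- Row-by-row: one INSERT per iteration; this is the pattern that can be ~20x slower
    DECLARE @i INT = 1;
    WHILE @i <= 100000
    BEGIN
        INSERT INTO dbo.Numbers (n) VALUES (@i);
        SET @i = @i + 1;
    END;

    -- Set-based alternative, usually far faster on SQL Server than any loop
    INSERT INTO dbo.Numbers (n)
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;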


Is a transaction time of <10ms for an SQL database viable? If so, under what conditions?

This is hardware specific, operating system specific, RDBMS specific, and SQL query specific.

Do you have a computer with 128 GB of RAM, a dozen cores, and terabytes of SSD, e.g. an AMD Ryzen Threadripper 3990X based motherboard with 512 GB of RAM? Do you use SQLite? On such a computer (running Linux and PostgreSQL) with a 10 GB database and no rotating hard disk, you might achieve such a transaction time most of the time.

Is the database server on the same computer as the database client? Are they even on the same continent? You could set up a PostgreSQL or MariaDB server on a Linux VPS from OVH in France and have the client application run in California (USA) on a Microsoft Windows machine connected to the Internet.

If your computer is on the Moon (e.g. some autonomous robot there) and the database server is on Earth, the electromagnetic signal from Moon to Earth alone takes over a second, far more than 10 milliseconds.

If both the database server and the client application run on the same hardware in the same room, without any rotating hard disk (so SSDs at least), you can usually achieve a 10 millisecond delay.
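If you want to sanity-check the "same machine, SSD" case yourself, psql can time each statement on the client, and EXPLAIN ANALYZE reports the server-side planning and execution time (a sketch assuming a local PostgreSQL instance; the table and column are placeholders):

    -- In psql: print client-side elapsed time after every statement
    \timing on

    -- Server-side planning and execution time, independent of client fetch/display
    -- (some_table and its id column are placeholders)
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM some_table WHERE id = 42;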

Do you need a formal methods based proof that your transaction will always last less than 10 milliseconds?

You might be interested in Frama-C, but you could spend half a million € (or US$) to get a formal proof related to WCET. If you run an Oracle RDBMS, you might need to sign an NDA with Oracle. See also this draft report.

See also this answer.

I have a customer requirement for a transaction time of <10ms

Do you also have a good lawyer checking that requirement and your contract?

My recommendation is to add a clause such as "in 90% of the cases, the SQL query SELECT passwd_encr FROM tb_password WHERE passw_userid = (SELECT user_id WHERE user_email = $1) ORDER BY passw_mtime DESC LIMIT 1; will be served in less than 10 milliseconds", and to mention your database schema and the hardware running the application in your contract. BTW, that particular query is taken from this example.
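If you go that route, you would also want the schema to make such a clause achievable. A sketch in PostgreSQL syntax, reusing the table and column names from the quoted query (which may not match your real schema), of an index that supports it, plus the measurement you could run:

    -- Composite index so the newest password row per user is a single index probe
    -- (names taken from the quoted query; adjust to the real schema)
    CREATE INDEX idx_password_user_mtime
        ON tb_password (passw_userid, passw_mtime DESC);

    -- Check planning + execution time on representative data; $1 is replaced with a
    -- literal, and the subquery's missing FROM is filled with a guessed tb_user table
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT passwd_encr
    FROM   tb_password
    WHERE  passw_userid = (SELECT user_id FROM tb_user WHERE user_email = 'someone@example.com')
    ORDER  BY passw_mtime DESC
    LIMIT  1;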

Notice that a cheap Raspberry Pi can run an SQL application. Most Android mobile phone applications use SQLite.

PS. GNU/Linux, PostgreSQL, and SQLite are open source, so in principle you could even use a TOP500 supercomputer if that is what it takes to achieve your performance target.


Yes, absolutely it should be possible. From the SQL arriving at the server to a result set leaving it, an elapsed time of 10ms should definitely be attainable for a wide swathe of OLTP-type queries or stored procedure executions, even with cold caches. The equivalent of your trivial query on this laptop's SQL Server 2019 installation reports 0ms elapsed, i.e. less than the clock resolution. Reading a row from an on-disk (well, SSD) table is <5ms from cold.
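For anyone who wants to reproduce that kind of measurement on SQL Server, a minimal sketch (the table name is a placeholder):

    -- Report compile and execution times, and logical/physical reads, per statement
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT TOP (1) * FROM dbo.SomeTable;   -- dbo.SomeTable is a placeholder

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;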

Of course there's a whole lot of "it depends" to be applied, as you are well aware judging by your various comments. For a real query, the speed at which IO can be processed will likely be the limiting factor. Then it will be the amount of RAM available to cache data and system objects (plans etc.). Next will be the quality of the optimizer: what simplifications it can achieve, how good its cost model is, and how accurate the statistics on the tables are. Core count is likely not significant for single-query execution times, as the cost of orchestrating parallel execution of simple queries outweighs any benefit; having many cores increases system concurrency and throughput instead (there is a sketch of how to verify this after this paragraph). This is for OLTP-style queries with three to four joins returning a handful of rows.
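On the core-count point, you can check the effect on a simple query directly by pinning the degree of parallelism and comparing elapsed times (a T-SQL sketch; the tables and columns are placeholders):

    -- Force a serial plan for one statement, then drop the hint and compare
    -- elapsed times; dbo.Orders and dbo.Customer are placeholder tables
    SELECT o.OrderId, c.Name
    FROM   dbo.Orders AS o
    JOIN   dbo.Customer AS c ON c.CustomerId = o.CustomerId
    WHERE  o.OrderId = 12345
    OPTION (MAXDOP 1);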

For large OLAP queries reading gigabytes from tens of tables, the answer is "no". That said, optimized engines such as ClickHouse are getting scarily good at it.