4

I've been assigned to a code base that is responsible for millions of dollars of transactions per quarter and has been in use for over a decade. Sifting through the solution, I see doubles used everywhere to represent money, with arithmetic done directly on these variables; only on rare occasions is a decimal type used.

What is an apt approach to understand the extent of possible damage done by rounding errors due to using an inappropriate type for currency?

8protons
  • 1,379

5 Answers

9

Using single-precision float for money is fatal. Float has only 24 bits of precision, so if you are using dollars, anything above 2^18 dollars has a resolution worse than one cent (adjacent float values there are already more than three cents apart). So anything above $262,000 or so has big problems.
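For illustration, here is a minimal C# sketch (C# because the thread mentions it; the balance is a made-up figure) showing a one-cent deposit disappearing at that magnitude:

    using System;

    // Around 2^18 dollars, adjacent float values are 1/32 dollar apart,
    // so adding a single cent rounds back to the original value.
    float balance = 262_144.00f;           // about 2^18 dollars
    float afterDeposit = balance + 0.01f;  // deposit one cent

    Console.WriteLine(afterDeposit == balance);  // True: the cent vanished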

Using double precision is much, much less of a problem. If you have an amount of one trillion dollars in double precision, the lowest bit in that number has a value of 2^-13 dollars, which is about 1/82nd of a cent; the rounding error for numbers in that range is half that. For an amount of a billion dollars, the errors are about 1,000 times smaller.
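To put a number on that spacing (a small C# sketch; Math.BitIncrement needs .NET Core 3.0 or later, and the amount is just an example):

    using System;

    double trillion = 1_000_000_000_000.0;
    // Distance to the next representable double at this magnitude:
    double ulp = Math.BitIncrement(trillion) - trillion;
    Console.WriteLine(ulp);  // 0.0001220703125 dollars, roughly 1/82 of a cent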

If you just look at the rounding errors themselves, they will be negligible. Problems arise if you have calculations that must be done according to specific rules and must end up with exactly the right amount, and there are (costly) legal consequences when they don't.

If results have been rounded, say to cents, examine carefully how that rounding has been done. It could be biased, which would have produced systematic errors.
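As an illustration of how the rounding rule matters, here is a small C# sketch (not from the original answer; the values are chosen only to show the midpoint behaviour):

    using System;

    // decimal holds 2.665 exactly, so the two midpoint rules give different cents:
    Console.WriteLine(Math.Round(2.665m, 2));                                // 2.66 (banker's rounding, the default)
    Console.WriteLine(Math.Round(2.665m, 2, MidpointRounding.AwayFromZero)); // 2.67

    // A double only approximates 2.675 (as 2.67499999999999982...),
    // so it rounds down under either rule:
    Console.WriteLine(Math.Round(2.675, 2));                                 // 2.67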

gnasher729
  • 49,096
3

You're right to be concerned about the use of floats for monetary amounts. Unless they are only used to calculate something that is then properly represented as a rounded, scaled decimal (or an integral number of cents, which is equivalent), they shouldn't be used to represent money.

However, the damage that could possibly be caused depends very much on exactly how these values are used, so you will indeed need to find out where these numbers appear in transactions and reports, and where discrepancies in the sums can happen.
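One cheap way to get a feel for those discrepancies is to redo a sum both ways and compare; a minimal C# sketch with made-up line items:

    using System;

    double doubleTotal = 0.0;
    decimal decimalTotal = 0.0m;

    // One million ten-cent line items:
    for (int i = 0; i < 1_000_000; i++)
    {
        doubleTotal += 0.10;
        decimalTotal += 0.10m;
    }

    Console.WriteLine(decimalTotal);                        // 100000.00
    Console.WriteLine(doubleTotal);                         // something like 100000.00000133288
    Console.WriteLine(doubleTotal - (double)decimalTotal);  // the accumulated drift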

Unless round-off errors have been systematically siphoned off into some malevolent developer's private bank account (which, if I remember correctly, has actually happened somewhere in the past), the direct monetary damage to the company or its customers is likely within reasonable limits, but you can't be certain before you've done your analysis. The effort needed to fix the problem may be several orders of magnitude higher than the damage, but may still be unavoidable.

2

Regarding damage as an amount lost by your company: it will be random and very close to zero. The more operations you have, the closer to zero the total delta will approach. In rare cases (with double) you will lose a cent, and in others you will gain one. Unless some biased rounding method is in use, the subtotal will approach zero. If the app uses double, the chance of any error is very, very low.

I made that mistake: I built a financial accounting application that uses double all the way down to the database. When I understood my mistake it was too late. Altering the database columns from double to a decimal type in production can do more damage than the invalid type itself, not to mention the complications with SQLite.

Around 5 years ago I added tests to the database integrity health routine that redo the calculations in decimal for some of the most used entities. As of now, not a single cent of discrepancy has been detected, across several hundred users, mostly small companies.
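A sketch of that kind of health check in C# (the entity and its fields are hypothetical; real schemas will differ, and this assumes C# 9 or later for records and top-level statements):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Recompute one made-up invoice in decimal and compare with the total
    // that the legacy double arithmetic stored.
    var lines = new List<InvoiceLine> { new(12.10), new(0.99), new(7.35) };
    double storedTotal = 20.44;

    decimal recomputed = lines.Sum(l => Math.Round((decimal)l.Amount, 2));
    Console.WriteLine(recomputed);                                        // 20.44
    Console.WriteLine(recomputed == Math.Round((decimal)storedTotal, 2)); // True; flag the record when this is False

    // Hypothetical shape of the stored data.
    record InvoiceLine(double Amount);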

1

Floating point numbers are simply not suitable for storing exact but arbitrary numeric values; every floating point number is just an approximation of a numeric value. Sure, some numbers can be represented exactly as a floating point value, but many cannot, and then you will have an error that can vary a lot, so it's better to always treat them as approximations. For example, even a value as simple as 0.2 cannot be exactly represented as a double. Calculating 0.2 + 0.1 in C# (assuming both are double values) results in 0.30000000000000004, whereas 0.3 would have been correct.
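The same thing as a runnable C# snippet (the printed digits assume .NET Core 3.0 or later, where double.ToString uses round-trippable formatting):

    using System;

    double a = 0.1, b = 0.2;
    Console.WriteLine(a + b);         // 0.30000000000000004
    Console.WriteLine(a + b == 0.3);  // False

    decimal c = 0.1m, d = 0.2m;
    Console.WriteLine(c + d);         // 0.3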

Money can actually easily be represented as integers; just store it as a number of cents. A 64-bit integer is more than suitable for this task. During calculations you can convert it to a higher-precision type first (such as decimal), perform the operation, and convert it back to cents at the end using appropriate rounding, but for many tasks you don't have to (addition, subtraction, and multiplication by a whole number are fine with integer values).
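A small C# sketch of that approach (the price and the 8.25% tax rate are made up):

    using System;

    // Money as an integral number of cents in a 64-bit integer.
    long priceInCents = 19_99;                 // $19.99
    long quantity = 3;
    long subtotal = priceInCents * quantity;   // exact: 5997 cents

    // Step up to decimal only when a fractional factor is involved,
    // then round back to whole cents explicitly.
    decimal taxRate = 0.0825m;
    long tax = (long)Math.Round(subtotal * taxRate, 0, MidpointRounding.AwayFromZero);  // 495 cents

    Console.WriteLine($"Total: {(subtotal + tax) / 100m:F2}");  // Total: 64.92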

The problem with determining the error is that errors accumulate. If you calculate interest once, the error will be too small to be significant, but if you calculate compound interest on a huge amount of money and then use the result in further calculations, the error can become significant. So looking at the error of a single operation may not be useful, because that operation may be just one step in a chain of operations that produces a single result in the end.
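So one way to gauge the damage is to run the whole chain side by side in double and in decimal and compare only the end results; a minimal C# sketch, where the 3.7% rate, the deposit, and the 30-year span are made-up figures:

    using System;

    double balanceD = 1_000_000.00;
    decimal balanceM = 1_000_000.00m;

    // Thirty years of interest plus a yearly deposit, computed both ways.
    for (int year = 0; year < 30; year++)
    {
        balanceD = balanceD * 1.037 + 1_200.00;
        balanceM = balanceM * 1.037m + 1_200.00m;
    }

    Console.WriteLine(Math.Round((decimal)balanceD, 2));
    Console.WriteLine(Math.Round(balanceM, 2));
    Console.WriteLine(balanceM - (decimal)balanceD);  // the end-to-end discrepancy, not a per-step error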

Errors can also accumulate over the years. If you store an account balance as a double, always round the value to two decimal places for display, but internally keep the full precision, you may get correct results for years, and then after 30 years you are suddenly off by one cent; when that happens depends not only on the balance of the account but also on the number of transactions over the years. Sure, most people will think "one cent is nowhere near significant". Well, if you are a financial institution with 100 million accounts and each is off by just one cent, your total balance is off by a million dollars, and I would call that significant.

I'm not sure exactly what your code is calculating, or how many operations per day it performs without interim rounding, so I cannot really give a better answer here. For example, if the money is really just transferred (subtracted from account A and added to account B), using double will not easily cause significant issues. You usually need more complex operations, involving multiplications, exponentiation, or square roots, to accumulate significant errors quickly.

Mecki
  • 2,390
-3

My gut feeling is that double precision should be quite good for most practical cases. I won't risk putting numbers on it at the moment, but I think there is a very good chance the errors are negligibly small for any decision making.

What can be an issue is a discrepancy of a few cents (or whatever your currency unit is) in audited results, such as accounting reports. There, even small errors may get the company into trouble.

max630
  • 2,605