
I can find many questions about libraries for representing amounts of some currency, and about the age-old issue of why you shouldn't store currency as an IEEE 754 floating point number. But I can't seem to find anything more. Surely there's a lot more to know about currency in real-world usage. I'm particularly interested in what you'd need to know to represent it in physical usage (e.g., with the dollar, you never have precision of less than $0.01, allowing representation as an integer number of cents).

But it's hard to make assumptions about how versatile your program needs to be when the only currencies you know are the popular Western ones (such as the dollar, euro, and pound). What else is relevant knowledge from a purely programmatic perspective? I'm not concerned with the topic of conversion.

Particularly, what do we need to know to be able to store values in some currency and print them out?

Kat

3 Answers


eg, with the dollar, you never have precision of less than $0.01

Oh really?

[Image: a gas station price sign — fuel is priced in tenths of a cent per gallon.]

the age old issue of why you shouldn't store currency as an IEEE 754 floating point number.


Please feel free to store inches in IEEE 754 floating point numbers. They store precisely how you'd expect.

Please feel free to store any amount of money in IEEE 754 floating point numbers that you can store using the ticks that divide a ruler into fractions of an inch.

Why? Because when you use IEEE 754 that's how you're storing it.

The thing about inches is they're divided in halves. The thing about most kinds of currency is they're divided in tenths (some kinds aren't but let's stay focused).

This difference wouldn't be all that confusing except that, for most programming languages, input into and output from IEEE 754 floating point numbers is expressed in decimals! Which is very strange because they aren't stored in decimals.

Because of this you never get to see how the bits do weird things when you ask the computer to store 0.1. You only see the weirdness when you do math against it and it has strange errors.

From Josh Bloch's Effective Java:

System.out.println(1.03 - .42);

Produces 0.6100000000000001

What's most telling about this isn't the 1 sitting way over there on the right. It's the weird numbers that had to be used to get it. Rather than use the most popular example, 0.1, we have to use an example that shows the problem and avoids the rounding that would hide it.

For example, why does this work?

System.out.println(.01 - .02);

Produces -0.01

Because we got lucky.

I hate problems that are hard to diagnose because I sometimes get "lucky".

IEEE 754 simply can't store 0.1 precisely. But if you ask it to store 0.1 and then ask it to print, it will show 0.1, and you'll think everything is fine. It's not fine, but you can't see that, because it's rounding to get back to 0.1.
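You can see past the rounding in Java: constructing a BigDecimal from a double converts the exact stored bits to decimal, with nothing hiding the mismatch.

```java
import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        // Printing a double rounds to the shortest decimal string that
        // maps back to the same bits, so the imprecision is hidden.
        System.out.println(0.1);
        // Constructing a BigDecimal from the double converts the exact
        // stored value, exposing what IEEE 754 actually holds.
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```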

Some people confuse the heck out of others by calling these discrepancies rounding errors. No, these aren't rounding errors. The rounding is doing what it's supposed to and turning what isn't a decimal into a decimal so it can print on the screen.

But this hides the mismatch between how the number is displayed and how it is stored. The error didn't happen when the rounding happened. It happened when you decided to put a number into a system that can't store it precisely and assumed it was being stored precisely when it wasn't.

No one expects π to store precisely in a calculator and they manage to work with it just fine. So the problem isn't even about precision. It's about expected precision. Computers display one tenth as 0.1 the same as our calculators do, so we expect them to store one tenth perfectly the way our calculators do. They don't. Which is surprising, since computers are more expensive.

Let me show you the mismatch:

[Image: binary fractions (1/2, 1/4, 1/8, ...) lined up against decimal fractions; 0.5 lines up exactly with 1/2, but 0.1 never lines up with any finite sum of binary fractions.]

Notice that 1/2 and 0.5 line up perfectly. But 0.1 just doesn't line up. Sure you can get closer if you keep dividing by 2 but you'll never hit it exactly. And we need more and more bits every time we divide by 2. So representing 0.1 with any system that divides by 2 needs an infinite number of bits. My hard drive just isn't that big.

So IEEE 754 stops trying when it runs out of bits. Which is nice because I need room on my hard drive for ... family photos. No really. Family photos. :P

Anyway, what you type and what you see are the decimals (on the right) but what you store is bicimals (on the left). Sometimes those are perfectly the same. Sometimes they're not. Sometimes it LOOKS like they're the same when they simply aren't. That's the rounding.
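A quick Java sketch of the bicimal leaking through: add one tenth to itself ten times, and the rounding can no longer cover for the mismatch.

```java
public class TenTenths {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;  // each addition accumulates a little bicimal error
        }
        System.out.println(sum == 1.0);  // false
        System.out.println(sum);         // 0.9999999999999999
    }
}
```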

Particularly, what do we need to know to be able to store values in some currency and print it out?

Please, if you're handling my decimal based money, don't use floats or doubles.

If you're sure things like tenths of pennies won't be involved then just store pennies. If you're not then figure out what the smallest unit of this currency is going to be and use that. If you can't, use something like BigDecimal.
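A minimal sketch of the "just store pennies" approach in Java, assuming non-negative US dollar amounts; the names here are illustrative, not from any particular library.

```java
public class Cents {
    // Format a non-negative count of cents as dollars for display.
    static String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }

    public static void main(String[] args) {
        long price = 103;    // $1.03 stored as an exact integer
        long discount = 42;  // $0.42
        // Integer subtraction is exact: no 0.6100000000000001 surprises.
        System.out.println(format(price - discount)); // $0.61
    }
}
```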

My net worth will probably always fit in a 64 bit integer just fine but things like BigInteger work well for projects bigger than that. They're just slower than native types.

Figuring out how to store it is only half the problem. Remember you also have to be able to display it. A good design will separate these two things. The real problem with using floats here is those two things are mushed together.


candied_orange

I can find many questions about libraries to use for representing amounts in some currency.

"Libraries" are unnecessary unless your language's standard library is lacking in certain data types, as I will explain.

Surely there's a lot more to know about currency in real world usage. I'm particularly interested in what you'd need to know to represent it in physical usage (eg, with the dollar, you never have precision of less than $0.01, allowing representation as an integer number of cents).

Quite simply, you need fixed-point decimal, not binary floating point. For example, Java's BigDecimal class can store a currency amount. Other modern languages, including C# and Python, have similar decimal types. Implementations vary, but they typically store the number as an integer, with the decimal point's location as a separate data member. This gives exact precision, even for arithmetic that would leave odd remainders (e.g. 0.0000001) with IEEE 754 floating point numbers.
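For example, the failing subtraction from the other answer works exactly with BigDecimal, as long as you construct the values from strings rather than doubles:

```java
import java.math.BigDecimal;

public class ExactMath {
    public static void main(String[] args) {
        // Construct from strings: new BigDecimal(1.03) would inherit the
        // double's binary representation error.
        BigDecimal a = new BigDecimal("1.03");
        BigDecimal b = new BigDecimal("0.42");
        System.out.println(a.subtract(b)); // 0.61, not 0.6100000000000001
    }
}
```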

Particularly, what do we need to know to be able to store values in some currency and print it out?

There are a few important points.

  1. Use an actual decimal type rather than floating point.

  2. Understand that a currency amount has two components: a value (5.63) and a currency code or type (USD, CAD, GBP, EUR, et al). Sometimes you can ignore the currency code, other times it is vital. What if you are working on a financial or retail/e-commerce system that allows multiple currencies? What happens if you are trying to take money from a customer in CAD, but they want to pay with MXN? You need a "money" type with a currency code and currency amount to be able to mix these values (also exchange rate, but I do not want to get too far on a tangent). At the same time, my personal finance software never needs to worry about this because everything is in USD (it can mix currencies, but I never need to).

  3. While a currency may have a smallest physical unit in practice (CAD and USD have cents; JPY is just... yen), it is possible to get smaller. CandiedOrange's answer points out fuel prices in tenths of a cent. My property taxes are assessed in mills per dollar, where a mill is a tenth of a cent (1/1000 of a US dollar). Do not limit yourself to $0.01. While you may display values rounded to the cent most of the time, your types should allow smaller units (the decimal types referenced above do).

  4. Intermediate calculations certainly must allow more precision than a single cent. I have worked on retail/e-commerce systems where internal values were rounded to $0.00000001 internally. Infinite precision is not typically supported by decimal types (or SQL) so there has to be some limit. For example, dividing 1/3 using Java's BigDecimal will throw an exception without a RoundingMode or MathContext specified because the value cannot be represented exactly.

    Anyway, this is critical in certain cases. Let us assume you have a user with six items in his shopping cart, and he goes to check out. Your system has to calculate tax, and does so per item because items may be taxed differently. If you round the taxes at each item, you may get penny rounding errors at the transaction/cart level. One approach to fix this might be to store taxes to more decimal places per item, get the total for the whole transaction, and go back and round each item so the total tax is correct (maybe one item rounds up, another down).
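The points above might be sketched as a hypothetical "money" type in Java; the class and its methods are illustrative, not a standard API.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Currency;

// Hypothetical money type: an exact decimal amount paired with a currency.
final class Money {
    final BigDecimal amount;   // may carry more precision than one cent
    final Currency currency;

    Money(String amount, String currencyCode) {
        this.amount = new BigDecimal(amount);
        this.currency = Currency.getInstance(currencyCode);
    }

    // Refuse to mix currencies without an explicit conversion step.
    Money plus(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException(
                currency + " + " + other.currency + " needs an exchange rate");
        }
        return new Money(amount.add(other.amount).toPlainString(),
                         currency.getCurrencyCode());
    }

    // Round only at display time, to the currency's usual scale.
    String display() {
        int scale = currency.getDefaultFractionDigits(); // 2 for USD, 0 for JPY
        return amount.setScale(scale, RoundingMode.HALF_UP).toPlainString()
               + " " + currency.getCurrencyCode();
    }
}

public class MoneyDemo {
    public static void main(String[] args) {
        Money tax = new Money("0.12345", "USD");  // intermediate precision kept
        Money more = new Money("0.12655", "USD");
        System.out.println(tax.plus(more).display()); // 0.25 USD
    }
}
```

Adding USD to CAD throws instead of silently producing a meaningless number, which is the whole point of carrying the currency code alongside the amount.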

The important thing to realize from all of this is that something as important as rounding pennies can be very important to the right people (e.g. some of my past customers who had to pay the government sales taxes on their customers' behalf). However, these are all solved problems. Keep the above points in mind and do some experimentation on your own, and you will learn by doing.


One place where lots of developers are confronted with a single data representation for any currency is for in-app purchases for iOS applications. Your customer can be connected to a store in almost any country in the world. And in that situation, you will be given a purchase price consisting of a double precision number, and a currency code.

You need to be aware that the numbers can be big. There are currencies where the equivalent of, say, ten dollars is more than 100,000 units. And we are lucky that there are no currencies like the old Zimbabwean dollar around at the moment, where you could have a hundred-trillion-dollar banknote!

For displaying currencies, you will need some library - you have no chance to get it all right yourself. The display depends on two things: the currency code, and the user's locale. Think how US dollars and Canadian dollars would be displayed with a US locale and a Canadian locale: in the USA, you have $ vs CAN$, and in Canada you have US$ vs. $. Hopefully that's built into the OS, or you have a good library.
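Java's standard library happens to cover this case: NumberFormat combines a currency with a display locale. The exact symbols printed depend on the JDK's locale data, so treat the outputs below as illustrative.

```java
import java.text.NumberFormat;
import java.util.Currency;
import java.util.Locale;

public class LocaleDisplay {
    public static void main(String[] args) {
        // Canadian dollars shown to a US user...
        NumberFormat forUs = NumberFormat.getCurrencyInstance(Locale.US);
        forUs.setCurrency(Currency.getInstance("CAD"));
        System.out.println(forUs.format(1234.5)); // e.g. CA$1,234.50

        // ...and US dollars shown to a Canadian user.
        NumberFormat forCa = NumberFormat.getCurrencyInstance(Locale.CANADA);
        forCa.setCurrency(Currency.getInstance("USD"));
        System.out.println(forCa.format(1234.5)); // e.g. US$1,234.50
    }
}
```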

Any calculation will end with a rounding step, and you will have to find out how you must perform that rounding legally. That's not a programming problem; it's a legal problem. For example, if you calculate VAT in the UK, you have to calculate tax per item or per item line, and round it down to pennies. What you round to depends on the currency, but the rules themselves depend on the country: you can't expect a calculation that is legally correct in the UK to be legally correct in Japan, and vice versa.
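A sketch of rounding as an explicit, mandated final step, assuming for illustration a 20% rate and the round-down-to-pennies-per-line rule described above; the rate and amounts are made up.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class VatLine {
    public static void main(String[] args) {
        BigDecimal lineNet = new BigDecimal("12.99");
        BigDecimal rate = new BigDecimal("0.20"); // illustrative VAT rate

        BigDecimal exact = lineNet.multiply(rate);             // 2.5980
        BigDecimal vat = exact.setScale(2, RoundingMode.DOWN); // rule: round down to pennies

        System.out.println(exact); // 2.5980
        System.out.println(vat);   // 2.59
    }
}
```

Note that the rounding mode is a deliberate input to the calculation, not something left to a default.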

gnasher729