3

In programming languages, to represent numbers you mainly have two types: integers (ints) and floats.

The use of ints is very easy. Floats, however, cause a lot of unexpected surprises:

http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/float.html
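To see the kind of surprise the link describes, in Python:

```python
# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly 0.3
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```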

I wonder: wouldn't a decimal-based type make more sense as the default for literals and normal math?

That is, wouldn't everyday arithmetic be more precise that way?

mamcx
  • 432

3 Answers

6

Languages which use a single representation of numbers are relatively rare. Among statically typed languages I can think only of some BASIC variants, and even in the dynamically typed realm I know of more languages that use at least integers and floats than ones using a unique format. Those that do came relatively late to the game, and their choice of binary floating point is probably the result of the fact that binary floating point was already available in hardware and as a type in the implementation language of their interpreter.

Now, why has binary floating point become the method of choice for types modelling real numbers?

If you want a type to model real numbers while using a constant size of a few words, you have the choice between fixed-point and floating-point numbers. Other kinds of representation for rational numbers exist (fixed point and floating point are able to represent only rational numbers and a few other values, like infinities -- which aren't real numbers, by the way), but they are less widely useful.

Fixed-point representations have some advantages, but they have one big issue: there is no good default for the number of digits after the point, so all languages and libraries which provide them ask the programmer to choose it in the context of their application.
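A minimal fixed-point sketch in Python (the helper names and the choice of two digits after the point are my own illustration, not from any particular library; the point is that the scale must be chosen up front):

```python
SCALE = 100  # 2 digits after the point -- a choice forced on the programmer

def to_fixed(x: str) -> int:
    # Parse a non-negative decimal string into a scaled integer.
    whole, _, frac = x.partition('.')
    frac = (frac + '00')[:2]
    return int(whole) * SCALE + int(frac)

def fixed_to_str(n: int) -> str:
    return f"{n // SCALE}.{n % SCALE:02d}"

price = to_fixed('19.99')   # stored as the integer 1999
total = price * 3           # exact integer arithmetic, no rounding
print(fixed_to_str(total))  # 59.97
```

With a different application (say, interest rates needing 6 fractional digits) that hard-coded scale would be wrong, which is exactly the "no good default" problem.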

Floating-point representations don't have that problem: once the size is fixed, you can come up with a split between the significand (often improperly called the mantissa) and the exponent that is good enough that allowing the programmer to customize it would bring more problems than it would solve.
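For the curious, the fixed split of a 64-bit IEEE 754 double (1 sign bit, 11 exponent bits, 52 significand bits) can be inspected from Python:

```python
import struct

# Reinterpret the bytes of a double as a 64-bit unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]

sign        = bits >> 63               # 1 bit
exponent    = (bits >> 52) & 0x7FF     # 11 bits, biased by 1023
significand = bits & ((1 << 52) - 1)   # 52 bits

# 0.1 is stored as roughly 1.6 * 2**-4
print(exponent - 1023)  # -4
```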

You may wonder why a binary floating point format (BFP in the rest of this discussion) was chosen instead of a decimal one. Decimal floating point (DFP) has one (and only one) advantage over binary: it can exactly represent real constants as we usually write them, in decimal. All the other problems of BFP are also problems of DFP, but for simple enough computations they may be hidden by the advantage above (you get the exact result because the computation is so simple that all results -- final and intermediate -- are exactly representable).
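Python's stdlib decimal module (a DFP implementation) shows both sides of this:

```python
from decimal import Decimal

# DFP advantage: constants written in decimal are stored exactly
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
print(0.1 + 0.2 == 0.3)                                   # False

# But the other problems remain: 1/3 is no more representable
# in decimal than in binary, so rounding still happens
third = Decimal(1) / Decimal(3)
print(third * 3 == Decimal(1))  # False
```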

DFP has some disadvantages of its own. Some properties of real numbers carry over to BFP but not to DFP (the most obvious is that (a+b)/2 may fall outside the closed interval [a, b] with DFP), and because of that and for some other reasons, analyzing the numerical properties of algorithms is more difficult for DFP than for BFP. Worse, the error bounds achievable with DFP are worse than those achievable with BFP for a given representation size. Implementing DFP is more difficult than implementing BFP (not so much that it matters nowadays, but it was a factor -- both in complexity and in achievable performance -- when the trend was set). Finally, for scientific computing -- historically the major consumer of FP -- the advantage of DFP is not pertinent, while all its disadvantages are.
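The (a+b)/2 anomaly is easy to reproduce with Python's decimal module at a reduced precision (3 significant digits here, purely to make the effect visible):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3  # tiny precision, for illustration

a = Decimal('6.01')
b = Decimal('6.03')
mid = (a + b) / 2   # a + b rounds to 12.0, so mid comes out as 6.00
print(mid)          # 6.00
print(mid < a)      # True: the "midpoint" falls below [a, b]
```

In binary, by contrast, dividing by 2 is exact, so with round-to-nearest (a+b)/2 stays within [a, b] barring overflow.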

So you got BFP in hardware because it was what the people who needed FP wanted, and although some language definitions allow for DFP, implementations rarely use that possibility and prefer what the hardware provides.

AProgrammer
  • 10,532
  • 1
  • 32
  • 48
2

Languages use binary floating-point numbers because that's what the hardware usually provides. Very few CPUs have instructions for handling decimal floating-point numbers, and the adoption of IEEE 754 appears to have fixed that decision for the future.

1

Floating-point numbers are a tradeoff between range and precision. They are notable for their ability to handle very wide numeric ranges with reasonable accuracy, and are therefore considered a "one size fits all" type.
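Python's sys.float_info shows the numbers behind that tradeoff for a 64-bit double:

```python
import sys

# An enormous range...
print(sys.float_info.max)  # about 1.8e308
print(sys.float_info.min)  # about 2.2e-308 (smallest normal value)

# ...at the cost of a fixed budget of significant decimal digits
print(sys.float_info.dig)  # 15
```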

If you take a floating-point number and constrain its range to those values that can be represented by the significand alone, you can treat it like an integer for accuracy purposes (in other words, you won't get the weirdness you would normally get from floats; you can compare such values exactly in an if statement, for example).
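A quick Python check of that integer-exact window, which for a 64-bit double extends to 2**53:

```python
# Every integer with magnitude up to 2**53 has an exact double representation
n = 2 ** 53
print(float(n) == n)          # True
print(float(n - 1) == n - 1)  # True

# Beyond the window, adding 1 is silently absorbed
print(float(n) + 1 == float(n))  # True

# Within the window, floats compare exactly, just like ints
print(3.0 + 4.0 == 7.0)  # True
```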

This is why Lua chose floating point as its default numeric type; in a sense, you get the best of both worlds.

See Also
How does Lua handle both integer and float numbers?

Robert Harvey
  • 200,592