2

Floating-point units are standard on CPUs today, and even ordinary desktop workloads exercise them (3D effects, for instance). However, I wonder which applications initially drove the development and mass adoption of floating-point units.

Ten years ago, I think most uses of floating-point arithmetic fell into one of two categories:

  1. Engineering and science applications
  2. 3D graphics in computer games

I think that for any other application where fractional values appeared at the time, fixed-point arithmetic was sufficient (2D graphics) or even preferable (finance), so plain integer hardware would have been enough; see the sketch below.
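
To make the "integers would have been sufficient" point concrete, here is a minimal sketch (my own illustration, not from the original question) of fixed-point money arithmetic using scaled integers; the 8% tax rate and the variable names are invented for the example:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Amounts are stored as integer cents, so $19.99 is 1999. */
        /* Addition and comparison are exact; no binary floating-  */
        /* point rounding surprises.                                */
        int64_t price_cents = 1999;
        int64_t tax_cents   = price_cents * 8 / 100;  /* 8% tax, truncated */
        int64_t total_cents = price_cents + tax_cents;

        printf("total: $%lld.%02lld\n",
               (long long)(total_cents / 100),
               (long long)(total_cents % 100));
        return 0;
    }

This prints total: $21.58; real financial code would also pick an explicit rounding rule rather than truncating, but the point stands that no FPU is involved.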

I think these two applications were the major motivations for establishing floating-point arithmetic in hardware as a standard. Can you name others, or is there a compelling reason to disagree?

shuhalo
  • 231

4 Answers

7

I think the real driver was silicon process technology. Once circuits shrank enough that there was room on the die for a floating-point unit, one was incorporated. The same goes for MMUs and memory controllers. Engineers abhor empty die space.

TMN
  • 11,383
3

I can go back even farther than 1990. Oil and gas exploration companies used whatever minicomputers existed in the 1960s, plus the IBM 360 when it came out, to perform geophysical calculations.

The Electronic Numerical Integrator And Computer (ENIAC), developed in 1946, was used mainly to calculate artillery firing tables.

In 1822, Charles Babbage designed a difference engine. A difference engine is an automatic, mechanical calculator designed to tabulate polynomial functions. The London Science Museum constructed a working difference engine from 1989 to 1991.

The need for floating point calculations has been with us since the dawn of computing.

Gilbert Le Blanc
  • 2,839
  • 21
  • 18
3

I worked on PCs at a time when a floating-point co-processor was an optional extra. You had to pay a significant premium to have an 80x87 chip added to an 80x86 system, and few programs took advantage of it.

One exception was the first real killer app for the IBM PC, the ubiquitous spreadsheet program Lotus 1-2-3. It supported floating-point operations in hardware from relatively early on, substantially speeding up certain operations if you had an FPU.

When Intel got to the 80486, they started integrating the floating point unit onto the CPU, but even then they offered the 486SX variant with the FPU present but disabled. This was substantially cheaper than the 486DX chip and many people took that option to keep costs down.

By this point, the incremental cost in silicon terms must have been lower than the additional R&D and tooling costs of creating separate 486SX, 487SX and 486DX chips. In fact, if you bought a 486SX system and later added a 487SX co-processor, you effectively had two complete 486DX CPUs in the machine: the 487SX was a full 486DX that took over and disabled the original 486SX entirely!

By the time the Pentium came around, floating-point units were expected, and its infamous FDIV bug caused quite a storm, not just in the scientific community, but in the business community too.
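
As an aside, the bug was easy to demonstrate with the widely circulated division check. Here is a minimal C sketch of that test, using the classic constants reported at the time (on a flawed Pentium the residual came out around 256 rather than roughly 0):

    #include <stdio.h>

    int main(void)
    {
        /* The classic FDIV test values circulated in 1994. "volatile" */
        /* discourages the compiler from doing the division itself at  */
        /* compile time, so the FPU actually performs it.              */
        volatile double x = 4195835.0;
        volatile double y = 3145727.0;

        /* Near 0 (within rounding) on a correct FPU; about 256 on a */
        /* Pentium with the flawed divider.                           */
        double residual = x - (x / y) * y;

        printf("residual = %g\n", residual);
        return 0;
    }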

Mark Booth
  • 14,352
1

Computers are, as the name says, machines for computing. Scientific applications, starting with table computation, have always been an important use, and they need floating point (or careful manual handling of a scale factor with fixed-point arithmetic).

AFAIK, the first computer with floating point in hardware was the Zuse Z4 in the mid-1940s. The first "common" machine with FP capability was probably the IBM 704 in the mid-1950s.

For the Intel x86 family, the 8087 co-processor was announced in 1980, and until the FPU was integrated with the rest of the processor (which happened in the early 1990s), there was always a co-processor available, even from third parties. At that time, serious scientific applications weren't done on PCs, but spreadsheets were among the programs that benefited from having a math co-processor.

AProgrammer
  • 10,532
  • 1
  • 32
  • 48