Posted 2009-04-04 22:00:00 GMT
A fixed-point representation stores fractional values in integers using a fixed scale factor. When multiplying or dividing, the result has to be rescaled, but addition and subtraction work unchanged. Before FPUs became prevalent it was widely used. It should come back. The superficial ease of floating point is seductive but leads to errors.
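As a minimal sketch of the idea, here is a two-decimal-digit fixed-point scheme in Python. The names (`SCALE`, `fx`, `fx_mul`, and so on) and the choice of scale 100 are my own for illustration, not anything from the post:

```python
SCALE = 100  # assumed scale: two decimal digits stored in an integer

def fx(value_str):
    """Parse a non-negative decimal string like '1.50' into the scaled integer 150."""
    units, _, frac = value_str.partition('.')
    frac = (frac + '00')[:2]  # pad or truncate to exactly two digits
    return int(units) * SCALE + int(frac)

def fx_add(a, b):
    return a + b             # addition: everything stays the same

def fx_mul(a, b):
    return (a * b) // SCALE  # multiplication: the product must be rescaled

def fx_str(a):
    return f"{a // SCALE}.{a % SCALE:02d}"

print(fx_str(fx_add(fx('1.50'), fx('2.25'))))  # 3.75
print(fx_str(fx_mul(fx('1.50'), fx('2.00'))))  # 3.00
```

Note that `fx_mul` divides by `SCALE` because the raw product of two scaled integers carries the scale factor twice.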
Chuck Moore is a hero of NIH syndrome. He not only implemented his own programming language, an editor and compiler, but even the system for designing the chips to run it on. I just came across one of his pages of bugbears. One is floating point computation. If you think hard enough about what your program is doing then you probably don't need floating point and can choose reasonable units (fixed point) or, even better, integer units (the classical example is representing money in integral numbers of pennies).
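The pennies example can be seen directly in Python: 0.10 has no exact binary-float representation, so summing ten of them misses a dollar, while integral pennies are exact:

```python
# Ten 10-cent items summed as binary floats: 0.10 is not exactly
# representable, and the rounding errors accumulate.
prices_float = [0.10] * 10
print(sum(prices_float) == 1.0)   # False

# The same sum in integral pennies is exact.
prices_cents = [10] * 10
print(sum(prices_cents) == 100)   # True
```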
Floating point is difficult because precision can be lost. Operations are therefore not associative. For example, with 32-bit single floats, 2.0^24 + 1.0 - 2.0^24 might end up as 0.0 or 1.0 depending on how it is grouped. You have to be careful when using floating point. If you don't know the rough relative magnitudes of the numbers you are dealing with, then you have an interesting source of bugs.
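This is easy to demonstrate. The post's example uses 32-bit floats, where the cliff sits at 2^24; Python floats are 64-bit doubles, so the same effect (a sketch under that substitution) appears at 2^53:

```python
big = 2.0 ** 53

# Grouped one way, the 1.0 is absorbed: big + 1.0 rounds back to big.
left = (big + 1.0) - big

# Grouped the other way, 1.0 - big is exactly representable,
# so the 1.0 survives the round trip.
right = big + (1.0 - big)

print(left, right)  # 0.0 1.0
```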
If you do know the rough magnitudes, then fixed point could be your friend. Consider it seriously. Integer operations are much faster, more natural, and often make more sense. You need to figure out which scale your data is on anyway if you're using floating point!