Posted 2009-04-04 22:00:00 GMT
A fixed-point representation stores fractions in integers: the stored integer is a count of some fixed unit, such as 1/65536. When multiplying or dividing, the result has to be rescaled, but addition and subtraction work on the raw integers unchanged. Before FPUs became prevalent, fixed point was widely used. It should come back. The superficial ease of floating point is seductive but leads to errors.
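For example, here is a minimal sketch of a Q16.16 format in C (16 integer bits, 16 fraction bits; the helper names are my own, not from any particular library):

    #include <stdint.h>
    #include <stdio.h>

    /* Q16.16 fixed point: the stored integer is the value times 2^16. */
    typedef int32_t q16_16;
    #define Q_ONE (1 << 16)

    /* Addition and subtraction need no rescaling at all. */
    static q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }

    /* A product carries the 2^16 scale twice, so shift back down. */
    static q16_16 q_mul(q16_16 a, q16_16 b) {
        return (q16_16)(((int64_t)a * b) >> 16);
    }

    /* For division, pre-shift the dividend up to preserve the scale. */
    static q16_16 q_div(q16_16 a, q16_16 b) {
        return (q16_16)(((int64_t)a << 16) / b);
    }

    int main(void) {
        q16_16 x = 5 * Q_ONE / 2;  /* 2.5 */
        q16_16 y = 3 * Q_ONE / 2;  /* 1.5 */
        printf("%f\n", q_add(x, y) / (double)Q_ONE);  /* 4.000000 */
        printf("%f\n", q_mul(x, y) / (double)Q_ONE);  /* 3.750000 */
        printf("%f\n", q_div(x, y) / (double)Q_ONE);  /* 1.666656 */
        return 0;
    }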
Chuck Moore is a hero of NIH (Not Invented Here) syndrome. He not only implemented his own programming language, editor and compiler, but even the system for designing the chips to run them on. I just came across his page of bugbears. One of them is floating point computation. If you think hard enough about what your program is doing, then you probably don't need floating point and can choose reasonable units (fixed point) or, even better, integer units (the classical example is representing money in integral numbers of pennies).
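To make the pennies example concrete, a tiny sketch in C (the amounts are just illustrative):

    #include <stdio.h>

    int main(void) {
        /* 0.10 and 0.20 have no exact binary representation, so the
           double sum is not exactly 0.30. */
        double dollars = 0.10 + 0.20;
        printf("%.17f\n", dollars);    /* 0.30000000000000004 */

        /* Stored as integral pennies, the same sum is exact. */
        long cents = 10 + 20;
        printf("%ld cents\n", cents);  /* 30 cents */
        return 0;
    }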
Floating point is difficult because precision is silently lost. Operations are therefore not associative. For example, with 32-bit single floats, 2.0^24 + 1.0 - 2.0^24 might end up as 0.0 or 1.0, depending on how the terms are grouped. You have to be careful when using floating point. If you don't know the rough relative magnitudes of the numbers you are dealing with, then you have an interesting source of bugs.
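Here is that example as a small C program (the parentheses force the two evaluation orders; C compilers respect them unless told otherwise, e.g. by -ffast-math):

    #include <stdio.h>

    int main(void) {
        float big = 16777216.0f;  /* 2^24: above this, floats step by 2 */

        /* (2^24 + 1) - 2^24: the +1 rounds away, leaving 0.0. */
        printf("%f\n", (big + 1.0f) - big);

        /* 2^24 + (1 - 2^24): every intermediate is exact, giving 1.0. */
        printf("%f\n", big + (1.0f - big));
        return 0;
    }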
If you do know the rough magnitudes, then fixed point could be your friend. Consider it seriously. Integer operations are much faster, more natural and often make more sense. You need to figure out what scale your data is on anyway if you're using floating point!
PS. See another piece on why it's bad to substitute doubles for integers.
Ratios are always good, too, especially when done with a pair of unlimited-precision integers.
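For example, in C with GMP's mpq_t (one way to do it; any bignum rational type would work):

    #include <stdio.h>
    #include <gmp.h>

    int main(void) {
        mpq_t a, b, sum;
        mpq_inits(a, b, sum, NULL);
        mpq_set_ui(a, 1, 10);      /* 1/10 */
        mpq_set_ui(b, 2, 10);      /* 2/10 */
        mpq_canonicalize(b);       /* reduce to 1/5 before arithmetic */
        mpq_add(sum, a, b);
        gmp_printf("%Qd\n", sum);  /* 3/10, exactly */
        mpq_clears(a, b, sum, NULL);
        return 0;
    }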
Posted 2009-05-26 12:55:02 GMT by Curt Sampson <cjs@cynic.net>
And let's not forget that the way your language treats floating-point values (whether it be Python, Ruby, PHP, Java, Lisp, C, C++ or ?) is not necessarily how your database will treat the same value (whether it is PostgreSQL, MySQL, SQLite, Oracle or ?).
I have found that quite often floating point numbers tend to be the source of more bugs than just about anything else.
Posted 2009-05-26 00:55:02 GMT by Sjan
I would counter with "Real men who use floating point understand how they work". Floating point ≠ real number, deal with it. Likewise, int32 ≠ integer. Does the possibility of integer overflow and other imperfections in the int32/64 model for integers similarly dictate that we should only program using arbitrary precision numbers? What about the fact that those representations tend to take up more and more memory per number as more computations are performed on them? Or the fact that even the most precisely known physical constants are only known to at most a dozen or so decimal places?
May I suggest reading http://cr.yp.to/2005-590/goldberg.pdf — "What Every Computer Scientist Should Know About Floating Point Arithmetic" rather than making rather overstated and silly blanket claims about the "badness" of floating point arithmetic.
Posted 2011-02-22 00:22:15 GMT by Stefan Karpinski
Your other post on floating point arithmetic is much more reasonable but this apparent vendetta against floating point arithmetic because it fails to match the perfection of the Platonic real numbers is still quite unreasonable, imo. Floating point numbers are incredibly useful, their operations are almost as efficient as integers these days, and it is fairly rare in my experience to actually encounter a nasty bug from the imperfection of floating point representation. Perhaps your case could be made more convincingly with some examples of how it can go awry and how that's actually caused bugs that you've had problems with. The paper I linked to above has some examples.
Posted 2011-02-22 00:35:38 GMT by Stefan Karpinski
You might find Q-Floats ( http://en.wikipedia.org/wiki/Q_(number_format) ) interesting as alternatives to IEEE floats.
Posted 2011-02-22 07:46:28 GMT by Jonathan Dickinson
Which storage format to choose for a value is, ideally, based on the type of number required for what it represents and how it will be used. I've found IEEE 754 doubles useful for storing large numbers (around 10^100) where only a few digits of accuracy are needed, and with SIMD SSE and SSE2 instructions, great speed is achieved. Most new GPU/video cards have dozens to 200 or so SSE2-capable processors in them. True to the article's point, though, these very same GPUs have hundreds to thousands of MMX/integer stream processors in them. Could a large amount of floating point data be processed faster this way? I've found the overhead of working with SSE2 often diminishes the overall speed gain. The time to convert floats to integers can be a challenge too, but here SSE is a speedy friend, with the older FPU instructions second best.
Thanks for the tip on Q-Floats ! I'll add it to my list of math tools.
Posted 2011-02-22 09:26:18 GMT by F. Zacharias
FPU, NIH, WTF?
Posted 2011-02-22 09:54:25 GMT by Anonymous from 84.45.225.192
Speaking of bugs, did you even proofread your post? I'm usually not a grammar nazi, but there's so much wrong with this post, it negates a lot of good points I think you are trying to make.
"Chuck Moore is the hero of NIH syndrome." <- What the hell is NIH syndrome? You never explained.
"I just came across one his page of bugbears." <- Is that sentence even English?
"One is floating point computation." <- One what?
"...don't probably don't need floating point..." <- I don't probably don't need don't no don't grammar!
"Floating point is difficult because precision can be lost." <- Did you mean floating point is a bad design decision?
"Difficult" doesn't seem to be the right word here.
"Operations are not therefore not associative." <- Some word is missing here. I have no idea what it should be.
Posted 2011-02-22 14:38:35 GMT by Anonymous
NIH == Not Invented Here (Duh)
Posted 2011-02-22 17:10:52 GMT by Anonymous from 129.170.26.62
Floating point is not bad in itself. However, it has limited applicability in most business applications. Anyone who has done any significant work with monetary values is aware of the inherent problems of using floating point. I'm not referring to hitting the occasional FPU flaw, but rather to rounding issues and their consequences. If you think rounding is a simple matter, it may be because you deal with it frequently - or, more likely, because you aren't well versed in it.
As the author says - "The superficial ease of floating point is seductive but leads to errors." These errors arise from not fully considering the side effects (such as precision and rounding issues) and special cases (small residual values, non-associative behavior, non-equality due to fractional differences, etc.).
It is all about choosing the correct data type for the task at hand, and understanding its behavior - as well as the behavior of the compiler and run-time system during calculations. And too often, floating point is chosen when it is NOT the correct choice, and NOT fully understood.
@Anonymous:
NIH = Not Invented Here is a relatively common abbreviation.
bugbear => you'd be surprised at some of the words that are in use but not in the dictionary... But this one is. Merriam-Webster includes this definition: "a continuing source of irritation : PROBLEM".
One is floating point computation => If you are going to attack grammar in a blog post, you should realize that the word "one" can refer to one of a group previously specified. In this case it reflects a "bugbear", which was the last word of the previous sentence.
Finding typos in a blog post is no treasure hunt. They're a dime a dozen. And sure, immature language will reflect on an author's credibility. In my book, though, artifacts of modified sentence structure (for example the same word in two places) don't necessarily correlate with credibility. I had no problem with the language in this blog post. But then, I recognized the basic content to be reasonable - in context.
Posted 2011-02-22 17:28:12 GMT by Tore Bostrup
Fixed-point is no cure for inexactness or lack of associativity. Some operations that are inexact in floating-point are exact in fixed-point, but others are just as bad.
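For instance, a third is no more representable in binary fixed point than in binary floating point. A quick sketch, assuming a Q16.16 format:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Q16.16: one third truncates to the nearest lower step. */
        int32_t third = (1 << 16) / 3;          /* 21845 */
        printf("%.8f\n", third / 65536.0);      /* 0.33332825, not 1/3 */
        printf("%.8f\n", 3 * third / 65536.0);  /* 0.99998474, not 1.0 */
        return 0;
    }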
Posted 2011-02-22 17:49:37 GMT by Anonymous from 97.113.154.173
Any good alternatives to floating point? What do real men use instead? I've implemented Q numbers (Q15) for a past project... Are there better/quicker/neater alternatives? (where one assumes we're not dealing with a money type/class... But instead where relative magnitude between variables needs to be maintained)
Posted 2011-03-09 17:53:20 GMT by Anonymous from 193.13.65.11