Re: A question regarding float and double
Quote:
Originally Posted by
2kaud
Please can some kind guru explain what is happening here as I don't understand.
The only change being the highlighted line which should have no effect on the calculations.
In a release build...
The one example optimizes differently than the other.
In the first example it's possible to keep the intermediate result in the FPU (as a 10-byte extended-precision floating point).
In the 2nd example this can't be done, so the intermediate is stored in memory (as an 8-byte double).
So even though the change "should have no effect on the calculations", such assumptions can't be made about floating-point intermediates.
Re: A question regarding float and double
Quote:
Originally Posted by
razzle
As you can see I've changed the accumulating d calculation to a direct multiplication with powers of 10 because otherwise small errors may accumulate and become big.
Better, but still not ideal. I wouldn't call it "fixed", but within the confines of that particular test, yes, maybe.
It's also still limited to working within a set range of values.
This may be adequate, or it may not be. There are alternative approaches to the problem, but it depends on what the real question is: is it just a matter of "how many decimals after the dot", or "need to know the mantissa in decimal", or something else entirely?
Quote:
There are two major no-nos when doing floating point:
there are more than 2 :p
Quote:
1. don't compare for equality, and
2. don't accumulate constants.
1. Comparing for equality is possible, depending on the problem area.
2. This isn't correct. It's more "beware of accumulating errors".
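On point 1: a minimal sketch of a relative-tolerance comparison, which is one common way to compare computed floating-point values. The tolerance values here are illustrative placeholders; the right tolerances are a per-problem choice, not universal constants:

```cpp
#include <algorithm>
#include <cmath>

// Compare two doubles with a relative tolerance, falling back to an
// absolute tolerance near zero. Tolerances are problem-specific choices.
bool nearly_equal(double a, double b,
                  double rel_tol = 1e-9, double abs_tol = 1e-12) {
    double diff = std::fabs(a - b);
    if (diff <= abs_tol)                       // handles values near zero
        return true;
    double largest = std::max(std::fabs(a), std::fabs(b));
    return diff <= rel_tol * largest;          // scale tolerance to magnitude
}
```

And exact `==` remains perfectly fine in some problem areas, e.g. checking a value that was assigned (a sentinel) rather than computed.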
What many people new to floating point don't realise is that floating-point values are NOT decimally accurate. 0.1 is not "really" 0.1; it's an approximation, in the same way that 0.33 is an approximation of 1/3. This approximation means there is a slight "error" between the decimal result you SEE displayed and the actual value stored.
Calculating with these approximations means the relative error can grow with every calculation.
To explain in decimal: while 0.33 is only off by 1% from the actual value of 1/3, 0.33 * 0.33 = 0.1089 (as opposed to 1/3 * 1/3 = 1/9 = 0.1111111111...) is 2% off the actual value. A single multiplication doubled the relative error.
For an addition in this case, the error stays the same: 0.33 + 0.33 = 0.66, which is 1% off 2/3 (though addition won't always keep the relative error the same).
Depending on the calculations done, you can increase the relative error (could be a lot, could be minor), keep it stable, or even reduce it. If you use calculated intermediates to do further calculations, you can run into a case where the accumulated error becomes so great that it starts to affect your results.
If you want accuracy, and especially decimal accuracy, don't use floating point. It's unsuited for "accuracy" problems (accounting, finance, ...). If you're in that case, you should use a fixed-point lib, a BCD lib or a string-based lib; you'll have to make one yourself or find one, as none comes with C++ by default.
Floating point is good for stuff like 3D modeling, CAD, mathematical problems and statistics, where precision is important but accuracy isn't. That isn't to say accuracy issues can't creep into these types of apps as well, depending on what calculations are being done.
This isn't a C++-specific issue; it's present in any language that's based on IEEE floating point.
Re: A question regarding float and double
Quote:
Originally Posted by
OReubens
In a release build...
The one example optimizes differently than the other.
In the first example it's possible to keep the intermediate result in the FPU (as a 10-byte extended-precision floating point).
In the 2nd example this can't be done, so the intermediate is stored in memory (as an 8-byte double).
So even though the change "should have no effect on the calculations", such assumptions can't be made about floating-point intermediates.
Thanks. :thumb: